Machines are Linux and Windows virtual machines with persistent storage, GPU options, and free unlimited bandwidth. They’re designed for high-performance computing (HPC) workloads.
NVIDIA H100 is a GPU built on NVIDIA's Hopper GPU architecture, designed for large-scale artificial intelligence and high-performance computing workloads. H100 machines feature 4x100 GigE connections for private traffic. For detailed hardware and performance specs, see the machine types specifications page.
NVIDIA H100x8 machines support NVLink, NVIDIA's high-speed interconnect between GPUs; NVIDIA H100x1 machines, having a single GPU, do not.
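From inside a machine, NVLink connectivity can be checked with `nvidia-smi`, which ships with the NVIDIA driver. A minimal sketch (the guard handles hosts without an NVIDIA driver installed):

```shell
# Inspect GPU interconnects; NVLink connections appear as NV1, NV2, ... in the
# topology matrix. On a single-GPU machine (H100x1) no NVLink entries appear.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi topo -m          # GPU-to-GPU interconnect matrix
    nvidia-smi nvlink --status  # per-link NVLink state and speed
else
    echo "nvidia-smi not found: no NVIDIA driver on this host"
fi
```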
Paperspace supports the NVIDIA H100 in two configurations: a single chip (NVIDIA H100x1) and eight chips (NVIDIA H100x8). NVIDIA H100 is available only in the NY2 region. To use NVIDIA H100s, send a request to Paperspace support. NVIDIA H100 is offered on-demand, meaning it's available only if capacity allows and Paperspace approves your request.
To get started with a pre-configured environment, consider using the ML-in-a-Box 22.04 Template, which is designed for machine learning workloads on NVIDIA H100.
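Once access is granted, a machine can be provisioned from the Paperspace CLI. The sketch below only echoes the command as a dry run, since actually running it requires an API key and approved H100 capacity; the `--machineType` string "H100x8" and the flag set are assumptions to verify against your installed CLI and the machine types specifications page.

```shell
# Dry-run sketch: prints the provisioning command instead of executing it.
# "H100x8" as the machine-type string is an assumption; confirm the exact
# value (and the ML-in-a-Box 22.04 template ID) in the Paperspace console.
cat <<'CMD'
paperspace machines create \
  --region "East Coast (NY2)" \
  --machineType "H100x8" \
  --billingType hourly \
  --machineName "h100-mlbox"
CMD
```

Dropping the `cat` heredoc and adding your `--apiKey` (plus the chosen template) turns the sketch into a real provisioning call.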