Machines are Linux and Windows virtual machines with persistent storage, GPU options, and free unlimited bandwidth. They’re designed for high-performance computing (HPC) workloads.
These machines are Intel CPU machines with a 50 GB SSD by default. You can resize the machine's storage up to 2 TB.
Name | vCPUs | RAM (GB) | Operating System |
---|---|---|---|
C1 | 1 | 0.5 | Linux |
C3 | 2 | 2 | Linux |
C4 | 2 | 4 | Windows, Linux |
C5 | 4 | 8 | Windows, Linux |
C6 | 8 | 16 | Windows, Linux |
C7 | 12 | 30 | Windows, Linux |
C8 | 16 | 60 | Windows, Linux |
C9 | 24 | 120 | Windows, Linux |
C10 | 32 | 244 | Windows, Linux |
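If you want to confirm that a running Linux machine matches the plan you selected, a minimal sketch like the following works with only the Python standard library. It is Linux-only because it reads `/proc/meminfo`; the function name `machine_summary` is just illustrative.

```python
# Minimal sketch (Linux only): report a machine's vCPU count, RAM, and root
# volume size so they can be compared against the table above
# (e.g. C5: 4 vCPUs, 8 GB RAM, 50 GB SSD by default).
import os
import shutil


def machine_summary():
    vcpus = os.cpu_count()

    # Total memory, reported in kB by /proc/meminfo, converted to GB.
    with open("/proc/meminfo") as f:
        mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
    ram_gb = mem_kb / 1024 / 1024

    # Root volume size in GB; 50 GB by default, resizable up to 2 TB.
    disk_gb = shutil.disk_usage("/").total / 1024**3

    print(f"vCPUs: {vcpus}, RAM: {ram_gb:.1f} GB, root volume: {disk_gb:.0f} GB")


if __name__ == "__main__":
    machine_summary()
```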
These machines are NVIDIA GPU machines with a 50 GB SSD by default. You can resize the machine's storage up to 2 TB.
Name | GPU Memory (GB) | vCPUs | CPU RAM (GB) | Operating System |
---|---|---|---|---|
GPU+ (M4000) | 8 | 8 | 30 | Windows, Linux |
P4000 | 8 | 8 | 30 | Windows, Linux |
P5000 | 16 | 8 | 30 | Windows, Linux |
P6000 | 24 | 8 | 30 | Windows, Linux |
RTX4000 | 8 | 8 | 30 | Windows, Linux |
RTX5000 | 16 | 8 | 30 | Windows, Linux |
A4000 | 16 | 8 | 45 | Windows, Linux |
A5000 | 24 | 8 | 45 | Windows, Linux |
A6000 | 48 | 8 | 45 | Windows, Linux |
V100 | 16 | 8 | 30 | Linux |
V100-32G | 32 | 8 | 30 | Linux |
A100 | 40 | 12 | 90 | Linux |
A100-80G | 80 | 12 | 90 | Linux |
H100 | 80 | 20 | 250 | Linux |
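To check which GPU a machine exposes and how much GPU memory it has, you can query `nvidia-smi`, which ships with the NVIDIA driver on these GPU machines. A minimal Python sketch, assuming `nvidia-smi` is on the PATH:

```python
# Minimal sketch: query the GPU model and memory with nvidia-smi and compare
# the output against the GPU machine table above.
import subprocess


def gpu_inventory():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.strip().splitlines():
        name, memory = (field.strip() for field in line.split(","))
        print(f"GPU: {name}, memory: {memory}")


if __name__ == "__main__":
    gpu_inventory()
```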
The following table displays the NVIDIA H100 hardware specs in single-GPU (x1) and eight-GPU (x8) configurations.
Name | GPU Memory (GB) | vCPUs | CPU RAM (GB) | NVLink Support | GPU Interconnect Speeds |
---|---|---|---|---|---|
NVIDIA H100x1 | 80 | 20 | 250 | No | N/A |
NVIDIA H100x8 | 640 | 128 | 2048 | Yes | 3.2 Tb/s |
These machines are multi-GPU variants of the NVIDIA GPU machines, with up to 8 GPUs per machine. The V100-32Gx2, V100-32Gx4, A100-80Gx8, and NVIDIA H100x8 configurations offer NVLink support.
Machine | GPUs | Operating System |
---|---|---|
GPU+ | 2 | Linux |
GPU+ | 4 | Linux |
P4000 | 2 | Linux |
P4000 | 4 | Linux |
P5000 | 2 | Linux |
P5000 | 4 | Linux |
P6000 | 2 | Linux |
P6000 | 4 | Linux |
RTX4000 | 2 | Linux |
RTX4000 | 4 | Linux |
RTX5000 | 2 | Linux |
RTX5000 | 4 | Linux |
V100-32G | 2 | Linux |
V100-32G | 4 | Linux |
A4000 | 2 | Linux |
A4000 | 4 | Linux |
A5000 | 2 | Linux |
A5000 | 4 | Linux |
A6000 | 2 | Linux |
A6000 | 4 | Linux |
A100 | 8 | Linux |
A100-80G | 8 | Linux |
H100 | 8 | Linux |
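On a multi-GPU machine you can check the GPU count and whether GPUs are connected over NVLink (shown as `NV#` entries) or PCIe by printing the topology matrix. A rough sketch, again assuming `nvidia-smi` is available:

```python
# Rough sketch for a multi-GPU machine: count the attached GPUs and print the
# GPU-to-GPU interconnect topology.
import subprocess


def multi_gpu_report():
    gpus = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()
    print(f"{len(gpus)} GPU(s) detected:")
    for line in gpus:
        print(f"  {line.strip()}")

    # 'nvidia-smi topo -m' prints a matrix of link types between GPUs;
    # NV# entries indicate NVLink (e.g. on A100-80Gx8 or NVIDIA H100x8),
    # while PHB/PIX/SYS entries indicate PCIe paths.
    topo = subprocess.run(["nvidia-smi", "topo", "-m"],
                          capture_output=True, text=True, check=True)
    print(topo.stdout)


if __name__ == "__main__":
    multi_gpu_report()
```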
Name | Generation | CUDA Cores | GPU Memory | Memory Bandwidth | Performance (TFLOPS) | Performance with Tensor Core (TFLOPS) | vCPUs | CPU RAM (GB) |
---|---|---|---|---|---|---|---|---|
GPU+ | Maxwell | 1,664 | 8 GB | 192 GB/s | 2.6 | N/A | 8 | 30 |
P4000 | Pascal | 1,792 | 8 GB | 243 GB/s | 5.3 | N/A | 8 | 30 |
P5000 | Pascal | 2,560 | 16 GB | 288 GB/s | 9.0 | N/A | 8 | 30 |
P6000 | Pascal | 3,840 | 24 GB | 432 GB/s | 12.0 | N/A | 8 | 30 |
V100 | Volta | 5,120 | 16 GB | 900 GB/s | 14.0 | 112 (FP16) | 8 | 30 |
V100-32G | Volta | 5,120 | 32 GB | 900 GB/s | 15.7 | 125 (FP16) | 8 | 30 |
RTX4000 | Turing | 2,304 | 8 GB | 416 GB/s | 7.1 | 57 (FP32) | 8 | 30 |
RTX5000 | Turing | 3,072 | 16 GB | 448 GB/s | 11.2 | 89 (FP32) | 8 | 30 |
A4000 | Ampere | 6,144 | 16 GB | 448 GB/s | 19.2 | 153 (FP16) | 8 | 45 |
A5000 | Ampere | 8,192 | 24 GB | 768 GB/s | 27.8 | 222 (FP16) | 8 | 45 |
A6000 | Ampere | 10,752 | 48 GB | 768 GB/s | 38.7 | 309 (FP16) | 8 | 45 |
A100 | Ampere | 6,912 | 40 GB HBM2 | 1,555 GB/s | 19.5 | 156 / 312 (FP32/16) | 12 | 90 |
A100-80G | Ampere | 6,912 | 80 GB HBM2 | 1,555 GB/s | 19.5 | 312 / 624 (FP32/16) | 12 | 90 |
H100 | Hopper | 16,896 | 80 GB HBM3 | 3,350 GB/s | 67.0 | 989 (TF32), 1,979 (BFLOAT16/FP16), 3,958 (FP8/INT8) | 20 | 250 |
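The peak figures above assume ideal conditions, and several of the Tensor Core figures assume structured sparsity (see the note at the end of this section), so measured dense throughput will be lower. A rough way to see what a machine actually sustains is to time a large FP16 matrix multiply. This sketch assumes PyTorch with CUDA support is installed, which is not necessarily part of the base machine image.

```python
# Rough sketch, assuming PyTorch with CUDA is installed: time a large FP16
# matrix multiply and convert it to TFLOPS for comparison with the peak
# figures in the table above.
import time

import torch


def measured_tflops(n=8192, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)

    for _ in range(3):                      # warm-up iterations
        torch.matmul(a, b)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    flops = 2 * n**3 * iters                # multiply-adds in an n x n matmul
    return flops / elapsed / 1e12


if __name__ == "__main__":
    print(f"Measured dense FP16 matmul throughput: {measured_tflops():.1f} TFLOPS")
```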
The table below displays the NVIDIA H100 performance specs, with the A100-80G included for comparison.
Name | Generation | Type | FP32 CUDA Cores | GPU Memory | Memory Bandwidth | FP64 Tensor Core or FP32 | TF32 Tensor Core | BFLOAT16 or FP16 Tensor Core | FP8 Tensor Core or INT8 Tensor Core |
---|---|---|---|---|---|---|---|---|---|
NVIDIA H100x1 | Hopper | SXM5 | 16,896 | 80 GB HBM3 | 3.35 TB/s | 67 TFLOPS | 989 TFLOPS | 1,979 TFLOPS | 3,958 TFLOPS/TOPS |
NVIDIA A100-80Gx1 | Ampere | SXM4 | 6,912 | 80 GB HBM2 | 1.555 TB/s | 19.5 TFLOPS | 312 TFLOPS | 624 TFLOPS | None / 1,248 TOPS |
The TF32, BFLOAT16, FP16, FP8, and INT8 figures in this table assume sparsity, meaning the data contains matrices with mostly zeros. NVIDIA's H100 Tensor Core GPU data sheet outlines these data specifications.