Nvidia H100
Compare prices for Nvidia H100 across cloud providers
Dec. 12, 2024 (updated)
Launched in 2022, the Nvidia H100 is built on the Hopper architecture and targets AI and HPC workloads. It introduces the Transformer Engine, which improves the efficiency of large-model training, and Nvidia positions it as a solution for enterprise-scale AI development.
Nvidia H100 prices
Based on our data, the Nvidia H100 is available from the following cloud providers:
Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Link
---|---|---|---|---|---|---
Hyperstack | 1x H100 | 80GB | 28 | 180GB | $1.90 | Source |
Hyperstack | 1x H100 | 80GB | 31 | 180GB | $1.95 | Source |
Oblivus | 1x H100 | 80GB | 28 | 180GB | $1.98 | Launch |
Lambda Labs | 1x H100 | 80GB | 26 | 200GB | $2.49 | Launch |
CUDO Compute | 1x H100 | 94GB | 4 | 16GB | $2.56 | Source |
Scaleway | 1x H100 | 80GB | 24 | 240GB | $2.60 | Source |
DataCrunch | 1x H100 | 80GB | 30 | 120GB | $2.65 | Source |
RunPod | 1x H100 | 80GB | 24 | 188GB | $2.69 | Source |
Build AI | 1x H100 | 80GB | 26 | 225GB | $2.79 | Source |
FluidStack | 1x H100 | 80GB | 48 | 256GB | $2.89 | Launch |
Massed Compute | 1x H100 | 80GB | 20 | 128GB | $2.98 | Launch |
OVH | 1x H100 | 80GB | 30 | 380GB | $2.99 | Source |
RunPod | 1x H100 | 80GB | 16 | 125GB | $2.99 | Source |
Hyperstack | 1x H100 | 80GB | 24 | 240GB | $3.00 | Source |
CUDO Compute | 1x H100 | 80GB | 12 | 48GB | $3.18 | Launch |
Koyeb | 1x H100 | 80GB | 15 | 180GB | $3.30 | Source |
DigitalOcean | 1x H100 | 80GB | 20 | 240GB | $3.39 | Launch |
The Cloud Minders | 1x H100 | 80GB | 32 | 192GB | $3.53 | Source |
Nebius | 1x H100 | 80GB | 20 | 160GB | $3.55 | Launch |
Build AI | 1x H100 | 80GB | 26 | 225GB | $3.85 | Source |
The Cloud Minders | 1x H100 | 94GB | 32 | 192GB | $4.05 | Source |
The Cloud Minders | 1x H100 | 80GB | 24 | 256GB | $4.52 | Source |
Scaleway | 2x H100 | 160GB | 48 | 480GB | $5.19 | Source |
DataCrunch | 2x H100 | 160GB | 80 | 370GB | $5.30 | Source |
OVH | 2x H100 | 160GB | 60 | 760GB | $5.98 | Source |
Paperspace | 1x H100 | 80GB | 16 | 268GB | $5.99 | Launch |
Nebius | 2x H100 | 160GB | 40 | 320GB | $7.10 | Source |
Contabo | 4x H100 | 320GB | 64 | 512GB | $10.02 | Source |
DataCrunch | 4x H100 | 320GB | 176 | 740GB | $10.60 | Source |
OVH | 4x H100 | 320GB | 120 | 1520GB | $11.97 | Source |
Nebius | 4x H100 | 320GB | 80 | 640GB | $14.20 | Source |
Vultr | 8x H100 | 640GB | 112 | 2048GB | $18.40 | Source |
DigitalOcean | 8x H100 | 640GB | 160 | 1920GB | $23.92 | Launch |
Lambda Labs | 8x H100 | 640GB | 208 | 1800GB | $23.92 | Launch |
Nebius | 8x H100 | 640GB | 160 | 1280GB | $28.39 | Launch |
AWS | 8x H100 | 640GB | 192 | 2048GB | $98.32 | Source |
Civo | 1x H100 | 80GB | -- | -- | On Request | Source |
Note: Prices are subject to change and may vary by region and other factors not listed here. For some GPUs, I include links to Shadeform (the sponsor) so you can check whether they're available right now. I don't earn a commission when you click these links, but their monthly sponsorship helps keep the site running.
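When comparing the table above, note that multi-GPU instances (2x, 4x, 8x) are only comparable to single-GPU offers after dividing the hourly price by the GPU count. A minimal sketch of that normalization, using a hand-picked subset of the prices above:

```python
# Per-GPU hourly price comparison across H100 offerings.
# Prices are a subset of the table above (USD/hour, on-demand).
offers = [
    ("Hyperstack",  1, 1.90),
    ("Lambda Labs", 1, 2.49),
    ("Scaleway",    2, 5.19),
    ("DataCrunch",  4, 10.60),
    ("Vultr",       8, 18.40),
    ("AWS",         8, 98.32),
]

# Sort by effective per-GPU price, cheapest first.
for provider, gpus, price in sorted(offers, key=lambda o: o[2] / o[1]):
    print(f"{provider:12} {gpus}x H100  ${price / gpus:.2f}/GPU-hour")
```

On this subset, Vultr's 8x bundle ($2.30/GPU-hour) undercuts several single-GPU offers, which the raw instance prices obscure.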
Nvidia H100 specs
Spec | H100 SXM | H100 PCIe | H100 NVL
---|---|---|---
FP64 | 34 TFLOPS | 26 TFLOPS | 68 TFLOPS |
FP64 Tensor Core | 67 TFLOPS | 51 TFLOPS | 134 TFLOPS |
FP32 | 67 TFLOPS | 51 TFLOPS | 134 TFLOPS |
TF32 Tensor Core | 989 TFLOPS | 756 TFLOPS | 1,979 TFLOPS |
BFLOAT16 Tensor Core | 1,979 TFLOPS | 1,513 TFLOPS | 3,958 TFLOPS |
FP16 Tensor Core | 1,979 TFLOPS | 1,513 TFLOPS | 3,958 TFLOPS |
FP8 Tensor Core | 3,958 TFLOPS | 3,026 TFLOPS | 7,916 TFLOPS |
INT8 Tensor Core | 3,958 TOPS | 3,026 TOPS | 7,916 TOPS |
GPU Memory | 80GB | 80GB | 188GB |
GPU Memory Bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s |
Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
Max Thermal Design Power (TDP) | Up to 700W | 300-350W | 2x 350-400W |
Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
Form Factor | SXM | PCIe, dual-slot, air-cooled | 2x PCIe, dual-slot, air-cooled |
Source: the official Nvidia H100 datasheet. Tensor Core figures are peak values with sparsity; the H100 NVL column covers the two-GPU pair.
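To give the memory figures some practical context, here is a back-of-the-envelope sketch of how many model parameters fit in VRAM for weights-only inference. The bytes-per-parameter values and the no-overhead assumption (no KV cache, activations, or framework overhead) are illustrative simplifications, not Nvidia figures:

```python
# Rough upper bound on model size that fits in H100 VRAM,
# counting weights only (ignores KV cache, activations, overhead).
BYTES_PER_PARAM = {"FP16": 2, "FP8": 1}

for variant, vram_gb in [("H100 SXM/PCIe", 80), ("H100 NVL pair", 188)]:
    for fmt, nbytes in BYTES_PER_PARAM.items():
        max_params_b = vram_gb / nbytes  # billions of parameters (1 GB ~ 1e9 bytes)
        print(f"{variant}: ~{max_params_b:.0f}B params at {fmt}")
```

For example, a single 80GB card holds roughly a 40B-parameter model at FP16, or 80B at FP8, before accounting for any runtime memory.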