# Nvidia H100
Compare prices for Nvidia H100 across cloud providers.
## Pricing
Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Link |
---|---|---|---|---|---|---|
Lambda Labs | 1x H100 | 80GB | 26 | 200GB | $2.49 | Launch |
RunPod | 1x H100 | 80GB | 16 | 176GB | $2.69 | |
Scaleway | 1x H100 | 80GB | 24 | 240GB | $2.76 | |
Build AI | 1x H100 | 80GB | 26 | 225GB | $2.79 | |
FluidStack | 1x H100 | 80GB | 48 | 256GB | $2.89 | |
Massed Compute | 1x H100 | 80GB | 20 | 128GB | $2.98 | Launch |
RunPod | 1x H100 | 80GB | 24 | 125GB | $2.99 | |
OVH | 1x H100 | 80GB | 30 | 380GB | $2.99 | |
DataCrunch | 1x H100 | 80GB | 30 | 120GB | $3.17 | Launch |
Oblivus | 1x H100 | 80GB | 28 | 180GB | $3.26 | Launch |
Koyeb | 1x H100 | 80GB | 15 | 180GB | $3.30 | |
Build AI | 1x H100 | 80GB | 26 | 225GB | $3.85 | |
Scaleway | 2x H100 | 160GB | 48 | 480GB | $5.51 | |
Paperspace | 1x H100 | 80GB | 20 | 256GB | $5.95 | |
OVH | 2x H100 | 160GB | 60 | 760GB | $5.98 | |
DataCrunch | 2x H100 | 160GB | 90 | 240GB | $6.48 | Reserve |
DigitalOcean | 1x H100 | 80GB | 20 | 240GB | $6.74 | |
Contabo | 4x H100 | 320GB | 64 | 512GB | $10.63 | |
OVH | 4x H100 | 320GB | 120 | 1520GB | $11.97 | |
DataCrunch | 4x H100 | 320GB | 180 | 480GB | $12.96 | Reserve |
Vultr | 8x H100 | 640GB | 224 | 2048GB | $19.99 | |
Lambda Labs | 8x H100 | 640GB | 208 | 1800GB | $27.92 | Launch |
DigitalOcean | 8x H100 | 640GB | 160 | 1920GB | $47.60 | |
AWS | 8x H100 | 640GB | 192 | 2048GB | $98.32 | |
Civo | 1x H100 | 80GB | -- | -- | On Request | |
Note: Prices are subject to change and may vary by region and other factors not listed here. I've included links to launch GPUs through Shadeform (our sponsor) so you can check whether they're available right now. There are no fees to use their service, and I don't earn a commission when you use these links.
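Multi-GPU instances look expensive at first glance, but the fairer comparison is price per GPU-hour. A minimal sketch, using a handful of rows copied from the table above (provider names and prices as listed at the time of writing):

```python
# Normalize hourly prices to price per GPU-hour so multi-GPU
# instances can be compared directly with single-GPU ones.
# (provider, gpu_count, price_per_hour) rows from the pricing table above.
offers = [
    ("Lambda Labs", 1, 2.49),
    ("Scaleway", 2, 5.51),
    ("OVH", 4, 11.97),
    ("Lambda Labs", 8, 27.92),
    ("AWS", 8, 98.32),
]

# Sort by effective per-GPU price, cheapest first.
for provider, gpus, price in sorted(offers, key=lambda o: o[2] / o[1]):
    print(f"{provider:12s} {gpus}x H100  ${price / gpus:.2f}/GPU/h")
```

By this measure Lambda Labs' 8x node ($27.92 / 8 ≈ $3.49 per GPU-hour) is only modestly more expensive per GPU than its 1x instance, while the AWS 8x node works out to roughly $12.29 per GPU-hour.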
## Specs
Spec | H100 SXM | H100 PCIe | H100 NVL |
---|---|---|---|
FP64 | 34 TFLOPS | 26 TFLOPS | 68 TFLOPS |
FP64 Tensor Core | 67 TFLOPS | 51 TFLOPS | 134 TFLOPS |
FP32 | 67 TFLOPS | 51 TFLOPS | 134 TFLOPS |
TF32 Tensor Core | 989 TFLOPS | 756 TFLOPS | 1,979 TFLOPS |
BFLOAT16 Tensor Core | 1,979 TFLOPS | 1,513 TFLOPS | 3,958 TFLOPS |
FP16 Tensor Core | 1,979 TFLOPS | 1,513 TFLOPS | 3,958 TFLOPS |
FP8 Tensor Core | 3,958 TFLOPS | 3,026 TFLOPS | 7,916 TFLOPS |
INT8 Tensor Core | 3,958 TOPS | 3,026 TOPS | 7,916 TOPS |
GPU Memory | 80GB | 80GB | 188GB |
GPU Memory Bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s |
Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
Max Thermal Design Power (TDP) | Up to 700W | 300-350W | 2x 350-400W |
Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
Form Factor | SXM | PCIe, dual-slot, air-cooled | 2x PCIe, dual-slot, air-cooled |
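One way to read the throughput and bandwidth rows together: dividing peak FP16 tensor throughput by memory bandwidth gives the arithmetic intensity (FLOPs per byte) a kernel needs before it becomes compute-bound rather than bandwidth-bound. A rough roofline-style sketch using the figures from the table above (note that Nvidia's tensor-core TFLOPS figures are typically quoted with sparsity):

```python
# Roofline balance point: FLOPs per byte a kernel must perform
# for peak tensor throughput, rather than memory bandwidth, to be
# the limit. Figures copied from the specs table above.
variants = {
    #             FP16 Tensor TFLOPS, memory bandwidth TB/s
    "H100 SXM":  (1979, 3.35),
    "H100 PCIe": (1513, 2.0),
    "H100 NVL":  (3958, 7.8),
}

for name, (tflops, tbps) in variants.items():
    # TFLOP/s divided by TB/s reduces to FLOPs per byte.
    flops_per_byte = tflops / tbps
    print(f"{name:10s} ~{flops_per_byte:.0f} FLOPs/byte to be compute-bound")
```

All three variants land in the same ballpark (roughly 500-750 FLOPs/byte), which is why memory-bandwidth-heavy workloads like LLM inference decoding rarely approach the headline TFLOPS numbers.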
You can find the official datasheet here.
Looking for alternatives? Compare Nvidia H100 against other GPUs.