Nvidia B100
Compare prices for Nvidia B100 across cloud providers
Nov. 27, 2024 (updated)
The Nvidia B100 is a data-center GPU built on Nvidia's Blackwell architecture, aimed at large-scale AI training, inference, and HPC workloads. It targets research institutions and enterprises that need multi-GPU scalability and reliability, and Nvidia positions it as a top-tier option for computational research.
Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Source
---|---|---|---|---|---|---
-- | 1x B100 | 192GB | -- | -- | On Request | Source
Note: Prices are subject to change and may vary by region and other factors not listed here. For some GPUs, I include links to Shadeform (the sponsor) so you can check if they're available right now. I don’t earn a commission when you click on these links, but their monthly sponsorship helps me keep the site running.
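Since listed prices are hourly, one quick way to compare offers is to project an effective monthly cost. A minimal sketch; the rate used below is a hypothetical placeholder, not a quote from any provider:

```python
# Project monthly cost from an hourly GPU rate.
# The example rate is a hypothetical placeholder, not a real quote.
HOURS_PER_MONTH = 24 * 30  # 720-hour month, a common billing approximation

def monthly_cost(hourly_rate: float, gpus: int = 1, utilization: float = 1.0) -> float:
    """Estimated monthly spend for `gpus` instances billed per GPU-hour."""
    return hourly_rate * gpus * HOURS_PER_MONTH * utilization

# e.g. a hypothetical $4.00/GPU-hour rate on a single B100 at full utilization:
print(f"${monthly_cost(4.00):,.2f}/month")  # $2,880.00/month
```

Factoring in utilization matters for on-demand instances you shut down between jobs; reserved pricing usually bills for the full month regardless.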
Nvidia B100 specs (8x SXM system)
Spec | Value
---|---
Form Factor | 8x NVIDIA B100 SXM
FP4 Tensor Core¹ | 112 PFLOPS
FP8/FP6 Tensor Core¹ | 56 PFLOPS
INT8 Tensor Core¹ | 56 POPS
FP16/BF16 Tensor Core¹ | 28 PFLOPS
TF32 Tensor Core¹ | 14 PFLOPS
FP32 | 480 TFLOPS
FP64 | 240 TFLOPS
FP64 Tensor Core | 240 TFLOPS
Memory | Up to 1.5TB
NVLink | Fifth generation
NVIDIA NVSwitch™ | Fourth generation
NVSwitch GPU-to-GPU Bandwidth | 1.8TB/s
Total Aggregate Bandwidth | 14.4TB/s
¹ With sparsity.
Source: official Nvidia B100 datasheet.
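The figures above are totals for the 8-GPU system, with tensor-core rows quoted with sparsity. A sketch of deriving per-GPU dense throughput, assuming structured sparsity doubles the dense rate (Nvidia's usual 2:4 sparsity convention):

```python
# Derive per-GPU dense throughput from the 8x B100 system totals above.
# Assumes structured (2:4) sparsity gives a 2x speedup over dense,
# so dense throughput is half the quoted sparse figure.
SYSTEM_GPUS = 8

# System totals from the specs table, in PFLOPS (sparse, tensor-core rows).
system_sparse_pflops = {
    "FP4 Tensor Core": 112,
    "FP8/FP6 Tensor Core": 56,
    "FP16/BF16 Tensor Core": 28,
    "TF32 Tensor Core": 14,
}

for name, total in system_sparse_pflops.items():
    per_gpu_sparse = total / SYSTEM_GPUS
    per_gpu_dense = per_gpu_sparse / 2  # assumed 2x from sparsity
    print(f"{name}: {per_gpu_sparse} PFLOPS sparse, "
          f"{per_gpu_dense} PFLOPS dense per GPU")
```

So a single B100 lands at roughly 14 PFLOPS of sparse FP4 (7 PFLOPS dense) under these assumptions; useful when comparing per-GPU cloud pricing against per-GPU throughput.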