Nvidia P100
Compare prices for Nvidia P100 across cloud providers
March 14, 2025 (updated)
Launched in 2016, the Nvidia P100 is built on the Pascal architecture and optimized for AI and HPC workloads. Its design emphasizes energy efficiency, making it suitable for distributed computing and training mid-sized AI models.
| Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Source |
|---|---|---|---|---|---|---|
| | 1x P100 | 16GB | 12 | 56GB | $1.17 | Source |
| | 1x P100 | 16GB | 10 | 42GB | $1.41 | Source |
| | 2x P100 | 32GB | 16 | 90GB | $1.71 | Source |
| | 1x P100 | 16GB | 6 | 112GB | $2.07 | Source |
| | 3x P100 | 48GB | 24 | 120GB | $2.25 | Source |
| | 4x P100 | 64GB | 48 | 225GB | $2.82 | Source |
| | 2x P100 | 32GB | 12 | 224GB | $4.14 | Source |
| | 4x P100 | 64GB | 24 | 448GB | $8.28 | Source |
| | 4x P100 | 64GB | 24 | 448GB | $9.11 | Source |
Note: Prices are subject to change and may vary by region and other factors not listed here. For some GPUs, I include links to Shadeform (the sponsor) so you can check if they're available right now. I don’t earn a commission when you click on these links, but their monthly sponsorship helps me keep the site running.
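Because the listed instances bundle different GPU counts, the headline hourly price can be misleading. A quick way to compare offers is to normalize to a per-GPU hourly rate. The sketch below does exactly that for the prices in the table above (provider names are omitted, as in the table):

```python
# (gpu_count, price_per_hour) pairs taken from the price table above.
offers = [
    (1, 1.17), (1, 1.41), (2, 1.71), (1, 2.07),
    (3, 2.25), (4, 2.82), (2, 4.14), (4, 8.28), (4, 9.11),
]

def price_per_gpu(gpus: int, price_per_hour: float) -> float:
    """Hourly price normalized to a single GPU."""
    return price_per_hour / gpus

# Sort offers from cheapest to most expensive per GPU.
per_gpu = sorted(price_per_gpu(g, p) for g, p in offers)
print(f"cheapest per-GPU rate: ${per_gpu[0]:.3f}/h")
# The 4x P100 instance at $2.82/h works out cheapest per GPU ($0.705/GPU/h),
# while the 1x instances all cost more per GPU than any multi-GPU offer here.
```

Note that this ignores vCPU and RAM differences, which also factor into the price.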
## Nvidia P100 specs
| Specification | Value |
|---|---|
| GPU Architecture | NVIDIA Pascal |
| NVIDIA CUDA® Cores | 3,584 |
| Double-Precision Performance | 4.7 TeraFLOPS |
| Single-Precision Performance | 9.3 TeraFLOPS |
| Half-Precision Performance | 18.7 TeraFLOPS |
| GPU Memory | 16GB CoWoS HBM2 at 732 GB/s or 12GB CoWoS HBM2 at 549 GB/s |
| System Interface | PCIe Gen3 |
| Max Power Consumption | 250 W |
| ECC | Yes |
| Thermal Solution | Passive |
| Form Factor | PCIe Full Height/Length |
| Compute APIs | CUDA, DirectCompute, OpenCL™, OpenACC |
Source: official Nvidia P100 datasheet.
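Combining the datasheet peak throughput with the price table gives a rough cost-efficiency figure. The back-of-envelope calculation below uses the spec table's single- and half-precision numbers and the cheapest per-GPU rate from the price table (the 4x P100 instance at $2.82/h); actual throughput will be well below these theoretical peaks:

```python
# Peak throughput figures from the spec table above (PCIe variant).
FP32_TFLOPS = 9.3
FP16_TFLOPS = 18.7

# Cheapest per-GPU hourly rate from the price table: 4x P100 at $2.82/h.
price_per_gpu_hour = 2.82 / 4  # $0.705/GPU/h

print(f"FP32: {FP32_TFLOPS / price_per_gpu_hour:.1f} peak TFLOPS per $/h")
print(f"FP16: {FP16_TFLOPS / price_per_gpu_hour:.1f} peak TFLOPS per $/h")
```

Roughly 13 peak FP32 TFLOPS per dollar-hour at the cheapest rate; useful mainly as a yardstick when comparing the P100 against newer GPUs priced on the same per-hour basis.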