Nvidia P100
Compare prices for Nvidia P100 across cloud providers
June 12, 2025 (updated)
Launched in 2016, the Nvidia P100 is built on the Pascal architecture and optimized for AI and HPC workloads. Its design emphasizes energy efficiency, making it well suited to distributed computing and training mid-sized AI models.
Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Source |
---|---|---|---|---|---|---|
-- | 1x P100 | 16GB | 20 | 128GB | $0.28 | Source |
-- | 1x P100 | 16GB | 12 | 56GB | $1.17 | Source |
-- | 1x P100 | 16GB | 10 | 42GB | $1.42 | Source |
-- | 2x P100 | 32GB | 16 | 90GB | $1.71 | Source |
-- | 1x P100 | 16GB | 6 | 112GB | $2.07 | Source |
-- | 3x P100 | 48GB | 24 | 120GB | $2.25 | Source |
-- | 4x P100 | 64GB | 48 | 225GB | $2.82 | Source |
-- | 2x P100 | 32GB | 12 | 224GB | $4.14 | Source |
-- | 4x P100 | 64GB | 24 | 448GB | $8.28 | Source |
-- | 4x P100 | 64GB | 24 | 448GB | $9.11 | Source |
-- | 1x P100 | 16GB | -- | -- | On Request | Source |
-- | 2x P100 | 32GB | -- | -- | On Request | Source |
-- | 4x P100 | 64GB | -- | -- | On Request | Source |
Note: Prices are subject to change and may vary by region and other factors not listed here.
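Because several of the listed offers bundle two, three, or four GPUs, the per-GPU hourly rate is often a fairer basis for comparison than the instance price. The sketch below is a minimal Python example that normalizes the prices copied from the table above; the "On Request" rows are omitted because they have no listed price, and provider names are omitted since the original table identifies providers by logo only.

```python
# Normalize listed instance prices to a per-GPU hourly rate.
# Prices copied from the pricing table above; "On Request" offers are skipped.
offers = [
    {"gpus": 1, "price_per_hour": 0.28},
    {"gpus": 1, "price_per_hour": 1.17},
    {"gpus": 1, "price_per_hour": 1.42},
    {"gpus": 2, "price_per_hour": 1.71},
    {"gpus": 1, "price_per_hour": 2.07},
    {"gpus": 3, "price_per_hour": 2.25},
    {"gpus": 4, "price_per_hour": 2.82},
    {"gpus": 2, "price_per_hour": 4.14},
    {"gpus": 4, "price_per_hour": 8.28},
    {"gpus": 4, "price_per_hour": 9.11},
]

# Sort by effective cost per GPU-hour, cheapest first.
for offer in sorted(offers, key=lambda o: o["price_per_hour"] / o["gpus"]):
    per_gpu = offer["price_per_hour"] / offer["gpus"]
    print(f'{offer["gpus"]}x P100 at ${offer["price_per_hour"]:.2f}/h '
          f'-> ${per_gpu:.2f} per GPU-hour')
```

On this basis the 4x P100 offer at $2.82/h works out to roughly $0.71 per GPU-hour, cheaper per GPU than most of the single-GPU instances listed.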
Nvidia P100 specs
Specification | Value |
---|---|
GPU Architecture | NVIDIA Pascal |
NVIDIA CUDA® Cores | 3,584 |
Double-Precision Performance | 4.7 TeraFLOPS |
Single-Precision Performance | 9.3 TeraFLOPS |
Half-Precision Performance | 18.7 TeraFLOPS |
GPU Memory | 16GB CoWoS HBM2 at 732 GB/s or 12GB CoWoS HBM2 at 549 GB/s |
System Interface | PCIe Gen3 |
Max Power Consumption | 250 W |
ECC | Yes |
Thermal Solution | Passive |
Form Factor | PCIe Full Height/Length |
Compute APIs | CUDA, DirectCompute, OpenCL™, OpenACC |
Source: official Nvidia P100 datasheet.
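Combining the datasheet's peak throughput figures with the cheapest listed price gives a rough sense of cost efficiency. The sketch below is a back-of-the-envelope estimate only: it uses theoretical peak TFLOPS from the datasheet and the lowest on-demand price in the table above ($0.28/h for a single P100); real training throughput will be lower than peak.

```python
# Rough peak-throughput-per-dollar estimate for a single P100.
fp64_tflops = 4.7        # double-precision peak from the datasheet
fp32_tflops = 9.3        # single-precision peak from the datasheet
fp16_tflops = 18.7       # half-precision peak from the datasheet
cheapest_price = 0.28    # USD per GPU-hour, lowest listed price above

for label, tflops in [("FP64", fp64_tflops), ("FP32", fp32_tflops), ("FP16", fp16_tflops)]:
    print(f"{label}: {tflops / cheapest_price:.1f} peak TFLOPS per $/h")
```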