Nvidia V100S
Compare prices for Nvidia V100S across cloud providers
May 6, 2025 (updated)
The Nvidia V100S is an upgraded version of the V100 aimed at AI training and HPC workloads: it raises single-precision throughput to 16.4 TFLOPS and memory bandwidth to 1,134 GB/s, compared with 14 TFLOPS and 900 GB/s on the V100 PCIe.
Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Source
---|---|---|---|---|---|---
— | 1x V100S | 32GB | 15 | 45GB | $0.88 | Source
— | 2x V100S | 64GB | 30 | 90GB | $1.76 | Source
— | 1x V100S | 32GB | 15 | 45GB | $2.19 | Source
— | 4x V100S | 128GB | 60 | 180GB | $3.53 | Source
— | 2x V100S | 64GB | 30 | 90GB | $4.38 | Source
— | 4x V100S | 128GB | 60 | 180GB | $8.76 | Source
Note: Prices are subject to change and may vary by region and other factors not listed here.
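To compare configurations of different sizes fairly, normalize each offer to price per GPU-hour. The sketch below hardcodes the figures from the pricing table above (provider names are omitted, as in the table); it is an illustrative helper, not an API from any provider.

```python
# Offers taken from the pricing table above (provider names omitted).
offers = [
    {"gpus": 1, "price_per_hour": 0.88},
    {"gpus": 2, "price_per_hour": 1.76},
    {"gpus": 1, "price_per_hour": 2.19},
    {"gpus": 4, "price_per_hour": 3.53},
    {"gpus": 2, "price_per_hour": 4.38},
    {"gpus": 4, "price_per_hour": 8.76},
]

def price_per_gpu_hour(offer):
    # Normalize the hourly instance price to a single GPU.
    return offer["price_per_hour"] / offer["gpus"]

cheapest = min(offers, key=price_per_gpu_hour)
print(f"Cheapest: {cheapest['gpus']}x V100S at "
      f"${price_per_gpu_hour(cheapest):.4f}/GPU-hour")
```

Note that the 4x instance at $3.53/h works out to about $0.8825 per GPU-hour, nearly matching the cheapest 1x offer, so scaling up does not always cost a per-GPU premium.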
Nvidia V100S specs
Specification | V100 PCIe | V100 SXM2 | V100S PCIe
---|---|---|---
GPU Architecture | NVIDIA Volta | NVIDIA Volta | NVIDIA Volta |
NVIDIA Tensor Cores | 640 | 640 | 640 |
NVIDIA CUDA® Cores | 5,120 | 5,120 | 5,120 |
Double-Precision Performance | 7 TFLOPS | 7.8 TFLOPS | 8.2 TFLOPS |
Single-Precision Performance | 14 TFLOPS | 15.7 TFLOPS | 16.4 TFLOPS |
Tensor Performance | 112 TFLOPS | 125 TFLOPS | 130 TFLOPS |
GPU Memory | 32 GB / 16 GB HBM2 | 32 GB HBM2 | 32 GB HBM2 |
Memory Bandwidth | 900 GB/sec | 900 GB/sec | 1134 GB/sec |
ECC | Yes | Yes | Yes |
Interconnect Bandwidth | 32 GB/sec | 300 GB/sec | 32 GB/sec |
System Interface | PCIe Gen3 | NVIDIA NVLink™ | PCIe Gen3 |
Form Factor | PCIe Full Height/Length | SXM2 | PCIe Full Height/Length |
Max Power Consumption | 250 W | 300 W | 250 W |
Thermal Solution | Passive | Passive | Passive |
Compute APIs | CUDA, DirectCompute, OpenCL™, OpenACC® | CUDA, DirectCompute, OpenCL™, OpenACC® | CUDA, DirectCompute, OpenCL™, OpenACC® |
Source: official Nvidia V100S datasheet.
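One way to read the spec table is through a roofline ridge point: the arithmetic intensity (FLOPs per byte of memory traffic) above which a kernel becomes compute-bound rather than memory-bound. The sketch below derives it from the FP32 and bandwidth figures in the table; the ridge-point framing is our illustrative addition, not part of the datasheet.

```python
# FP32 throughput and memory bandwidth from the spec table above.
specs = {
    "V100 PCIe":  {"fp32_tflops": 14.0, "bandwidth_gbs": 900},
    "V100 SXM2":  {"fp32_tflops": 15.7, "bandwidth_gbs": 900},
    "V100S PCIe": {"fp32_tflops": 16.4, "bandwidth_gbs": 1134},
}

for name, s in specs.items():
    # Ridge point = peak FLOP/s divided by peak bytes/s.
    ridge = (s["fp32_tflops"] * 1e12) / (s["bandwidth_gbs"] * 1e9)
    print(f"{name}: {ridge:.1f} FLOP/byte")
```

Because the V100S raises bandwidth more than compute, its ridge point (about 14.5 FLOP/byte) is actually lower than the V100 SXM2's (about 17.4), meaning more kernels can reach peak utilization on the V100S despite its PCIe form factor.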