Nvidia L40
Compare prices for Nvidia L40 across cloud providers
April 17, 2025 (updated)
The Nvidia L40, based on the Ada Lovelace architecture, is designed for data center visualization, rendering, and AI inference workloads.
| Provider | GPUs | VRAM | vCPUs | RAM | Price/h | Link |
|---|---|---|---|---|---|---|
| | 1x L40 | 48GB | 8 | 94GB | $0.99 | Source |
| | 1x L40 | 48GB | 26 | 192GB | $0.99 | Source |
| | 2x L40 | 96GB | 28 | 58GB | $1.00 | Source |
| | 1x L40 | 48GB | 28 | 58GB | $1.00 | Launch |
| | 1x L40 | 48GB | 32 | 48GB | $1.25 | Source |
| | 4x L40 | 192GB | 28 | 58GB | $4.00 | Source |
| | 8x L40 | 384GB | 28 | 58GB | $8.00 | Source |
Note: Prices are subject to change and may vary by region and other factors not listed here. For some GPUs, I include links to Shadeform (the sponsor) so you can check if they're available right now. I don’t earn a commission when you click on these links, but their monthly sponsorship helps me keep the site running.
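To compare the multi-GPU instances with the single-GPU ones on equal footing, it helps to normalize to price per GPU-hour. Here is a minimal Python sketch using the prices from the table above; the figures are illustrative only and will drift as providers change their pricing.

```python
# Normalize listed instance prices to $/GPU-hour and estimate a monthly
# per-GPU cost. Values are copied from the table above (April 2025) and
# are illustrative only.
offers = [
    {"gpus": 1, "price_per_hour": 0.99},
    {"gpus": 1, "price_per_hour": 0.99},
    {"gpus": 2, "price_per_hour": 1.00},
    {"gpus": 1, "price_per_hour": 1.00},
    {"gpus": 1, "price_per_hour": 1.25},
    {"gpus": 4, "price_per_hour": 4.00},
    {"gpus": 8, "price_per_hour": 8.00},
]

HOURS_PER_MONTH = 730  # average hours in a month

for o in offers:
    per_gpu_hour = o["price_per_hour"] / o["gpus"]
    per_gpu_month = per_gpu_hour * HOURS_PER_MONTH
    print(f'{o["gpus"]}x L40: ${per_gpu_hour:.2f}/GPU-hour, '
          f'~${per_gpu_month:.0f}/GPU per month')
```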
Nvidia L40 specs
| Specification | Value |
|---|---|
| GPU Architecture | NVIDIA Ada Lovelace |
| GPU Memory | 48GB GDDR6 with ECC |
| Memory Bandwidth | 864GB/s |
| Interconnect Interface | PCIe Gen4 x16 (64GB/s bidirectional) |
| CUDA Cores | 18,176 |
| Third-Generation RT Cores | 142 |
| Fourth-Generation Tensor Cores | 568 |
| RT Core Performance | 209 TFLOPS |
| FP32 | 90.5 TFLOPS |
| TF32 Tensor Core | 90.5 TFLOPS |
| BFLOAT16 Tensor Core | 181.05 TFLOPS |
| FP16 Tensor Core | 181.05 TFLOPS |
| FP8 Tensor Core | 362 TFLOPS |
| Peak INT8 Tensor | 362 TOPS |
| Peak INT4 Tensor | 724 TOPS |
| Form Factor | 4.4" (H) x 10.5" (L), dual slot |
| Display Ports | 4x DisplayPort 1.4a |
| Max Power Consumption | 300W |
| NVLink Support | No |
Source: official Nvidia L40 datasheet.
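For a rough sense of how these numbers relate to each other: dividing the dense FP16 Tensor Core throughput (181.05 TFLOPS) by the memory bandwidth (864GB/s) gives the arithmetic intensity a kernel needs to keep the GPU compute-bound. The sketch below is simplified roofline math using only the spec-sheet figures; it ignores caches, sparsity, and real kernel efficiency.

```python
# Rough roofline estimate from the spec table above: at what arithmetic
# intensity (FLOPs per byte moved) do FP16 Tensor Core compute and GDDR6
# bandwidth balance out? Kernels below this ratio are bandwidth-bound.
fp16_tflops = 181.05      # dense FP16 Tensor Core throughput, TFLOPS
bandwidth_gb_s = 864      # memory bandwidth, GB/s

flops_per_second = fp16_tflops * 1e12
bytes_per_second = bandwidth_gb_s * 1e9

break_even_intensity = flops_per_second / bytes_per_second
print(f"Break-even arithmetic intensity: {break_even_intensity:.0f} FLOP/byte")
# Prints roughly 210 FLOP/byte.
```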