NVIDIA H100 PCIe 80GB Specifications for AI Enthusiasts

The NVIDIA H100 PCIe 80GB is a professional accelerator designed primarily for machine learning and high-performance computing workloads. Launched in March 2023, it is built on the Hopper GH100 architecture and manufactured on TSMC's 4N process. The H100 PCIe 80GB stands out for its large memory capacity and high memory bandwidth, making it well suited to large datasets and complex machine learning models, and its fourth-generation Tensor Cores significantly accelerate training and inference.

The Hopper architecture, as seen in the H100 and H200 GPUs, introduces several innovations aimed at improving performance in AI training and inference. These GPUs are distinguished by their massive transistor counts and advanced memory technologies. The H100 PCIe ships with 80 GB of HBM2e, the SXM variant of the H100 uses HBM3 with over 3 TB/s of bandwidth, and the H200 moves to 141 GB of HBM3e delivering up to 4.8 TB/s, a significant increase over the previous generation. A common NVIDIA part number for this card is 900-21010-000-000.
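For readers who want to verify these figures on their own system, here is a minimal sketch that prints the properties CUDA reports for the installed GPU. It assumes a machine with an NVIDIA GPU and a PyTorch build with CUDA support; it is illustrative, not part of any official NVIDIA tooling.

```python
# Minimal sketch: print the device properties CUDA reports for GPU 0.
# Assumes PyTorch built with CUDA support and at least one NVIDIA GPU present.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Name:               {props.name}")
    print(f"Memory:             {props.total_memory / 1024**3:.1f} GiB")
    print(f"Streaming MPs:      {props.multi_processor_count}")
    print(f"Compute capability: {props.major}.{props.minor}")  # Hopper reports 9.0
else:
    print("No CUDA-capable GPU detected.")
```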

Specifications for NVIDIA H100 PCIe

Raw Performance
Tensor Core Count: 456
TF32 Tensor Core TFLOPS (with sparsity): 756
FP16 Tensor Core TFLOPS (with sparsity): 1,513
INT8 Tensor Core TOPS (with sparsity): 3,026
Memory Capacity: 80 GB
Memory Bandwidth: 2,039 GB/s

Real-time NVIDIA H100 PCIe GPU Prices

We're tracking 9 NVIDIA H100 PCIe GPU listings currently available for sale. The lowest price is $25,500.

Compare Price/Performance to other GPUs

We also track real-time prices for other GPUs, so you can compare the price/performance of the NVIDIA H100 PCIe against the alternatives.
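As a rough illustration of what such a comparison looks like, the snippet below turns the lowest tracked price and the peak FP16 Tensor Core figure quoted on this page into a dollars-per-TFLOP number. The same calculation can be repeated for any other card's listing, using whatever metric your workload actually cares about.

```python
# Rough price/performance illustration using the figures quoted on this page:
# $25,500 lowest tracked price, 1,513 peak FP16 Tensor Core TFLOPS (with sparsity).
lowest_price_usd = 25_500
fp16_tflops      = 1513

dollars_per_tflop = lowest_price_usd / fp16_tflops
print(f"H100 PCIe: ${dollars_per_tflop:.2f} per peak FP16 TFLOP")
```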
