NVIDIA A100 PCIe 80GB Specifications for AI Enthusiasts
The NVIDIA A100 PCIe 80GB, announced in May 2020, is a formidable accelerator for machine learning and artificial intelligence. Built on NVIDIA's Ampere architecture, it is designed for high-performance computing, deep learning training, and inference. With 80 GB of HBM2e memory and a memory bandwidth of 1,935 GB/s, it handles the most demanding AI workloads, and its 432 tensor cores significantly accelerate machine learning applications, making it a go-to choice for researchers and data scientists. Operating at a base clock of 1065 MHz and a boost clock of up to 1410 MHz, it delivers impressive computational power while capped at a maximum power consumption of 300 watts. The A100-PCIE-80GB is notable for its FP32 performance of 19.5 TFLOPS, underscoring its efficiency in floating-point workloads.
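The headline 19.5 TFLOPS figure follows directly from shader count and boost clock. A quick sanity check, assuming the A100's published count of 6912 FP32 CUDA cores (a figure not listed in this document):

```python
# Peak FP32 throughput = CUDA cores x FLOPs per FMA x boost clock.
cuda_cores = 6912                 # assumed A100 shader count (published figure)
flops_per_core_per_clock = 2      # one fused multiply-add counts as 2 FLOPs
boost_clock_hz = 1.410e9          # 1410 MHz boost clock

peak_fp32_tflops = cuda_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(f"{peak_fp32_tflops:.1f} TFLOPS")  # ~19.5 TFLOPS, matching the spec
```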
- GPU Architecture: Ampere
- Hardware-Accelerated GEMM Operations: FP64, TF32, FP32, BF16, FP16, INT8, INT4, INT1
- CUDA Compute Capability: 8.0
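The tensor-core throughput numbers in the table below can be reproduced the same way as the FP32 figure. A rough sketch, assuming each Ampere tensor core retires 256 FP16 FMAs (512 FLOPs) per clock, a per-core rate taken from published Ampere figures rather than this document:

```python
# Dense FP16 tensor throughput = tensor cores x FLOPs per core per clock x clock.
tensor_cores = 432
flops_per_tc_per_clock = 512      # assumed: 256 FP16 FMAs per tensor core per clock
boost_clock_hz = 1.410e9          # 1410 MHz boost clock

fp16_tensor_tflops = tensor_cores * flops_per_tc_per_clock * boost_clock_hz / 1e12
print(f"{fp16_tensor_tflops:.0f} TFLOPS")  # ~312 TFLOPS dense FP16
```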
Specifications for NVIDIA A100 PCIe
| Raw Performance | |
| --- | --- |
| Tensor Core Count | 432 |
| TF32 Tensor TFLOPS | 156 |
| FP16 Tensor TFLOPS | 312 |
| INT8 TOPS | 624 |
| Memory Capacity (GB) | 80 |
| Memory Bandwidth (GB/s) | 1935 |
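The bandwidth figure is consistent with the A100's HBM2e configuration. A back-of-the-envelope check, assuming the published 5120-bit memory bus and an effective data rate of roughly 3024 MT/s (neither stated above), plus a rough estimate of how many FP16 parameters fit in 80 GB:

```python
# Memory bandwidth = bus width (in bytes) x effective transfer rate.
bus_width_bits = 5120            # assumed A100 HBM2e bus width
transfer_rate = 3.024e9          # ~3024 MT/s effective data rate (assumed)
bandwidth_gbs = bus_width_bits / 8 * transfer_rate / 1e9
print(f"{bandwidth_gbs:.0f} GB/s")   # ~1935 GB/s, matching the spec

# Rough capacity check: FP16 weights take 2 bytes each, so 80 GB holds on
# the order of 40 billion parameters (ignoring activations and overhead).
params_fp16 = 80e9 / 2
print(f"~{params_fp16 / 1e9:.0f}B FP16 parameters")
```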