Elevate your AI & machine learning capabilities with NVIDIA A100 PCIe GPU
The NVIDIA A100 Tensor Core GPU delivers exceptional acceleration to power the world's most advanced, high-performing elastic data centers for AI, data analytics, and high-performance computing. Powered by the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform, providing up to 20X higher performance than the prior generation, and it can be dynamically partitioned into as many as seven GPU instances to adjust to shifting demand.
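The seven-way partitioning mentioned above is NVIDIA's Multi-Instance GPU (MIG) feature, which can be driven from `nvidia-smi`. A minimal sketch, assuming an A100 80GB at index 0, a MIG-capable driver, and root privileges:

```shell
# Enable MIG mode on GPU 0 (the GPU may need a reset for this to take effect)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.10gb GPU instances -- the smallest slice on an 80GB A100 --
# and a compute instance on each (-C)
sudo nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting GPU and MIG devices
nvidia-smi -L
```

Each instance then appears as an independent device with its own memory and compute slice, so capacity can be rebalanced by destroying and recreating instances as demand shifts.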
Enterprise-ready software for AI workloads
A critical component of the NVIDIA data center solution, the A100 GPU anchors a stack that spans hardware, networking, software, libraries, and optimized AI models and applications from NGC™. This end-to-end AI and HPC platform for data centers empowers researchers to deliver real-world results quickly and to scale solutions into production, with optimized software that enables accelerated computing across infrastructures.
The NVIDIA EGX™ platform incorporates NVIDIA’s key enabling technologies to facilitate the swift deployment, management, and scaling of AI workloads in modern hybrid environments.
HPC Simulations with NVIDIA A100 GPU
Scientists turn to simulations to gain insight into the world around us, and the NVIDIA A100 is leading the way in unlocking next-generation discoveries. By introducing double-precision Tensor Cores, the A100 has provided the most significant performance boost in HPC since the advent of GPUs. The GPU's 80GB of lightning-fast memory can reduce a 10-hour, double-precision simulation to under four hours.
In addition, HPC applications can leverage TF32 to achieve up to 11X higher throughput for single-precision, dense matrix-multiply operations. For HPC applications that deal with large datasets, A100 80GB’s additional memory can increase throughput by up to 2X with Quantum Espresso, a materials simulation. With its impressive memory capacity and bandwidth, the A100 80GB is the go-to platform for next-generation workloads.
NVIDIA A100 GPU Technical Specifications
- FP64: 9.7 TFLOPS
- FP64 Tensor Core: 19.5 TFLOPS
- FP32: 19.5 TFLOPS
- GPU Memory: 80GB HBM2e
- GPU Memory Bandwidth: 1,935 GB/s
- Max Thermal Design Power (TDP): 300W
- Multi-Instance GPU: Up to 7 MIGs @ 10GB
- Form Factor: PCIe, dual-slot air-cooled or single-slot liquid-cooled
- Interconnect: NVIDIA NVLink Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
- Server Options: NVIDIA-Certified Systems with 1-8 GPUs
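The compute and bandwidth figures above can be put in perspective with a quick back-of-envelope calculation (values taken straight from the spec list; the FLOPs-per-byte reading is the standard roofline-style argument):

```shell
# FP64 Tensor Core peak (19.5 TFLOPS) vs. HBM2e bandwidth (1,935 GB/s):
# roughly how many FLOPs a kernel must do per byte moved to be compute-bound.
awk 'BEGIN { printf "%.1f FLOPs/byte\n", 19.5e12 / 1935e9 }'   # -> 10.1 FLOPs/byte

# Time to stream the full 80GB of HBM2e once at peak bandwidth.
awk 'BEGIN { printf "%.1f ms\n", 80e9 / 1935e9 * 1000 }'       # -> 41.3 ms
```

In other words, a double-precision kernel needs on the order of ten arithmetic operations per byte of memory traffic before the A100's memory system, rather than its Tensor Cores, stops being the bottleneck.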