NVIDIA has announced a doubling of the memory capacity of its Ampere A100 GPU to 80GB and a 25% increase in memory bandwidth, to 2TB/s. This is particularly good news for anyone working in high-performance computing (HPC).
“Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before. The A100 80GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2TB per second barrier, enabling researchers to tackle the world’s most important scientific and big data challenges.” – Bryan Catanzaro, VP, Applied Deep Learning Research, NVIDIA
Other features, shared by the 40GB and 80GB versions of the A100, include:
- peak Tensor Core performance of 19.5 TFLOPS at supercomputer-level FP64 precision
- 312 TFLOPS at TF32 precision for training general AI models
- 1,248 TOPS for INT8 inference
- up to 600GB per second of bandwidth to other connected GPUs using NVIDIA’s third-generation NVLink
Configured as a DGX Station A100 – older units can be upgraded from 40GB to 80GB components – four or eight 80GB A100s offer 320GB or 640GB of GPU memory respectively. DGX SuperPODs comprise 20 to 140 DGX A100 systems; the Cambridge-1 supercomputer, which will be tasked with healthcare research, for example, will consist of 80 DGX A100 systems using 80GB A100s. Each system contains a 64-core AMD Epyc processor, up to 512GB of system memory, 1.92TB of internal storage for the OS and up to 7.68TB for applications and data, plus multiple Ethernet and display ports.
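The aggregate memory figures above follow directly from the per-GPU capacity; as a quick sanity check (plain arithmetic only, no NVIDIA software assumed, and the per-cluster total is our own extrapolation from eight GPUs per DGX A100 system):

```python
# Per-GPU capacity of the new 80GB A100 part.
GPU_MEMORY_GB = 80

def total_gpu_memory(num_gpus: int) -> int:
    """Total GPU memory in GB for a system built from 80GB A100s."""
    return num_gpus * GPU_MEMORY_GB

print(total_gpu_memory(4))  # four-GPU configuration: 320 GB
print(total_gpu_memory(8))  # eight-GPU configuration: 640 GB

# Extrapolation (assumption, not from the announcement): Cambridge-1 at
# 80 DGX A100 systems with eight 80GB GPUs each.
print(80 * total_gpu_memory(8))  # 51200 GB of GPU memory across the cluster
```

The same arithmetic scales to any SuperPOD size in the 20-to-140-system range quoted above.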
For more information about the Ampere A100 accelerator, the DGX A100 system and DGX SuperPODs, drop us a line and we will put you in touch with the right people to speak to.
Source: The Register