On 9 November, HPE introduced its most powerful server ever, the Apollo 6500 Gen10 Plus, which can accommodate 4, 8, 10, or 16 GPUs and is designed specifically for high-performance computing (HPC) and artificial intelligence (AI) workloads.
The demand for HPC and AI is growing exponentially. Companies want to analyse and exploit their data to gain and maintain a competitive advantage. GPU-accelerated computing comes into its own when deployed at scale, and here we have the building blocks businesses can use to build their own AI supercomputers.
NVIDIA A100 GPUs
This new server uses second-generation AMD EPYC™ processors and supports up to sixteen NVIDIA A100 Tensor Core GPUs, which are more than four times faster than the previous generation and deliver low latency at high throughput. Third-generation Tensor Cores allow NVIDIA A100-based systems to scale efficiently to thousands of GPUs. With NVIDIA Multi-Instance GPU (MIG) technology, each A100 can be partitioned into up to seven isolated GPU instances to accelerate diverse workloads.
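As a rough illustration of how MIG partitioning works, each instance consumes a fixed share of the GPU's compute slices and memory. The sketch below is back-of-envelope sizing arithmetic, not NVIDIA's API; the profile figures (e.g. the 1g.5gb profile: one compute slice, 5 GB) are assumptions based on NVIDIA's published A100 40 GB MIG profiles.

```python
# Back-of-envelope MIG sizing for a single NVIDIA A100 40 GB.
# Profile figures are assumptions drawn from NVIDIA's published MIG profiles.
A100_MEMORY_GB = 40
A100_COMPUTE_SLICES = 7  # an A100 exposes seven MIG compute slices

def max_instances(profile_slices: int, profile_mem_gb: int) -> int:
    """Upper bound on how many instances of one MIG profile fit on the GPU."""
    by_compute = A100_COMPUTE_SLICES // profile_slices
    by_memory = A100_MEMORY_GB // profile_mem_gb
    return min(by_compute, by_memory)

# The smallest profile (1g.5gb) yields the seven isolated
# instances mentioned above; larger profiles yield fewer.
print(max_instances(1, 5))   # 7
print(max_instances(2, 10))  # 3
```

In practice, instances are created with `nvidia-smi` MIG commands; the arithmetic above only shows why seven is the per-GPU ceiling.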
NVIDIA NVLink networking
NVIDIA NVLink is used to connect the GPUs so they appear and operate as a single massive GPU. Memory can migrate from GPU to GPU. A single A100 supports up to 12 NVLink connections for a total bandwidth of 600 gigabytes per second.
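The 600 GB/s figure follows from third-generation NVLink's per-link bandwidth, which is 50 GB/s per link. A quick check of the arithmetic:

```python
# Aggregate NVLink bandwidth for one A100:
# 12 third-generation NVLink links at 50 GB/s per link.
NVLINK_LINKS_PER_A100 = 12
GB_PER_SEC_PER_LINK = 50

total_gb_per_sec = NVLINK_LINKS_PER_A100 * GB_PER_SEC_PER_LINK
print(total_gb_per_sec)  # 600, matching the figure quoted above
```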
Storage options
The Apollo 6500 has space for up to 16 front-accessible low-power SAS or SATA solid-state drives, and up to six NVMe drives. A new model coming in early 2021 will support 16 NVMe drives for almost 6X greater bandwidth.
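The "almost 6X" claim can be sanity-checked with rough per-drive throughput numbers. The figures below are generic assumptions (typical SATA III and PCIe Gen3 x4 NVMe sequential-read ceilings), not HPE's measurements:

```python
# Rough sanity check of the "almost 6X greater bandwidth" claim.
# Per-drive throughput figures are assumptions, not HPE specifications:
SATA_GB_PER_SEC = 0.55   # typical SATA III sequential-read ceiling
NVME_GB_PER_SEC = 3.2    # typical PCIe Gen3 x4 NVMe sequential read

sata_config = 16 * SATA_GB_PER_SEC   # 16 SATA SSDs in the current model
nvme_config = 16 * NVME_GB_PER_SEC   # 16 NVMe drives in the 2021 model

print(round(nvme_config / sata_config, 1))  # ≈ 5.8, i.e. "almost 6X"
```

Under these assumptions the ratio lands just under 6, consistent with the claim.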
For more information, please get in touch and we will arrange for you to speak to an Apollo server expert in your area.