Supermicro announces Nvidia MGX-based servers
Supermicro has announced GPU systems based on Nvidia’s GH200 Grace Hopper superchip, in what it claims is one of the industry’s broadest portfolios of new GPU systems built on the Nvidia MGX reference architecture, featuring the GH200 Grace Hopper and Grace CPU superchips.
The modular architecture is designed to standardise AI infrastructure and accelerated computing in compact 1U and 2U form factors, while providing flexible expansion options for current and future GPUs, DPUs and CPUs. Supermicro’s liquid cooling technology enables very high-density configurations, such as a 1U two-node system with two Nvidia GH200 Grace Hopper superchips integrated with a high-speed interconnect.
Charles Liang, president and CEO of Supermicro, commented: “By collaborating with Nvidia, we are helping accelerate time to market for enterprises developing new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimised for AI, including Nvidia GH200 Grace Hopper superchips, BlueField, and PCIe 5.0 EDSFF slots.”
Supermicro’s Nvidia MGX platforms are designed to deliver a range of servers that will accommodate future AI technologies. The new product line includes the air-cooled ARS-111GL-NHR, with one Nvidia GH200 Grace Hopper superchip; the liquid-cooled ARS-111GL-NHR-LCC, also with one GH200; the liquid-cooled ARS-111GL-DHNR-LCC, with two GH200s across two nodes; the ARS-121L-DNR, with two Grace CPU superchips in each of two nodes for a total of 288 cores; the 2U ARS-221GL-NR, with one Grace CPU superchip; and the SYS-221GE-NR, with dual-socket 4th Gen Intel Xeon Scalable processors and up to four Nvidia H100 Tensor Core GPUs or four Nvidia PCIe GPUs.
Every MGX platform can be enhanced with Nvidia BlueField-3 DPU and/or Nvidia ConnectX-7 interconnects for high-performance InfiniBand or Ethernet networking.
Supermicro’s 1U Nvidia MGX systems include up to two GH200 Grace Hopper superchips, comprising two Nvidia H100 GPUs and two Grace CPUs. Each superchip comes with 480Gbyte of LPDDR5X memory for the CPU and 96Gbyte of HBM3 or 144Gbyte of HBM3e memory for the GPU. The memory-coherent, high-bandwidth, low-latency NVLink-C2C interconnect links the CPU, GPU and memory at 900Gbytes per second, which is seven times faster than PCIe 5.0, said Supermicro. The modular architecture provides multiple PCIe 5.0 x16 FHFL slots to accommodate DPUs for cloud and data management, with expandability for additional GPUs, networking and storage.
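The "seven times faster" figure can be sanity-checked with simple interface arithmetic. A minimal sketch, assuming the standard PCIe 5.0 signalling rate of 32 GT/s per lane (a public interface spec, not a figure from the announcement) and counting both directions, with encoding overhead ignored:

```python
# Sanity check of the "seven times faster than PCIe 5.0" claim for NVLink-C2C.
# Assumption (not from the article): PCIe 5.0 signals at 32 GT/s per lane,
# so an x16 link carries roughly 64 GB/s per direction, ~128 GB/s bidirectional.

GT_PER_S_PER_LANE = 32                      # PCIe 5.0 raw signalling rate
LANES = 16                                  # x16 slot

# bits -> bytes (/8), then double for bidirectional traffic
pcie5_x16_gbytes = GT_PER_S_PER_LANE * LANES / 8 * 2   # 128 GB/s

nvlink_c2c_gbytes = 900                     # figure quoted by Supermicro

ratio = nvlink_c2c_gbytes / pcie5_x16_gbytes
print(f"PCIe 5.0 x16: {pcie5_x16_gbytes:.0f} GB/s")
print(f"NVLink-C2C : {nvlink_c2c_gbytes} GB/s ({ratio:.1f}x)")   # ~7.0x
```

The ratio comes out at roughly 7.0, consistent with the claim when both figures are read as aggregate bidirectional bandwidth.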
The 1U two-node design features two GH200 Grace Hopper superchips, combined with Supermicro’s proven direct-to-chip liquid cooling, which is claimed to reduce opex by more than 40 per cent, while increasing computing density and simplifying rack-scale deployment for large language model (LLM) clusters and HPC applications.
The 2U Supermicro Nvidia MGX platform supports both Nvidia Grace and x86 CPUs with up to four full-size data centre GPUs, such as the Nvidia H100 PCIe, H100 NVL, or L40S. It also provides three additional PCIe 5.0 x16 slots for I/O connectivity, and eight hot-swap EDSFF storage bays.
Supermicro offers Nvidia networking to secure and accelerate AI workloads on the MGX platform. This includes a combination of Nvidia BlueField-3 DPUs, which provide 2x 200Gbits per second connectivity for accelerating user-to-cloud and data storage access, and ConnectX-7 adapters, which provide up to 400Gbits per second InfiniBand or Ethernet connectivity between GPU servers.
Nvidia software support includes AI Enterprise, enterprise-grade software that streamlines the development and deployment of production-ready generative AI, computer vision and speech AI, the company explained. The Nvidia HPC software development kit is designed for scientific computing.
Nvidia Grace CPU superchips feature 144 cores and deliver up to twice the performance per watt compared with today’s industry-standard x86 CPUs. Specific Supermicro Nvidia MGX systems can be configured with two nodes in 1U, totalling 288 cores across two Grace CPU superchips, to provide “ground-breaking compute densities and energy efficiency in hyperscale and edge data centres”, said Supermicro.