8U universal GPU server can train in the metaverse
Equipped with Nvidia’s H100 or A100 GPUs, Supermicro’s Universal GPU server has been added to the company’s portfolio. The company claims it increases AI performance, with improved thermal density resulting in lower power consumption.
This is the company’s most advanced GPU server, incorporating eight Nvidia H100 Tensor Core GPUs. The advanced airflow design allows increased inlet temperatures, which in turn reduces a data centre’s overall power usage effectiveness (PUE) while maintaining the highest performance profile, claimed Supermicro.
The 8U server is designed for diverse, computationally intensive workloads in data centres. It has a maximum memory capacity of 8Tbyte, enabling vast data sets to be held in memory for faster execution of AI training or HPC applications. The architecture is optimised for GPU-to-GPU communication, reducing the time needed for AI training or HPC simulations. The server also includes Nvidia GPUDirect Storage, whereby data can be accessed directly by the GPUs, further increasing efficiency.
The 8U GPU server joins the company’s existing 4U and 5U Universal GPU systems. The Universal GPU platforms support both current and future Intel and AMD CPUs rated at 400W and above, said the company.
According to Charles Liang, president and CEO of Supermicro: “This new server will support the next generation of CPUs and GPUs and is designed with maximum cooling capacity using the same chassis.”
The airflow design reduces fan speeds, resulting in less noise in the data centre, lower power consumption, and a reduced total cost of ownership. The system supports both AC and DC power, including support for standard OCP DC rack configurations.
Supermicro supports open standards and adheres to the open power specification, enabling quick delivery and installation and a faster time to productivity.