Multi-node, multi-GPU system sets new benchmark for video streaming
Claimed to offer energy-efficient performance for video streaming, cloud gaming and social networking, the 2U, two-node, multi-GPU server from Supermicro is built around Nvidia's A100 GPU.
The company says that the multi-node GPU server combines high performance with energy efficiency and resource savings, making it suitable for data centres running these applications.
The multi-node GPU server saves up to 10 per cent of the total cost of ownership through shared power and cooling. The 2U, two-node, energy-efficient system is powered by AMD EPYC 7002-series processors, offering up to 64 cores and 128 PCIe 4.0 lanes, and supports three double-width PCIe 4.0 GPUs or six single-width PCIe GPUs at full speed per node.
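To see where a saving of this kind can come from, the arithmetic can be sketched as below. All cost figures are invented placeholders for illustration, not Supermicro data; the only idea taken from the article is that two nodes sharing one set of power supplies and fans cost less than two standalone servers each carrying their own.

```python
# Hypothetical illustration of shared power/cooling lowering TCO.
# Every number below is an assumed placeholder, not a vendor figure.

def tco(server_cost, psu_cost, cooling_cost, annual_power_bill, years=5):
    """Total cost of ownership: hardware plus running costs over a period."""
    return server_cost + psu_cost + cooling_cost + annual_power_bill * years

# Two independent servers, each with its own PSUs and fans (assumed costs).
standalone = 2 * tco(server_cost=8000, psu_cost=600,
                     cooling_cost=400, annual_power_bill=900)

# One two-node chassis: one set of PSUs and fans serves both nodes,
# and the shared power path is assumed slightly more efficient.
shared = tco(server_cost=16000, psu_cost=700,
             cooling_cost=500, annual_power_bill=1700)

saving = 1 - shared / standalone
print(f"TCO saving from shared power/cooling: {saving:.1%}")
```

With these placeholder numbers the shared chassis comes out a few per cent cheaper; the actual saving in a deployment depends entirely on hardware prices, utilisation and electricity costs.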
The GPU server also has thermally efficient, streamlined airflow, says the company, and is equipped with Supermicro's advanced I/O module (AIOM) for fast, flexible networking. The system can also handle the massive data flows of demanding AI/ML applications, deep-learning training and inferencing.
The multi-node design also aids serviceability: because power and cooling resources are shared across the chassis, the two node drawers in the 2U system can be pulled out for service. According to Supermicro, this accessibility to the GPUs lowers the cost of maintenance and upgrades, a particular benefit for GPU-accelerated applications such as cloud gaming, which typically require sustained high power usage and frequent maintenance.
Supermicro develops and supplies high-performance, high-efficiency server and storage technology, providing its advanced server Building Block Solutions for enterprise data centre, cloud computing, artificial intelligence and edge computing systems worldwide.
Supermicro is committed to protecting the environment through its We Keep IT Green initiative and claims to provide customers with the most energy-efficient, environmentally friendly solutions on the market.