Myrtle.ai claims ML accelerator can result in significant data centre savings

Claimed to be the industry’s most efficient accelerator for recommendation models, SEAL could save hyperscale and tier-one data centre companies hundreds of millions of dollars every year, says Myrtle.ai.

Deep learning-based recommendation models are among the most common data centre workloads, typically used for search, news feeds, adverts and personalised content. Recommendation models contain a mix of dense and sparse features, and the sparse features create complex memory access patterns that can account for up to 80 per cent of inference time, says the machine learning specialist. In a typical compute infrastructure, throughput is constrained by memory bandwidth, which means that expensive compute resources are left highly under-utilised, says Myrtle.ai.
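As a rough illustration of why such models become memory-bound, the sketch below shows a minimal, generic recommendation model in PyTorch (not Myrtle.ai's SEAL product or API; all names are hypothetical). Dense features pass through a small MLP, while sparse categorical features are looked up in large embedding tables, and it is those scattered table reads that are typically limited by memory bandwidth rather than compute.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny DLRM-style recommender mixing
# dense and sparse features, as described in the article.
class TinyRecommender(nn.Module):
    def __init__(self, num_dense=13, table_sizes=(100_000, 50_000, 10_000), dim=16):
        super().__init__()
        # Dense (continuous) features go through a small MLP: compute-bound.
        self.dense_mlp = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        # Sparse (categorical) features are embedding-table lookups: memory-bound.
        self.tables = nn.ModuleList(nn.EmbeddingBag(n, dim, mode="sum") for n in table_sizes)
        self.top_mlp = nn.Sequential(nn.Linear(dim * (1 + len(table_sizes)), 1), nn.Sigmoid())

    def forward(self, dense_x, sparse_ids):
        parts = [self.dense_mlp(dense_x)]
        # Each lookup gathers a few rows scattered across a large table,
        # so throughput depends on memory bandwidth, not FLOPs.
        parts += [table(ids) for table, ids in zip(self.tables, sparse_ids)]
        return self.top_mlp(torch.cat(parts, dim=1))

model = TinyRecommender()
batch = 4
dense_x = torch.randn(batch, 13)
sparse_ids = [torch.randint(0, n, (batch, 2)) for n in (100_000, 50_000, 10_000)]
print(model(dense_x, sparse_ids).shape)  # torch.Size([4, 1])
```

In production the embedding tables can run to tens or hundreds of gigabytes, which is why adding memory bandwidth, rather than more compute, is the lever SEAL targets.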

The company has announced SEAL, which accelerates the memory-bound inference operations in recommendation models. This delivers large gains in latency-bounded throughput within existing infrastructure, enabling data centre companies to scale rapidly and halve the infrastructure cost of their peak traffic capability.

Energy consumption, which is a significant challenge, can be reduced by more than half, adds Myrtle.ai.

Peter Baldwin, CEO at Myrtle.ai, explains: “SEAL works seamlessly from within the deep learning framework PyTorch, fully preserves existing model accuracy and supports model co-location and sharding. It’s also complementary to existing compute accelerators and scalable, so adoption is as straightforward as possible.”

SEAL is available initially in the Open Compute Project M.2 accelerator module form factor, intended for use in Glacier Point carrier cards. In this format, it delivers up to 384 Gbytes of DDR4 memory per carrier.

First customer evaluations are anticipated in Q3 2020.

SEAL represents the lowest-power, smallest-form-factor, easiest-to-deploy way to add memory bandwidth to existing infrastructure used for recommendation models, claims Myrtle.ai.

The company has revealed that it is also reviewing alternative form factors.

Myrtle.ai optimises inference workloads such as recommendation models, recurrent neural networks and other deep neural networks with sparse features. Its technology enables businesses to rapidly scale and improve their services while reducing capital costs and energy consumption.

Myrtle.ai is a founding member of MLCommons, the benchmarking organisation driving machine learning innovation.

http://www.myrtle.ai
