Neural network accelerator brings multi-core scalability to embedded AI
Imagination Technologies has introduced its latest neural network accelerator (NNA), the PowerVR Series3NX. It enables SoC manufacturers to optimise compute power and performance across a range of embedded markets such as automotive, mobile, smart surveillance and IoT edge devices.
A single Series3NX core scales from 0.6 to 10 tera operations per second (TOPS), while multi-core implementations can scale beyond 160 TOPS. Architectural enhancements, including lossless weight compression, contribute to a 40 per cent boost in performance in the same silicon area over the previous generation, reports Imagination, giving SoC manufacturers an improvement in efficiency of nearly 60 per cent alongside a 35 per cent reduction in bandwidth.
Imagination also announced the PowerVR Series3NX-F (Flexible) IP configuration, which customers can use to differentiate and add value to their products through the OpenCL framework.
New PowerVR tooling extensions can optimally map emerging network models, offering an ideal mix of flexibility and performance optimisation.
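In practice, the OpenCL route above means an operation not covered by the fixed-function pipeline can be expressed as an ordinary OpenCL C kernel. The fragment below is a generic, hypothetical example of such a kernel (a fully connected layer with a ReLU activation, one output neuron per work-item); it is not taken from Imagination's SDK, and a kernel fragment like this needs a host program to compile and launch it.

```c
// Hypothetical OpenCL C kernel: each work-item computes one output
// neuron as a multiply-accumulate over n inputs, plus bias and ReLU.
__kernel void fully_connected(__global const float *input,
                              __global const float *weights, /* n per output */
                              __global const float *bias,
                              __global float *output,
                              const int n)
{
    int o = get_global_id(0);            /* index of this output neuron */
    float acc = bias[o];
    for (int i = 0; i < n; ++i)
        acc += input[i] * weights[o * n + i];
    output[o] = acc > 0.0f ? acc : 0.0f; /* ReLU */
}
```

Writing custom layers this way is the usual trade-off such flexible configurations offer: the fixed-function NNA path gives the headline efficiency, while OpenCL covers emerging operators the hardware does not yet accelerate natively.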
With Imagination’s dedicated DNN (deep neural network) API, developers can easily write AI applications targeting the Series3NX architecture as well as existing PowerVR GPUs. The API works across multiple SoC configurations for easy prototyping on existing devices.
The PowerVR Series3NX is available for licensing now, and the PowerVR Series3NX-F will be available in Q1 2019.