NXP supports Glow compiler to put machine learning at the edge
NXP believes it is the first semiconductor vendor to deliver a two-to-three-times performance jump for microcontrollers over the standard version of Glow, the open source compiler for implementing machine learning.
Its eIQ machine learning (ML) software supports the Glow neural network (NN) compiler and is claimed to deliver the industry's first NN compiler implementation offering higher performance with a low memory footprint on NXP's i.MX RT crossover microcontrollers.
Glow can integrate target-specific optimisations, and NXP has leveraged this ability, using NN operator libraries for Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP to maximise the inferencing performance of its i.MX RT685, i.MX RT1050 and i.MX RT1060 microcontrollers. This capability is merged into the eIQ ML software development environment, which is available within NXP's MCUXpresso software development kit.
The Glow (Graph Lowering NN) compiler was introduced in 2018 by Facebook as an open source community project. Its aim is to provide optimisations that accelerate NN performance on a range of hardware platforms. Glow takes in an unoptimised NN and generates highly optimised code; the benefits, NXP reports, are reduced processing and memory requirements.
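As an illustration of this ahead-of-time flow, Glow's `model-compiler` tool lowers a trained model into a compiled bundle that can be linked into an application. The model filename and the Cortex-M target flags below are illustrative placeholders, not NXP's exact tooling; a sketch assuming a standard Glow build:

```shell
# Compile an ONNX model into an ahead-of-time bundle with Glow's
# model-compiler. The model file and target flags are illustrative.
model-compiler \
    -model=lenet_mnist.onnx \
    -emit-bundle=build \
    -backend=CPU \
    -target=arm -mcpu=cortex-m7
# The build/ directory then holds the compiled object code and weights,
# ready to link into a microcontroller application.
```

The key point is that optimisation happens offline at compile time, so the device runs pre-generated code rather than interpreting a model graph at runtime.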
“The standard, out-of-the-box version of Glow from GitHub is device-agnostic to give users the flexibility to compile neural network models for basic architectures of interest, including the Arm Cortex-A and Cortex-M cores, as well as RISC-V architectures,” said Dwarak Rajagopal, software engineering manager at Facebook. He added that the performance increase achieved with purpose-built software libraries demonstrates the benefits of the Glow NN compiler for machine learning applications, from high-end cloud-based machines to low-cost embedded platforms.
As the demand for machine learning applications increases, NXP predicts that consumer device manufacturers and embedded IoT developers will need optimised machine learning frameworks for low-power edge embedded applications using microcontrollers.
With the merging of Glow into eIQ software, developers will have a comprehensive, high-performance framework that is scalable across NXP’s edge processing solutions that include the i.MX RT crossover microcontrollers and i.MX 8 application processors, says NXP. Potential uses include machine learning voice applications, object recognition and facial recognition on i.MX RT MCUs and i.MX application processors.
eIQ now includes inferencing support for both Glow and TensorFlow Lite. NXP routinely performs benchmarking activities, such as CIFAR-10, to measure performance.
NXP’s enablement for Glow is tightly coupled with the Neural Network Library (NNLib) that Cadence provides for its Tensilica HiFi 4 DSP, which delivers 4.8GMACs of performance. NXP cites a CIFAR-10 implementation of Glow that achieves a 25x performance advantage by using this DSP to accelerate the NN operations.
NXP’s enablement also uses the Arm CMSIS-NN software library, which maximises performance and minimises the memory footprint of neural networks on Arm Cortex-M cores, said Dennis Laudick, vice president of marketing, machine learning at Arm. “Using a CIFAR-10 neural network model as an example, NXP is able to achieve a 1.8x performance advantage with CMSIS-NN. Other NN models should yield similar results, clearly demonstrating the benefits of this advanced compiler and our optimized NN operator library,” he said.
NXP’s eIQ for Glow NN compiler is available now.