On-device Tensilica AI engine boosts intelligent SoC development
To accelerate AI (artificial intelligence) SoC development, Cadence Design Systems has introduced the Tensilica AI platform, which comprises three supporting product families optimised for varying data and on-device AI requirements.
Catering for low-, mid- and high-end systems, the Cadence Tensilica AI platform delivers scalable and energy-efficient on-device to edge AI processing for AI SoCs. A companion AI neural network engine (NNE) consumes 80 per cent less energy per inference and delivers more than four times the TOPS/W (Tera operations per second per Watt) of standalone Tensilica DSPs, claimed Cadence.
The platform is intended for intelligent sensor, IoT, audio, mobile vision / voice AI, IoT vision and ADAS (advanced driver assistance system) applications. It is claimed to deliver optimal power, performance and area (PPA) and scalability with a common software platform, and is built on the application-specific Tensilica DSPs already used in AI SoCs for the consumer, mobile, automotive and industrial markets.
The AI Base family combines Tensilica HiFi DSPs for audio / voice, Vision DSPs, and ConnX DSPs for radar / lidar and communications with AI ISA (instruction set architecture) extensions. The AI Boost family adds a companion NNE, initially the Tensilica NNE 110 AI engine, which scales from 64 to 256 Giga operations per second and provides concurrent signal processing and efficient inferencing.
Finally, the AI Max family encompasses the Tensilica neural network accelerator (NNA) 1xx family: the Tensilica NNA 110 accelerator and the multi-core NNA 120, NNA 140 and NNA 180 options, all incorporating AI Base and AI Boost technology. The multi-core NNA accelerators scale up to 32 Tera operations per second, while future NNA products are targeted to scale to hundreds of Tera operations per second.
All of the NNE and NNA products include random sparse compute to improve performance, run-time tensor compression to decrease memory bandwidth, and pruning plus clustering to reduce model size.
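Cadence does not disclose how these features are implemented in hardware, but the underlying model-compression techniques are standard. The sketch below is a minimal, generic NumPy illustration (not Cadence's toolchain) of two of them: magnitude pruning, which zeroes the smallest weights so a sparse compute engine can skip them, and weight clustering, which snaps weights to a small shared codebook so the tensor can be stored as compact indices.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights.
    `sparsity` is the fraction of weights removed; the resulting
    zeros are what a sparse compute engine can skip at run time."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def cluster_weights(weights, n_clusters=16):
    """Snap each weight to the nearest of n_clusters shared values
    (a simple 1-D k-means), so the tensor can be stored as short
    indices into a small codebook instead of full-width values."""
    w = weights.ravel()
    # Initialise centroids evenly across the weight range.
    centroids = np.linspace(w.min(), w.max(), n_clusters)
    for _ in range(20):
        assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = w[assign == k].mean()
    return centroids[assign].reshape(weights.shape)

# Hypothetical weight tensor standing in for one layer of a model.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)
clustered = cluster_weights(pruned, n_clusters=16)

print("non-zero fraction after pruning:",
      np.count_nonzero(pruned) / pruned.size)
print("distinct values after clustering:", len(np.unique(clustered)))
```

After pruning, roughly half the weights are zero and need never be fetched or multiplied; after clustering, the whole tensor holds at most 16 distinct values, so each weight can be stored as a 4-bit index. Run-time tensor compression applies a similar idea to activations on the fly to cut memory bandwidth.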
Comprehensive common AI software addresses all target applications, streamlining product development and enabling easy migration as design requirements evolve. Software includes the Tensilica Neural Network Compiler, which supports TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite and MXNet for automated end-to-end code generation; the Android Neural Network Compiler; TFLite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.
The NNE 110 AI engine and the NNA 1xx AI accelerator family support Cadence’s Intelligent System Design strategy, which enables pervasive intelligence for SoC design excellence, and are expected to be generally available in Q4 2021.