Akida architecture SoC places AI at the edge
Claiming to be the first company to bring a production spiking neural network architecture, the Akida Neuromorphic system-on-chip (NSoC), to market, BrainChip describes the NSoC as suitable for edge applications such as advanced driver assistance systems (ADAS), autonomous vehicles, drones, vision-guided robotics, surveillance and machine vision systems.
The Akida NSoC is small, low-cost and low-power, the company adds. It is scalable, allowing users to network many Akida devices together to perform complex neural network training and inference for many markets including cybersecurity, financial technology and agricultural technology.
“The artificial intelligence acceleration chipset marketplace is expected to surpass US$60 billion by 2025,” said Aditya Kaul, research director at Tractica. He added: “Neuromorphic computing holds significant promise to accelerate AI, especially for low-power applications. As many of the technical hurdles are resolved, the industry will see the deployment of a new class of AI-optimised hardware over the next few years.”
The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), as they replace the math-intensive convolutions and back-propagation training methods with biologically inspired neuron functions and feed-forward training methodologies.
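To make the spiking model concrete, the sketch below shows a generic leaky integrate-and-fire (LIF) neuron, the textbook building block of spiking networks. This is an illustration of the general principle only; BrainChip has not published the Akida neuron model, and the `threshold` and `leak` parameters here are arbitrary assumptions. The neuron accumulates input as a membrane potential, leaks a fraction each step, and emits a spike (then resets) when the potential crosses its threshold:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a generic leaky integrate-and-fire neuron over a
    sequence of input currents and return its binary spike train.

    Illustrative only: parameters are arbitrary, not Akida's.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        # Potential decays by `leak` each step, then integrates input.
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak input needs several steps to reach threshold; a strong one fires sooner.
train = lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9, 0.9])  # -> [0, 0, 1, 0, 0, 1]
```

Because events (spikes) are sparse and the update is a simple accumulate-and-compare rather than a dense multiply-accumulate, this style of computation is what underpins the low-power claims made for SNN hardware.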
BrainChip says its research has determined the optimal neuron model and training methods, delivering what it describes as unprecedented efficiency and accuracy. Each Akida NSoC effectively has 1.2 million neurons and 10 billion synapses, which the company claims represents 100 times better efficiency than neuromorphic test chips from Intel and IBM. Comparisons with leading CNN accelerator devices show similar gains of an order of magnitude more images per second per watt on industry-standard benchmarks such as CIFAR-10, with comparable accuracy.
The Akida NSoC is designed for use as a standalone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), Lidar, audio, and analogue signals. It also has high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to optimally convert popular data formats into spikes to train and be processed by the Akida Neuron fabric.
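BrainChip has not detailed how its data-to-spike converters work, but a common approach for pixel data is rate coding, where brighter pixels fire with higher probability over a fixed number of time steps. The sketch below is a minimal, hypothetical illustration of that idea, not the Akida converter:

```python
import random

def rate_encode(pixels, steps=10, seed=0):
    """Rate-code normalised pixel intensities (0.0-1.0) into spike
    trains: each step, a pixel fires with probability equal to its
    intensity. Hypothetical sketch of rate coding, not Akida's scheme.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [[1 if rng.random() < p else 0 for _ in range(steps)]
            for p in pixels]

# A black pixel (0.0) never fires, a white pixel (1.0) fires every step,
# and a mid-grey pixel fires roughly half the time.
trains = rate_encode([0.0, 0.5, 1.0])
```

Other encodings (e.g. latency coding, where brighter pixels simply fire earlier) trade precision against the number of time steps; dynamic vision sensors sidestep conversion entirely by producing spike-like events natively.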
Spiking neural networks are inherently feed-forward dataflows, for both training and inference. The Akida neuron model incorporates training methodologies for both supervised and unsupervised learning. In the supervised mode, the initial layers of the network train themselves autonomously; in the final fully connected layers, labels can be applied, enabling these networks to function as classifiers. The Akida NSoC supports off-chip training in the Akida development environment, or on-chip training. An on-chip CPU controls the configuration of the Akida Neuron Fabric as well as off-chip communication of metadata.
The Akida development environment is available now for early access customers to begin the creation, training, and testing of spiking neural networks targeting the Akida NSoC. The Akida NSoC is expected to begin sampling in Q3 2019.