AI inference system targets visual inspections
Edge AI provider Aetina has developed the AIE-CP1A-A1, a small embedded computer designed for computer vision tasks such as object detection, human motion detection and automated inspection.
It is powered by the programmable Blaize Pathfinder P1600 embedded system-on-module (SoM), which is suitable for challenging environments and for embedded systems that need to operate across extended temperature ranges. The module features dual Arm Cortex-A53 processors and 16 Blaize GSP (Graph Streaming Processor) cores, delivering 16 TOPS of AI performance. It also offers low memory bandwidth requirements and low latency, said Aetina, making it capable of accelerating neural network (NN) deployment and deep learning (DL) processing.
The Pathfinder P1600 is based on Blaize’s Graph Streaming Processor (GSP) architecture, which enables energy-efficient processing for AI inferencing workloads at the edge, and plugs into a custom carrier board. It can run autonomously or be integrated within a larger system.
The AIE-CP1A-A1 is a compact, fanless design with multiple I/O ports, suiting both commercial and industrial-grade AI-powered systems that run vision-related inference. The Blaize Pathfinder also supports H.264/H.265 video encoding and decoding, making it compatible with a range of sensors for fast image recognition and video analytics tasks.
Aetina offers customised carrier boards and enclosures for system integration, as well as a chip-down design service for other kinds of embedded computing models, such as single-board computers (SBCs) built around the Blaize Pathfinder, in addition to its regular box PCs. The company also helps developers convert AI inference models into a format that can run correctly on edge devices built with ASIC chips.
“We are now expanding our edge computing product line to bring more GPU and ASIC-based solutions to help developers build their AI systems in various verticals and industries,” said Jackal Chen, senior product manager at Aetina. “The AIE-CP1A-A1 model . . . is our first ASIC-based system. In the future, by integrating different types of chips and modules, we can offer heterogeneous computing devices that are suitable for different AI models.”
“Keeping AI processing and inferencing workloads at the edge, rather than sending data to the cloud, is critical for cost-effective and almost latency-free AI applications,” commented Dinakar Munagala, CEO and co-founder of Blaize.
Aetina is developing more ASIC-based AI accelerators in the M.2 and EDSFF form factors to help developers speed up inference of their AI models at the edge.