Lattice extends sensAI software stack to add AI / ML to edge applications
Lattice Semiconductor has announced new features for power-efficient AI / ML (artificial intelligence / machine learning) inferencing in its Lattice sensAI solution stack.
The latest version (v4.1) of the sensAI solution stack is available now and supports Lattice’s roadmap of AI-based applications. Enhancements and new features include: user presence detection, which automatically powers a client device on or off as the user approaches or departs; attention tracking, which lowers the screen brightness to conserve battery life when the user isn’t looking at the screen; face framing, which improves the video experience in video conferencing applications; and onlooker detection, which recognises when someone is standing behind a device and blurs the screen to maintain data privacy.
The stack also expands application support, with improved performance and accuracy for object and defect detection in automated industrial systems, and adds a new hardware platform for voice- and vision-based ML application development featuring an onboard image sensor, two I2S microphones, and expansion connectors for additional sensors.
An updated neural network compiler supports Lattice sensAI Studio, a GUI-based tool with a library of AI models that can be configured and trained for popular use cases. sensAI Studio now supports AutoML features that create ML models based on application and dataset targets. Several of the models, based on the MobileNet ML inferencing training platform, are optimised for the latest Nexus FPGA family, Lattice CertusPro-NX. The stack is compatible with other ML platforms, including the latest versions of Caffe, Keras, TensorFlow, and TensorFlow Lite.
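The AutoML workflow described above can be illustrated with a minimal, purely hypothetical sketch: a search over candidate model configurations that keeps whichever one scores best on a held-out dataset. None of the function names, configurations, or scores below come from sensAI Studio; they are illustrative assumptions only.

```python
# Hypothetical AutoML-style model search: evaluate several candidate
# configurations and keep the one with the best validation score.
# Names and numbers are illustrative, not sensAI Studio APIs.

def select_best_model(candidates, evaluate):
    """Return the candidate config with the highest validation score."""
    best_config, best_score = None, float("-inf")
    for config in candidates:
        score = evaluate(config)  # e.g. validation accuracy on the target dataset
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Illustrative candidates: MobileNet variants at different width multipliers
candidates = [
    {"arch": "mobilenet", "width": 0.25},
    {"arch": "mobilenet", "width": 0.50},
    {"arch": "mobilenet", "width": 1.00},
]

# Stand-in evaluator; in a real tool this would train and validate each model
scores = {0.25: 0.88, 0.50: 0.91, 1.00: 0.90}
best, score = select_best_model(candidates, lambda c: scores[c["width"]])
```

In practice the trade-off being searched here is accuracy versus model size and power, which is why a narrower MobileNet variant might win for an edge FPGA target.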
Lattice Nexus FPGAs and the sensAI stack can be used to develop computer vision and sensor fusion applications that improve engagement, privacy, and collaboration, meeting the demand for more responsive, context-aware user experiences and high-quality video conferencing and collaboration applications on client compute devices. For example, a client device can use image data from its camera to determine whether someone is standing too close behind the user and blur the screen for privacy, or lengthen battery life by dimming the display when it ‘sees’ that the user’s attention is focused elsewhere.
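The context-aware behaviours described above amount to a small decision policy driven by the vision model’s detections. A minimal sketch, assuming hypothetical boolean detection inputs and action names (none of which are part of the sensAI stack), might look like this:

```python
# Hypothetical context-aware display policy. The three inputs would come
# from on-device vision inferencing (presence, attention, onlooker
# detection); the action names are illustrative assumptions.

def display_policy(user_present, user_attentive, onlooker_detected):
    """Map vision-model detections to a display action."""
    if not user_present:
        return "power_off"    # user presence detection: power down when absent
    if onlooker_detected:
        return "blur_screen"  # onlooker detection: protect data privacy
    if not user_attentive:
        return "dim_display"  # attention tracking: conserve battery life
    return "normal"

action = display_policy(user_present=True,
                        user_attentive=False,
                        onlooker_detected=False)
```

The ordering encodes a plausible priority: privacy (blurring for an onlooker) is applied before the battery-saving dimming step.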
“AI applications based on vision, sound, and other sensors will revolutionise the client computing experience,” believes Matt Dobrodziej, vice president of segment marketing and business development at Lattice. The sensAI solution stack supports a roadmap of edge AI applications that make client devices contextually aware of how, when and where they’re being used, he explains. The Nexus FPGAs deliver that functionality with low power consumption, he adds.
Compute devices running an AI application developed with the sensAI solution stack on a Lattice FPGA have 28 per cent longer battery life than devices running AI applications on their CPUs, Lattice reports. The sensAI solution stack also supports in-field software updates to keep pace with evolving AI algorithms, and gives OEMs the flexibility to choose from different sensor and SoC technologies.
Lattice is working with AI ecosystem partners, such as Mirametrix, to develop the Lattice client compute AI experience roadmap. Mirametrix’s Glance attention-sensing software captures a user’s face, eyes, and gaze to understand user awareness and attention. The technology is used to create smart devices capable of more natural and immersive user experiences and device interaction, said Stephen Morganstein, Mirametrix’s vice president. “Lattice’s sensAI solution stack and low power FPGAs help developers implement novel AI capabilities that can improve a device’s battery life,” he said.