16Gbyte SoM with edge AI can be used in handheld devices, says Nvidia
The Jetson Orin NX 16Gbyte module is now available for purchase from Nvidia. The system on module (SoM) has a small form factor, making it suitable for low-power robots, embedded applications and autonomous machines. It can be used in products such as drones and handheld devices, according to Nvidia.
Target applications are in manufacturing, logistics, retail, agriculture, healthcare, and life sciences.
The Jetson Orin NX is the smallest Jetson form factor, delivering up to 100 TOPS of AI performance with power configurable between 10 and 25W. It gives developers three times the performance of the Nvidia Jetson AGX Xavier and five times the performance of the Nvidia Jetson Xavier NX.
The SoM supports multiple AI application pipelines with Nvidia Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed I/O and fast memory bandwidth. It can be used for developing systems with large and complex AI models in natural language understanding, 3D perception and multi-sensor fusion.
To showcase the leap in performance achievable by the SoM, Nvidia ran computer vision benchmarks using Nvidia JetPack 5.1. Testing included dense INT8 and FP16 pre-trained models from NGC, Nvidia's portal of enterprise services, software, management tools, and support for end-to-end AI and digital twin workflows. The same models were also run on Jetson Xavier NX for comparison.
The benchmarks for people detection, licence plate recognition, object detection and labelling and multi-person human pose estimation showed that the Jetson Orin NX delivered a 2.1 times performance increase compared to Jetson Xavier NX. With future software optimisations, this is expected to approach 3.1 times for dense benchmarks. Other Jetson devices have increased performance by 1.5 times since the first supporting software release. Similar results are anticipated for the Jetson Orin NX 16Gbyte.
Jetson Orin NX also brings support for sparsity, which will enable even greater performance. With sparsity, developers can take advantage of the fine-grained structured sparsity in deep learning networks to increase the throughput for Tensor Core operations.
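Fine-grained structured sparsity on the Ampere architecture follows a 2:4 pattern: in every group of four consecutive weights, at most two may be non-zero, which is what lets the Tensor Cores skip the zeroed multiplies. As a rough illustration only (not Nvidia's implementation, which lives in its pruning and TensorRT tooling), a minimal NumPy sketch of magnitude-based 2:4 pruning might look like this:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Illustrative 2:4 structured-sparsity pruning: in every contiguous
    group of 4 weights (along the flattened last axis), zero the 2
    smallest-magnitude entries. Total size must be divisible by 4."""
    groups = weights.reshape(-1, 4)
    # Indices of the 2 smallest-magnitude weights in each group of 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    pruned = groups.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

w = np.array([[0.9, -0.1, 0.05, -0.8],
              [0.2,  0.3, -0.7,  0.01]])
print(prune_2_4(w))
# Each group of 4 retains only its 2 largest-magnitude weights,
# e.g. the first row becomes [0.9, 0.0, 0.0, -0.8]
```

In practice a network pruned this way is typically fine-tuned afterwards to recover accuracy before the sparse weights are deployed.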
All Jetson Orin modules run the world-standard Nvidia AI software stack. Nvidia JetPack 5.1 supports the Orin NX 16Gbyte and the latest CUDA-X stack on Jetson Orin.
The Jetson partner ecosystem supports a broad range of carrier boards and peripherals for the Jetson Orin NX 16Gbyte module, such as sensors, cameras, and connectivity modules (5G, 4G, Wi-Fi).