AI accelerator chips use scalable chiplet architecture
Blue Ocean Smart System has announced SoCs that accelerate artificial intelligence (AI) neural network inference and training. The SoCs use FlexNoC interconnect IP and the accompanying AI Package from Arteris.
Blue Ocean’s chiplet architecture is scalable and system-configurable, supporting computing power requirements ranging from cloud to edge computing. The high-performance, cost-effective SoCs are claimed to reduce development time through reusable and scalable building blocks.
The AI chips are based on the Arteris FlexNoC network-on-chip (NoC) interconnect, which optimises write-broadcast dataflow to accelerate neural network inference and training.
The Arteris interconnect IP is used to construct complex, high-frequency, high-bandwidth on-chip interconnects that are back-end friendly for easier timing closure, said Blue Ocean. “The addition of the AI Package allows us to finely tune our chip architecture using multi-cast write semantics which greatly reduce off-chip memory accesses while using little die area and consuming much less power,” said Blue Ocean Smart System’s president, John Rowland. “Our use of Arteris FlexNoC and the AI Package has been key to turning our architectural dreams into system-on-chip reality,” he added.
The company worked with the Arteris IP team to optimise the on-chip dataflow for the AI chip architecture.
Arteris says that the choice confirms the technology’s ability to tackle the high speed and on-chip bandwidth demands of AI and machine learning SoCs.
“Arteris IP is the only IP company 100% focused on creating unique on-chip interconnect technologies that accelerate the development and performance of complex AI SoC architectures,” said K. Charles Janac, president and CEO of Arteris IP.
Blue Ocean Smart System is a China-based AI SoC semiconductor start-up. Its scalable chiplet architecture enables ASIC platforms from edge computing to cloud computing. It has three R&D centres, in Nanjing, Shanghai and Taiwan.
Arteris IP provides network-on-chip (NoC) interconnect IP to accelerate SoC semiconductor assembly for a wide range of applications, from AI to automobiles, mobile phones, IoT, cameras, SSD controllers and servers, for customers such as Baidu, Mobileye, Samsung, Huawei/HiSilicon, Toshiba and NXP.
Its product portfolio includes the Ncore cache coherent and FlexNoC non-coherent interconnect IP, the CodaCache standalone last level cache, and optional Resilience Package (ISO 26262 functional safety), FlexNoC AI Package, and Piano automated timing closure capabilities. Customer results obtained by using Arteris IP products include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs, says the company.