IP supercharges heterogeneous SoC designs
The Gemini 3.0 cache-coherent network-on-chip (NoC) IP from NetSpeed Systems maximises the performance of heterogeneous multi-core SoC designs for cloud computing, automotive, mobile and IoT applications. It can be used for ADAS and other applications that process sensory input such as sound, video, graphics or GPS data for computer vision, facial recognition, voice recognition and other machine learning-based capabilities.
To maximise the performance of these heterogeneous designs, SoCs need a robust on-chip network that optimises communication among components sharing memory and other critical resources.
Gemini 3.0 is a next-generation SoC interconnect platform that enables SoC architects to implement designs that can achieve more than 10x greater performance in a “reasonable power envelope”.
It enables system architects to perform modelling and simulation early in development, before integration begins. Its machine learning capabilities explore models and architecture options to give system-level performance predictions.
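The early design-space exploration described above can be sketched in miniature. The following toy example (all parameters, ranges and the cost model are hypothetical illustrations, not NetSpeed's actual tooling) enumerates candidate interconnect configurations and ranks them with a simple analytical performance-per-power estimate:

```python
from itertools import product

# Hypothetical, purely illustrative design-space exploration: enumerate
# candidate NoC configurations and rank them with a toy analytical model.
# None of these parameters or formulas come from NetSpeed's tools.

def predicted_score(mesh_x, mesh_y, link_width_bits, llc_kb):
    """Toy model: bandwidth scales with mesh size and link width,
    latency with hop count, power with total resources."""
    hops = (mesh_x - 1) + (mesh_y - 1)             # worst-case hop count
    bandwidth = mesh_x * mesh_y * link_width_bits  # aggregate link capacity
    power = mesh_x * mesh_y * link_width_bits * 0.01 + llc_kb * 0.005
    latency = hops * 2 + 4                         # cycles, toy figure
    return bandwidth / (latency * power)           # higher is better

candidates = product([2, 4, 8], [2, 4, 8], [128, 256, 512], [512, 1024])
best = max(candidates, key=lambda c: predicted_score(*c))
print("best (mesh_x, mesh_y, link_bits, llc_kb):", best)
```

A real flow would replace the toy formulas with trained performance models and simulation traces, but the shape of the search is the same: score many architecture options cheaply before committing to integration.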
According to the company, it is the only SoC interconnect that uses machine learning to model the system as a whole. Conventional approaches tend to optimise individual sub-systems in isolation, which can result in bottlenecks and systems that are overdesigned to handle worst-case conditions. Gemini 3.0 instead uses advanced networking algorithms to rapidly create a cache-coherent, deadlock-free SoC interconnect. It also offers OEMs an easy and cost-effective way to assemble robust heterogeneous SoCs that deliver the performance necessary for rich and complex applications, says the company.
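Deadlock freedom in an on-chip network is usually obtained by restricting the routes packets may take so that no cyclic channel dependencies can form. A textbook illustration of this idea (not NetSpeed's proprietary algorithm) is XY dimension-order routing on a 2D mesh, where every packet resolves its X coordinate before its Y coordinate:

```python
# Textbook illustration of deadlock-free NoC routing (not NetSpeed's
# proprietary algorithm): XY dimension-order routing on a 2D mesh.
# Routing fully in X before Y removes cyclic channel dependencies,
# a classic sufficient condition for deadlock freedom.

def xy_route(src, dst):
    """Return the hop-by-hop path from src to dst on a 2D mesh,
    moving along the X dimension first, then Y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                       # resolve X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                       # then resolve Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# → [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Because no packet ever turns from Y back into X, the channel dependency graph is acyclic, so packets cannot wait on each other in a cycle. Production interconnect generators apply far more sophisticated analyses, but the underlying goal is the same.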
Configurability allows users to customise every component of the interconnect, from IP interfaces to routers, topology and interface links. Gemini 3.0 supports both the ARM AMBA 5 CHI (Coherent Hub Interface) and ARM AMBA 4 AXI Coherency Extensions (ACE) on-chip interconnect standards in a single design. It also includes support for broadcast and multicast.
It supports up to 64 cache-coherent CPU clusters, GPU blocks and other coherent compute blocks, and up to 200 I/O-coherent and non-coherent agents. It can also handle cache-coherent, I/O-coherent and non-coherent traffic in a single SoC interconnect design.
System-level optimisations include integrated DMA, on-chip RAM and last-level cache (LLC) IPs with runtime configurability.