IBM prototype SoC runs and trains AI models
IBM has released details of its first AI chip for deep learning, which can run and train deep learning models faster than a general-purpose CPU. IBM Research’s Artificial Intelligence Unit (AIU) is an application-specific integrated circuit (ASIC) that can be programmed to run any type of deep learning task, including processing spoken language or words and images on a screen.
The flexibility and high precision of CPUs suit them well to general-purpose software, but put them at a disadvantage when training and running deep learning models, which demand massively parallel AI operations and predictions drawn from statistical patterns in large data sets, IBM explained.
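To see why such workloads favour parallel hardware, consider that the core of a deep learning model is repeated large matrix multiplication, in which every output element is an independent dot product. The sketch below is purely illustrative; the layer sizes are arbitrary and nothing here reflects the AIU's actual architecture.

```python
import numpy as np

# Illustrative only: one dense layer of a deep learning model reduces to a
# single large matrix multiplication. Sizes are arbitrary stand-ins.
batch, d_in, d_out = 256, 1024, 4096
activations = np.random.randn(batch, d_in).astype(np.float32)
weights = np.random.randn(d_in, d_out).astype(np.float32)

# Each of the batch * d_out output values is an independent dot product, so
# they can in principle all be computed in parallel on AI-oriented hardware,
# whereas a CPU works through far fewer of them at a time.
outputs = activations @ weights
print(outputs.shape)  # (256, 4096)
```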
The prototype AIU features 32 processing cores and 23 billion transistors, yet is designed to be as easy to use as a graphics card: it can be plugged into any computer or server with a PCIe slot.
The chip uses IBM’s approximate computing technique to reduce the processing power needed to train and run an AI model, which the company says does not come at the cost of accuracy. The AIU uses smaller bit formats, cutting the memory traffic and compute required to run an AI model. It is also designed to send data directly from one compute engine to the next, which is claimed to result in “enormous energy savings.”
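The following is a minimal sketch of the "smaller bit formats" idea in general terms, not of the AIU's own number formats, which the article does not specify; 16-bit floats are used here only as a stand-in for a narrower format.

```python
import numpy as np

# Hypothetical illustration: casting weights from 32-bit to 16-bit floats
# halves the data that must be stored and moved per layer, at the cost of a
# small approximation error. The AIU's actual low-precision formats are not
# detailed in the article.
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes
print(weights_fp16.nbytes)  # 2097152 bytes -- half the memory footprint

# The rounding error introduced by the narrower format is small relative to
# typical weight magnitudes, which is the intuition behind the claim that
# approximate computing need not cost accuracy.
print(np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32))))
```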