Processor technology company Arm has developed a suite of IP blocks that includes highly scalable co-processors for machine learning (ML) and neural networks (NN).
The blocks are initially intended for integration into applications processors for mobile phones; the first blocks will be available to early adopters in April, with full rollout in the middle of the year.
The blocks' scalability allows Arm to move up and down the performance curve, from sensors and smart speakers to mobile and smart home devices as well as IoT edge devices.
The Arm ML processor is built from the ground up specifically for ML, delivering over 4.6 trillion operations per second (TOPs), with a further 2x-4x boost in effective throughput in real-world uses through intelligent data management. Power efficiency is 3 TOPs/W, within a 1 to 2 W power envelope for the chips.
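As a quick sanity check on those figures: 4.6 TOPs at the quoted 3 TOPs/W works out to roughly 1.5 W, which sits comfortably inside the stated 1 to 2 W envelope.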
The family also includes the Arm Object Detection (OD) processor, a second-generation design; the first-generation computer vision processor is already deployed in Hive security cameras. The OD processor can detect objects from 50x60 pixels upwards, process Full HD at 60 frames per second in real time, and handle an almost unlimited number of objects per frame.
“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium,” said Rene Haas, president, IP Products Group, Arm. “New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push the boundaries of what will be possible across a broad range of devices.”
Arm NN software, when used alongside the Arm Compute Library and CMSIS-NN, is optimized for NNs and bridges the gap between NN frameworks such as TensorFlow, Caffe, and Android NN and the full range of Arm Cortex CPUs, Arm Mali GPUs, and ML processors.
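As a rough illustration of how that bridging looks from the developer's side, the sketch below loads a frozen TensorFlow model through Arm NN's C++ API, optimizes it for whichever backends are present, and runs a single inference. This is a minimal sketch only: the model file, tensor names, shapes, and backend list are illustrative assumptions, and exact headers and signatures vary between Arm NN releases.

```cpp
// Illustrative Arm NN sketch: parse a TensorFlow graph, optimize it for the
// available backends, and run one inference. Names/signatures may differ
// between Arm NN releases; "model.pb", "input", and "output" are placeholders.
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>
#include <vector>

int main()
{
    // Parse the frozen TensorFlow model into an Arm NN network.
    armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input", armnn::TensorShape({ 1, 224, 224, 3 }) } },
        { "output" });

    // Create the runtime and optimize for the preferred backends,
    // falling back from the accelerated CPU backend to the reference backend.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *network,
        { armnn::Compute::CpuAcc, armnn::Compute::CpuRef },
        runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Bind input/output buffers and run a single inference.
    auto inputBinding  = parser->GetNetworkInputBindingInfo("input");
    auto outputBinding = parser->GetNetworkOutputBindingInfo("output");
    std::vector<float> inputData(1 * 224 * 224 * 3, 0.0f);
    std::vector<float> outputData(1000);

    armnn::InputTensors inputs {
        { inputBinding.first,
          armnn::ConstTensor(inputBinding.second, inputData.data()) } };
    armnn::OutputTensors outputs {
        { outputBinding.first,
          armnn::Tensor(outputBinding.second, outputData.data()) } };

    runtime->EnqueueWorkload(netId, inputs, outputs);
    return 0;
}
```

The point of the backend preference list is that the same application code can target a Cortex CPU today and, in principle, an ML processor or Mali GPU backend later, which is the portability Arm NN is promising here.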