Changes to the ARMv8 instruction set in upcoming processor cores will boost AI and machine learning performance by up to 50 times, using a more flexible cluster of processor cores.
The DynamIQ cluster technology will allow up to eight completely different cores to be used in a big.LITTLE-style arrangement. The move is aimed at a wide range of applications, including driverless cars and automotive driver assistance systems as well as enterprise servers, and is driven by the need for more AI processing done locally on the device, as demonstrated by the recent Jetson TX2 launch by NVIDIA. I would expect NVIDIA to be one of the major partners working with ARM on this for its own family of custom ARM-based cores.
“DynamIQ is the next stage, complementary to the existing technology, with up to 8 cores in a single cluster to bring a higher level of performance. Every core in this cluster can be a different implementation and a different core, and that brings substantially higher levels of performance and flexibility. Along with this we have an optimised memory subsystem with faster access and power saving features,” he said.
This would allow several small cores and several large cores to operate independently and switch code between the different cores depending on the processing requirements. “For example, 1+3 or 1+7 DynamIQ big.LITTLE configurations with substantially more granular and optimal control are now possible. This boosts innovation in SoCs designed with right-sized compute with heterogeneous processing that deliver meaningful AI performance at the device itself,” he said.
Announcements on partners and cores for DynamIQ are expected later this year with early silicon in 2018.
By Nick Flaherty www.flaherty.co.uk
The rest of the story is at ARM to boost processor performance by 50x with new AI instructions | Electronics EETimes.