
Tuesday, March 21, 2017

ARM takes aim at embedded AI

ARM is taking aim at embedded artificial intelligence with a new architecture.

Changes to the ARMv8 instruction set in coming processor cores will boost the performance of AI and machine learning by up to 50 times, combined with a more flexible cluster of processor cores.
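ARM has not yet detailed the new instructions, but the kind of operation they target is the low-precision multiply-accumulate at the heart of neural network inference. As a rough, hypothetical illustration only (the intrinsic below comes from the later ARMv8.2-A dot-product extension, not from this announcement), an int8 dot product on AArch64 looks like this:

#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* int8 dot product on AArch64: the low-precision multiply-accumulate
 * pattern that ML-focused instruction set additions accelerate.
 * The vdotq_s32 path needs the dot-product extension
 * (compile with e.g. -march=armv8.2-a+dotprod). */
int32_t dot_s8(const int8_t *a, const int8_t *b, size_t n)
{
    size_t i = 0;
    int32_t sum = 0;
#if defined(__ARM_FEATURE_DOTPROD)
    int32x4_t acc = vdupq_n_s32(0);
    for (; i + 16 <= n; i += 16) {
        int8x16_t va = vld1q_s8(a + i);
        int8x16_t vb = vld1q_s8(b + i);
        acc = vdotq_s32(acc, va, vb);   /* 16 multiply-accumulates per instruction */
    }
    sum = vaddvq_s32(acc);              /* horizontal add of the four lanes */
#endif
    for (; i < n; i++)                  /* scalar tail, and fallback without dotprod */
        sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
}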

The DynamIQ cluster technology will allow up to eight completely different cores to be used in a big.LITTLE style. The move is aimed at a wide range of applications, including driverless cars and automotive driver assistance systems as well as enterprise servers. This is driven by the need to have more AI processing locally, as demonstrated by the recent Jetson TX2 launch by NVIDIA. I would expect NVIDIA to be one of the major partners working with ARM on this for its own family of custom ARM-based cores.

“By 2020 we expect to see a lot of artificial intelligence deployed from autonomous driving platforms to mixed reality,” said Nandan Nayampally, General Manager of ARM’s Compute Products Group. “Even with 5G you cannot purely rely on the cloud for machine learning or AI so as performance continues to grow it needs to fit into ever smaller power envelopes.”

Extending cluster technology into embedded designs is at the heart of the ARM strategy for future devices, he said. “We started cluster with the ARM11 4-core cluster ten years ago, and then big.LITTLE was six years ago, and we used the CoreLink SoC [fabric] to scale these into larger systems,” said Nayampally.

“DynamIQ is the next stage, complementary to the existing technology, with up to 8 cores in a single cluster to bring a larger level of performance. Every core in this cluster can be a different implementation and a different core and that brings substantially higher levels of performance and flexibility. Along with this we have an optimised memory subsystem with faster access and power saving features,” he said.

This would allow several small cores and several large cores to operate independently and switch code between the different cores depending on the processing requirements. “For example, 1+3 or 1+7 DynamIQ big.LITTLE configurations with substantially more granular and optimal control are now possible. This boosts innovation in SoCs designed with right-sized compute with heterogeneous processing that deliver meaningful AI performance at the device itself,” he said.
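To make that switching concrete: on a Linux-based device, application code can steer a latency-critical inference thread towards the larger cores using CPU affinity, while the kernel scheduler handles everything else. This is a minimal sketch only; the 4+4 split and the core numbering below are hypothetical and not taken from ARM's announcement.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Pin the calling thread to the (hypothetical) big cores 4-7 of an
 * eight-core big.LITTLE / DynamIQ-style cluster. Real core numbering
 * is platform specific; see /sys/devices/system/cpu/ on the target. */
static int pin_to_big_cores(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &set);
    /* The kernel scheduler still chooses among the allowed cores. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    int err = pin_to_big_cores();
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
    /* ... run the latency-critical inference work here ... */
    return 0;
}

In practice the hardware and the OS scheduler would do most of this automatically; explicit affinity is just one way software can express the “right-sized compute” idea ARM describes.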

Announcements on partners and cores for DynamIQ are expected later this year with early silicon in 2018.

By Nick Flaherty www.flaherty.co.uk

The rest of the story is at ARM to boost processor performance by 50x with new AI instructions | Electronics EETimes.

