Intel aims to dominate embedded AI from the network edge to the data centre
The Myriad X vision processor developed by its Movidius subsidiary is the world’s first system-on-chip (SoC) to ship with a dedicated Neural Compute Engine for accelerating deep learning inference at the edge. The Neural Compute Engine is an on-chip hardware block designed specifically to run deep neural networks at high speed and low power without compromising accuracy, enabling devices to see, understand and respond to their environments in real time.
The chip is capable of 1 TOPS (one trillion operations per second) of compute performance on deep neural network inference, as part of a total of 4 TOPS of vision processing, all within a 1.5 W power footprint for edge applications.
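Those headline figures imply a notable compute density. A quick back-of-the-envelope check (assuming the quoted 4 TOPS and 1.5 W are sustained rather than peak values) puts the chip at roughly 2.7 TOPS per watt:

```python
# Back-of-the-envelope efficiency from the quoted figures.
# Assumes the 4 TOPS total and 1.5 W numbers are sustained, not peak.
dnn_tops = 1.0      # dedicated Neural Compute Engine throughput
total_tops = 4.0    # total vision + DNN compute
power_w = 1.5       # stated power footprint

tops_per_watt = total_tops / power_w
print(f"{tops_per_watt:.2f} TOPS/W")  # → 2.67 TOPS/W
```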
“We’re on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day,” said Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group. “Enabling devices with humanlike visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much AI and vision compute power as possible, all within the unique energy and thermal constraints of modern untethered devices.”
In addition to its Neural Compute Engine, Myriad X combines imaging, visual processing and deep learning inference in real time with 16 programmable 128-bit VLIW vector processors that run multiple imaging and vision application pipelines simultaneously.
The 16 nm chip supports 16 configurable MIPI lanes that connect up to eight HD-resolution RGB cameras directly to Myriad X, supporting up to 700 million pixels per second of image signal processing throughput.
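To put that 700 Mpixel/s figure in context, a rough sketch (assuming 1080p streams and an even split of the ISP budget across all eight cameras) gives the per-camera frame rate the pipeline could sustain:

```python
# Rough per-camera frame-rate budget from the quoted ISP throughput.
# Assumes 1920x1080 RGB streams and an even split across cameras.
isp_throughput = 700e6          # pixels per second, quoted figure
cameras = 8
pixels_per_frame = 1920 * 1080  # ~2.07 Mpixels per 1080p frame

fps_per_camera = isp_throughput / (cameras * pixels_per_frame)
print(f"~{fps_per_camera:.0f} fps per camera")  # → ~42 fps per camera
```

In practice the achievable rate also depends on the ISP pipeline stages enabled, but the arithmetic shows the quoted throughput is consistent with eight full-rate HD streams.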
There are also 20 hardware accelerators to perform tasks such as optical flow and stereo depth without introducing additional compute overhead.
Related stories:
- Another startup aims for embedded AI
- ARM takes aim at embedded AI
- NVIDIA pushes artificial intelligence into embedded designs with Jetson TX2
- Edge analytics vital for security says Greenwave
- Xilinx pushes machine learning and AI to the edge for embedded applications
- Startup aims to bring artificial intelligence to IoT nodes