Tuesday, June 19, 2018

Embedded AI tools for Edge Processing and secure deployment

By Nick Flaherty www.flaherty.co.uk

NXP Semiconductors has launched a set of machine learning (ML) tools for its microcontrollers along with tools for securely updating devices in the field. 

The tools span NXP's portfolio from low-cost microcontrollers (MCUs) through the crossover i.MX RT processors to high-performance application processors. The ML environment lets designers choose the optimum execution engine, from Arm Cortex cores to high-performance GPU/DSP (Graphics Processing Unit/Digital Signal Processor) complexes, and provides tools for deploying machine learning models, including neural networks, on those engines.

Embedded Artificial Intelligence (AI) is quickly becoming an essential capability for edge processing, giving 'smart' devices the ability to become 'aware' of their surroundings and make decisions on the inputs they receive with little or no human intervention.

The ML environment enables applications in vision, voice, and anomaly detection. Vision-based ML applications use cameras as inputs to machine learning algorithms, of which neural networks are the most popular. Voice Activated Devices (VADs) are driving the need for machine learning at the edge for wake-word detection, natural language processing, and 'voice as the user interface' applications. NXP sees ML-based anomaly detection (based on vibration/sound patterns) revolutionising Industry 4.0 by recognising imminent failures and dramatically reducing downtime.
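
As a rough illustration of the vibration-based anomaly detection idea (this is not NXP's implementation; the window length, spectral features and threshold below are assumptions chosen for the sketch), a baseline 'signature' of normal operation can be learned and deviations from it flagged:

```python
import numpy as np

# Sketch only: learn a baseline spectral signature from known-good vibration
# windows, then flag windows whose spectrum drifts too far from it.
# Window length, features and threshold are illustrative assumptions.

def spectral_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one vibration window (e.g. 256 accelerometer samples)."""
    return np.abs(np.fft.rfft(window * np.hanning(len(window))))

def fit_baseline(normal_windows: np.ndarray):
    """Per-bin mean and standard deviation over known-good windows."""
    feats = np.array([spectral_features(w) for w in normal_windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(window: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 threshold: float = 6.0) -> bool:
    """Flag a window whose spectrum deviates more than `threshold` sigma in any bin."""
    z = np.abs(spectral_features(window) - mean) / std
    return bool(z.max() > threshold)

# Example: fit on simulated normal vibration, then test a window with an
# injected periodic impulse (a crude stand-in for a bearing fault).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(100, 256))
mean, std = fit_baseline(normal)
faulty = rng.normal(0.0, 1.0, size=256)
faulty[::8] += 5.0
print(is_anomalous(normal[0], mean, std))  # False: matches the baseline
print(is_anomalous(faulty, mean, std))     # True: strong harmonics appear
```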

The ML environment includes free software that allows customers to import their own trained TensorFlow or Caffe models, convert them to optimized inference engines, and deploy them across NXP's range of scalable processing solutions, from MCUs to highly integrated i.MX and Layerscape processors.
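
NXP's own conversion tooling is not detailed in the article, but the general 'trained model to optimized inference engine' flow can be sketched with the public TensorFlow Lite converter; the tiny Keras model, the quantization choice, and the output file name below are placeholders:

```python
import tensorflow as tf

# Sketch of the import-convert-deploy flow using public TensorFlow Lite APIs;
# this is not NXP's toolchain, and the model is a stand-in for a customer's
# own trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Convert to a compact flatbuffer with post-training quantization, the kind of
# optimization that makes inference practical on MCU-class memory budgets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting binary would then ship with an embedded inference runtime on
# the target device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```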

“When it comes to machine learning in embedded applications, it’s all about balancing cost and the end-user experience. For example, many people are still amazed that they can deploy inference engines with sufficient performance even in our cost-effective MCUs,” said Markus Levy, head of AI technologies at NXP. “At the other end of the spectrum are our high-performance crossover and applications processors that have processing resources for fast inference and training in many of our customers’ applications. As the use-cases for AI expand, we will continue to power that growth with next-generation processors that have dedicated acceleration for machine learning.”

Another critical requirement in bringing AI/ML capability to the edge is easy and secure deployment and upgrading from the cloud to embedded devices. The EdgeScale platform enables secure provisioning and management of IoT and edge devices, delivering an end-to-end continuous development and delivery experience by containerizing AI/ML learning and inference engines in the cloud and securely deploying the containers to edge devices automatically.
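
EdgeScale's own interfaces are not described in the article, so purely as a sketch of the containerization step, the following uses the Docker Python SDK to build and push an image that bundles an inference engine; the build directory, registry, image name and tag are placeholders:

```python
import docker

# Sketch only: build and push a container image holding the inference engine
# and model. A device-management platform (such as the EdgeScale service the
# article describes) would then pull and run it on managed edge devices.
# Paths, registry and tags here are placeholders, not EdgeScale APIs.
client = docker.from_env()

# Build from a directory containing a Dockerfile for the inference container.
image, build_logs = client.images.build(
    path=".",
    tag="registry.example.com/edge/inference-engine:1.0",
)

# Push to a registry so edge devices can pull the container securely.
for line in client.images.push(
    "registry.example.com/edge/inference-engine",
    tag="1.0",
    stream=True,
    decode=True,
):
    print(line)
```
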
Members of the ecosystem include Au-Zone Technologies and Pilot.AI. Au-Zone Technologies provides the industry’s first end-to-end embedded ML toolkit and RunTime inference engine, DeepView, which enables developers to deploy and profile CNNs (convolutional neural networks) across NXP’s entire SoC portfolio, which includes a heterogeneous mixture of Arm Cortex-A and Cortex-M cores and GPUs. Pilot.AI has built a framework to enable a variety of perception tasks - including detection, classification, tracking, and identification - across a range of customer platforms, from microcontrollers to GPUs, along with data collection/annotation tools and pre-trained models to enable drop-in model deployment.
