All the latest embedded articles

See the latest embedded stories from eeNews Europe

Monday, October 29, 2018

Cypress integrates Pelion IoT platform into PSoC6

By Nick Flaherty

Cypress Semiconductor has expanded its collaboration with Arm to integrate the Arm Pelion IoT Platform with Cypress’ ultra-low power, dual-core PSoC 6 microcontrollers (MCUs) and CYW4343W Wi-Fi and Bluetooth combo radios for robust wireless connectivity. PSoC 6 provides Armv7-M hardware-based security that adheres to the highest level of device protection defined by the Arm Platform Security Architecture (PSA).

Cypress and Arm are demonstrating hardware-secured onboarding and communication through the integration of the dual-core PSoC 6 MCU and Pelion IoT Platform. The PSoC 6 MCU is running Arm’s PSA-defined Secure Partition Manager, which will be supported in version 5.11 of the open-source Arm Mbed OS embedded operating system, available this December. Developers can leverage the private key storage and hardware-accelerated cryptography in the PSoC 6 MCU for cryptographically-secured lifecycle management functions, such as over-the-air firmware updates, mutual authentication, and device attestation and revocation.
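
As a rough illustration of the mutual authentication idea, a challenge-response exchange can be sketched with standard-library HMAC. This is a deliberately simplified sketch, not the actual Cypress/Pelion protocol, which relies on hardware key storage and certificate-based attestation rather than a pre-shared key:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Server side: generate a random nonce to send to the device."""
    return os.urandom(16)

def device_response(shared_key: bytes, challenge: bytes) -> bytes:
    """Device side: prove possession of the key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Hypothetical provisioned secret; on a PSoC 6 this would sit in
# hardware-protected key storage, never in application memory.
key = b"device-provisioned-secret"
nonce = issue_challenge()
resp = device_response(key, nonce)
assert verify(key, nonce, resp)
assert not verify(key, nonce, b"\x00" * 32)
```

In a real deployment each side would also authenticate the other (mutual, not one-way) and bind the exchange to a session key; the sketch only shows the possession-proof core.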

“Secure device management is critical for the IoT to scale, and OEMs require solutions that help them to easily manage devices throughout their lifecycles,” said Hima Mukkamala, senior vice president and general manager, IoT Cloud Services at Arm. “By partnering with companies such as Cypress, we are enabling a more secure environment from device-to-data.”

“Cypress is making a strategic push to integrate security into our compute, connect and store portfolio for the IoT,” said Sudhir Gopalswamy, Executive Vice President of Cypress’ Microcontrollers and Connectivity Division. “Our continued collaboration with Arm is focused on delivering secure, easy-to-use solutions and is an important part of our strategy to enable IoT designers to quickly develop, deploy and manage secure IoT edge nodes.”

The PSoC 6 architecture is built on ultra-low-power 40nm process technology, and the MCUs feature low-power design techniques to extend battery life up to a full week for wearables. The dual-core Arm Cortex-M4 and Cortex-M0+ architecture lets designers optimise for power and performance simultaneously. Using its dual cores combined with configurable memory and peripheral protection units, the PSoC 6 MCU delivers the highest level of protection defined by the Platform Security Architecture (PSA) from Arm. 

Designers can use the MCU’s software-defined peripherals to create custom analogue front-ends (AFEs) or digital interfaces for system components such as electronic-ink displays. The PSoC 6 links to the CapSense capacitive-sensing technology, enabling modern touch and gesture-based interfaces that are robust and reliable. 

Saturday, October 27, 2018

Embedded Studio 4.10 for ARM reduces binary size

By Nick Flaherty

SEGGER has added a new linker and Link-Time Optimization (LTO) to the latest release of its cross-platform integrated development environments, Embedded Studio for ARM and Embedded Studio for Cortex-M.

The new version delivers a 5-12% reduction in binary size over the previous version on typical applications, and even higher gains compared to conventional GCC tool chains. These savings are the result of the new LTO combined with SEGGER’s linker and run-time library emLib-C. Through LTO, the entire application can be optimised as a whole, opening the door to optimisation opportunities that are simply not available to a compiler working on one translation unit at a time.

A smaller executable gets the same job done with less program memory (flash), making it possible to use smaller microcontrollers, with potential cost savings.

The Linker adds features such as compression of initialised data and deduplication, as well as the flexibility of dealing with fragmented memory maps that embedded developers have to cope with. Like all SEGGER software, it is written from scratch for use in deeply embedded computing systems. Additionally, the size required by the included runtime library is significantly lower than that of runtime libraries used by most GCC tool chains.

“Our engineers have done an outstanding job! This new release of Embedded Studio for ARM and Cortex-M devices allows flash size savings on a scale I never thought possible," says Dirk Akemann, Marketing Manager at SEGGER Microcontroller. "Embedded Studio is becoming more and more popular, and we are proud to support the educational community by having Embedded Studio available free of charge for non-commercial use.”


Thursday, October 25, 2018

Researchers identify 57 categories of cyber attack

By Nick Flaherty

Researchers at the University of Kent's School of Computing and the Department of Computer Science at the University of Oxford set out to define and codify the different ways in which cyber-attacks can cause harm, identifying 57 different types across five key themes (shown below).

"It's been well understood that cyber-attacks can have numerous negative impacts. However, this is the first time there has been a detailed investigation into what these impacts are, how varied they can be, and how they can propagate over time," said Dr Jason R.C. Nurse from the Kent School of Computing. "This base figure of 57 underlines how damaging cyber-incidents can be and we hope it can help to better understand how a business, individual or even nation is affected by a cyber-attack. This is going to be even more relevant as everything and everyone becomes connected and the Internet of Things is fully realised."

The five themes are:
  • Physical/Digital
  • Economic
  • Psychological
  • Reputational
  • Social/societal
For embedded designers, the physical and digital attacks are of course key, but these may also have economic impacts and may come as a result of psychological or social attacks.

Each category contains specific outcomes that underline the serious impact cyber-attacks can have. For example, the Physical/Digital category includes loss of life or damage to infrastructure, while the Economic category lists impacts such as a fall in stock price, regulatory fines or reduced profits.

This detailed breakdown of the many different ways a cyber-attack can affect a business and third parties gives engineers, board members and other senior staff a better understanding of both the direct and indirect harms from cyber-attacks.

Renesas to launch third generation microcontrollers in RX600 and RX700

By Nick Flaherty

Renesas Electronics has launched its third-generation 32bit RX CPU core, the RXv3. 
The five-stage superscalar CISC core will be used in the RX600 and RX700 families, which will begin rolling out at the end of 2018, aimed at the real-time performance and enhanced stability required by motor control and industrial applications in next-generation smart factory, smart home and smart infrastructure equipment.

The RX core consolidated features from the Hitachi SH, Mitsubishi M16C and previous Renesas devices, and the RXv3 core boosts the five-stage superscalar architecture to up to 5.8 CoreMark/MHz, up from a peak of 4.55 in RXv2. It also adds a register bank save function and an optional double-precision floating-point unit, while remaining binary compatible with the RXv2 and RXv1 CPU cores to preserve the existing code base. Using a CISC core tends to give better code density.

“The cutting-edge RXv3 core technology targets a wide range of embedded applications in the industrial IoT era where ever increasing system complexity places higher demands on performance and power efficiency,” said Daryl Khoo, Vice President Product Marketing, IoT Platform Business Division at Renesas.

The RXv3 core will enable the first RX600 MCUs to achieve 44.8 CoreMark/mA, with an energy-saving cache design that reduces both access time and power consumption during on-chip flash memory reads, such as instruction fetches.

The RXv3 core also achieves significantly faster interrupt response times with a new option for single-cycle register saves. Using dedicated instructions and a save register bank with up to 256 banks, designers can minimise the interrupt handling overhead required for embedded systems in real-time applications such as motor control. This makes the RTOS context switch time up to 20 percent faster.

The double-precision FPU (DP-FPU) reduces the effort of porting high-precision control models developed with model-based development (MBD) to the MCU. Like the RXv2 core, the RXv3 core performs DSP/FPU operations and memory accesses simultaneously to substantially boost signal processing capabilities.

Wednesday, October 24, 2018

Intel sees edge challenges in daily petabytes of data

By Nick Flaherty

In 2015, there were 15.41 billion connected Internet of Things (IoT) devices around the world. By 2020, just two years from now, that number will nearly double to 30.73 billion. Manufacturing, healthcare, and insurance are the top three industries that have the most to gain from IoT, generating a petabyte of data every single day.

Dealing with the data from these devices has become one of IT’s and IoT’s biggest challenges, says Intel. In manufacturing, for example, industrial IoT alone will generate a petabyte of data per day by 2020, including a new, highly valuable data type: video.

Video - as in vision technology - is widely considered the eye of the IoT. Across transportation, public services, retail, industrial manufacturing, healthcare and more, tens of millions of connected video devices will generate massive amounts of data—a single raw 4K UHD frame is 8 MB. With so many cameras generating streams of 30 frames per second (fps) or more, it adds up quickly, even with today’s high-efficiency codecs. Cisco predicts that by 2021, video will be 82 percent of all IP traffic.
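
A quick back-of-envelope check shows why those figures add up so fast. Using the article's numbers (an 8 MB raw 4K UHD frame at 30 fps) for a single uncompressed camera:

```python
# Back-of-envelope data rate for one raw 4K UHD camera, using the
# figures quoted above (8 MB per frame, 30 frames per second).
FRAME_MB = 8
FPS = 30
SECONDS_PER_DAY = 24 * 60 * 60

per_second_mb = FRAME_MB * FPS                             # raw MB/s
per_day_tb = per_second_mb * SECONDS_PER_DAY / 1_000_000   # MB -> TB

print(per_second_mb)          # 240 MB/s per camera, before compression
print(round(per_day_tb, 1))   # 20.7 TB/day for a single camera
```

Even with codecs compressing by two orders of magnitude, a fleet of such cameras quickly reaches the petabyte-per-day scale Intel describes.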

IoT applications and architectures are on the front lines to deal with the challenges from increasing data variety, volume, and velocity. The challenges include latency that impacts the availability of data, security of information, and cost to manage and move the data, and Intel is pushing its Xeon D processors for edge designs.

When data has to be analysed and responded to in real-time, any delay is a formula for failure. Even with data travelling on the fastest networks, massive amounts converging on a local network and then a backbone can still take many seconds to reach a data centre thousands of miles away, be analysed, and the response returned to the recipient. And, even with traffic prioritisation, volume and distance to destination can delay critical information. When the analytical response to that data involves human safety or precision machinery, a delay could be the difference between success and disaster, and so many architectures choose to keep the time-critical analytics close to the source.
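
The physics alone sets a floor on cloud round trips. As a rough sketch (the distance and fibre speed here are illustrative assumptions, not figures from the article), light in optical fibre travels at roughly two-thirds of c:

```python
# Rough propagation-delay floor for a cloud round trip. Assumes light
# in optical fibre travels at ~200,000 km/s; the 5,000 km distance is
# an illustrative assumption for an edge site to a distant data centre.
FIBRE_SPEED_KM_S = 200_000
distance_km = 5_000

one_way_ms = distance_km / FIBRE_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms
print(round_trip_ms)  # 50.0 ms of pure propagation delay
```

Propagation is only the floor: queuing, congestion and the analysis itself add the seconds the article describes, which is why time-critical analytics tend to stay at the edge.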

Some industries have strict regulatory requirements, and companies that generate highly sensitive intellectual property (IP) or operational data must secure and protect it; exposure or theft of that data means a violation of personal information protection rules or an unacceptable business risk. Cloud-based IoT solutions do not make sense for these companies; they need an in-house solution—with designed-in security and protection—where their business and operations execute, even while handling the massive amounts of data that might be generated locally.

5G mobile deployments will increase mobile network speeds, but, when every byte has a value tied to it, the cost of transporting large amounts of video across metered networks is prohibitive.

With issues like these, analysts predict that 45 percent of generated data will be processed, stored, and acted on at the edge by the end of next year.

The availability of advanced processing capabilities designed for the edge enables data to be handled locally instead of in the cloud, or before being sent to the cloud. These latest-generation technologies are provided for high-performance inferencing, analytics, general purpose compute, and Artificial Intelligence/Deep Learning (AI/DL) at the edge for the unique use cases presented to IoT solution and application architects.

Designing an edge compute solution for emerging applications is driven in large part by the type of data, the size of the data, the volume of data to be processed, and how fast it needs to be analysed. When it comes to combining inferencing, general compute, storing and securing of data, and analytics at the edge, Intel highlights its Xeon processors that offer two to twenty cores to match the performance needs of edge systems, with extensibility to eight processors on a platform.

These CPUs perform well for AI/DL applications and are optimised to handle new emerging use cases involving video analytics, pattern recognition, predicting outputs, and operational efficiencies. Built-in security technologies enable developers to easily design in security from the architectural stage down to implementation, using advanced encryption and compression acceleration, platform and boot integrity, and a host of other security capabilities built into the processor silicon.

It also points to the OpenVINO toolkit to fast-track the development of computer vision and deep learning inference in vision applications. Intel offers a broad range of vision products and software tools to help OEMs, ODMs, ISVs and system integrators scale vision technology across infrastructure, matching specific needs with the right performance, cost, and power efficiency at every point in an artificial intelligence (AI) architecture.

The toolkit supports everything from algorithm development to platform optimisation using industry-standard APIs, frameworks, and libraries. OpenVINO allows developers to take networks created in common frameworks, such as Caffe, TensorFlow, and MXNet, and optimise them for heterogeneous hardware engines.

Tuesday, October 23, 2018

Flash Translation Layer allows NAND memory to be used for deterministic operation in embedded designs

By Nick Flaherty

HCC Embedded in Budapest has extended its existing flash translation layer (FTL) solution for NAND with the addition of deterministic execution control. 

Engineers integrating NAND flash into safety-based systems in automotive, aerospace, and industrial applications can use HCC’s SafeFTL to ensure stable and predictable operation of the NAND flash. The deterministic SafeFTL has been fully verified both in simulated environments and on real NAND flash arrays.

Traditionally, NOR flash has been the dominant memory in highly reliable systems, but more recently engineers are integrating NAND flash into safety systems where information must be predictably available. An FTL manages an array of NAND flash to create a logical interface that software can use. This includes wear leveling, bad block handling, and the many other subtleties of managing NAND flash. However, existing FTLs all stall at some point for a variable period of time, particularly when placed under heavy load.

Safety-critical systems demand a different approach that ensures stability and predictability above all else. For these systems, where accurate time division is critical to the delivery of safety, engineers can use HCC’s deterministic SafeFTL to integrate arrays of NAND flash without disturbing the predictability of the system. 

The deterministic FTL builds on HCC’s SafeFTL by enabling the host or safety system to know how long operations will take and respond by either scheduling tasks appropriately or executing them in multiple steps. The host system gets the length of time a flash operation will take from the FTL and can schedule an appropriate time slot, or can spread complex operations over multiple time slots, while leaving the NAND flash accessible to other tasks.
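
The scheduling idea can be sketched as follows. This is an illustrative sketch, not HCC's API: the operation durations and slot length are hypothetical figures, but the pattern — query the worst-case duration, then reserve one slot or split the operation across several — is the one described above:

```python
# Hypothetical sketch of deterministic FTL scheduling: the host asks
# the FTL how long an operation will take, then either runs it in one
# real-time slot or splits it across several slots.
SLOT_US = 500  # hypothetical length of one scheduling slot, microseconds

class DeterministicFTL:
    # Hypothetical worst-case durations in microseconds; a real FTL
    # would report these per operation on the actual NAND array.
    DURATION_US = {"read": 120, "write": 900, "erase": 3500}

    def duration_of(self, op: str) -> int:
        return self.DURATION_US[op]

def slots_needed(ftl: DeterministicFTL, op: str) -> int:
    """How many time slots the host must reserve for `op`.

    Long operations are split into steps, one per slot, leaving the
    flash available to other tasks between steps.
    """
    return -(-ftl.duration_of(op) // SLOT_US)  # ceiling division

ftl = DeterministicFTL()
print(slots_needed(ftl, "read"))   # 1 slot
print(slots_needed(ftl, "write"))  # 2 slots
print(slots_needed(ftl, "erase"))  # 7 slots (3500 us / 500 us per slot)
```

The point of determinism is that `duration_of` returns a guaranteed bound rather than a typical value, so the reservation never under-runs under load.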

“HCC has spent much of its history developing a deep understanding of flash storage technology,” said HCC Embedded CEO Dave Hughes. “Our SafeFTL has provided fail-safety and reliability to embedded systems for the last 15 years. We do this by taking a system-level approach that ensures each layer in the system has correctly defined the behaviour it requires from adjacent layers. The addition of deterministic execution control to our SafeFTL product goes a step beyond to ensure the utmost reliability and predictability in safety-critical systems.”

Monday, October 22, 2018

Silicon Labs teams with Digi for LTE-M IoT module

By Nick Flaherty

Silicon Labs has teamed up with Digi for an LTE-M expansion kit around the XBee3 pre-certified cellular modem.
The LTE-M expansion kit works with Silicon Labs’ EFM32 Giant Gecko 11 starter kit to simplify the development of gateways and end devices that operate in deep-sleep mode and require extended battery life. The kit is aimed at agricultural, asset tracking, smart energy and smart city IoT applications.

“Together, Silicon Labs and Digi International are dedicated to connecting people, networks and ‘things’ with best-in-class IoT and M2M technologies,” said Matt Johnson, Senior Vice President and General Manager of IoT products at Silicon Labs. “We’ve collaborated with Digi to deliver flexible LTE-M cellular connectivity capabilities, enabling cloud-connected applications that are remote, on the go and ready to deploy.”

“The jointly developed LTE-M expansion kit works with Silicon Labs’ starter kits to accelerate development by quickly enabling cellular IoT connectivity and avoiding costly cellular device certifications,” said Mark Tekippe, Director of Product Management, Digi International. “Digi XBee3 cellular modems and Silicon Labs Gecko MCUs are an ideal pairing to deliver seamless cloud connectivity with ultra-low power capabilities. The pre-certified Digi XBee3 cellular modem is easy to configure and provides secure, flexible out-of-box connectivity over LTE-M and NB-IoT networks.”

“LTE-M is a great option for LPWAN applications that require a combination of long battery life, LTE reliability and low latency. LTE-M is compatible with existing LTE networks and in the future will coexist with 5G technologies,” added Mike Krell, Head of IoT Strategy, J. Brehm & Associates. “Vendors offering easy-to-use development tools to accelerate LTE-M solutions will be well-positioned for growth in the cellular IoT market.”

Developers can take advantage of development tools including the Digi Remote Manager, Silicon Labs’ Energy Profiler and pre-programmed demos. The XBee3 is certified on AT&T and Verizon cellular networks. The XBee3 allows easy migration to NB-IoT and provides XBee API frames, MicroPython and the XCTU software tools to simplify development, as well as Digi TrustFence for integrated device security, identity and data privacy.

The LTE-M expansion kit and EFM32 Giant Gecko 11 starter kit (SLSTK3701A) are available now, and both are priced at $99. 

Power news this week

By Nick Flaherty

  • Electric pickup charges battery in 13 minutes
  • World’s largest organic solar cell film installation in Germany
  • Meyer Burger cuts 100 more jobs in China focus
  • 3D printing a lithium ion battery for wearables
  • Work starts on hemp for supercapacitor
  • Smartwatch uses photodiodes for both energy harvesting and gesture recognition
  • Third generation SiC JFET adds 1200 V and 650 V options
  • 11kW bi-directional SiC DC-DC converter targets energy storage systems
  • Integrated magnetics cut power converter size in half
  • Choosing blocking capacitors – it’s more than just values
  • Mentor: Concepts of power integrity: Taking the noise out of via-to-via coupling
  • Coilcraft: An introduction to inductor specifications

Qualcomm launches 60GHz 802.11ay chipset

By Nick Flaherty

Qualcomm Technologies has launched a family of 60GHz Wi-Fi chipsets, the QCA64x8 and QCA64x1, providing 10+ gigabit-per-second (Gbps) network speeds and wire-equivalent latency as well as sensing applications like proximity and presence detection, gesture recognition, room mapping with precise location and improved facial feature detection. 

Qualcomm Technologies is first to market with a 60GHz Wi-Fi solution with optimizations based on the 802.11ay specification. Although 60GHz has a limited range, the chips include always-on ambient Wi-Fi sensing capabilities, enabling devices to identify people, objects, movements and precise location without being affected by light conditions. Networking and mobile devices alike can take advantage of these new Wi-Fi sensing features to provide new and differentiated experiences to end users.

“mmWave holds enormous potential to support a new class of user experiences, and Qualcomm Technologies is leading the charge with both its Qualcomm Snapdragon X50 5G NR modem family and unlicensed 60GHz Wi-Fi mmWave solution,” said Rahul Patel, senior vice president and general manager, connectivity and networking at Qualcomm Technologies. “Our 11ay solutions were developed with the flexibility to support a broad ecosystem of smartphone, router or fixed wireless access platforms and provides the industry with the critical building blocks needed to take connectivity performance to the next level.”

The QCA6438 and QCA6428 are aimed at infrastructure and fixed wireless access, and the QCA6421 and QCA6431 at mobile applications. Facebook’s Terragraph technology is using the QCA6438 and QCA6428 chipsets for a multimode wireless access point. 

“We are excited to work with Qualcomm Technologies to develop 60 GHz solutions based on Facebook’s Terragraph technology and Qualcomm Technologies’ chipsets,” said Anuj Madan, Product Manager at Facebook. “By enabling service providers to offer high-quality internet connectivity in dense urban and suburban areas, this collaboration supports our work to bring more people online to a faster internet.”

“As consumers all around the world are increasingly relying on mobile devices to power their gaming and entertainment activities, they expect seamless experiences powered by unrivalled speed and ultra-low-latency,” said Bryan Chang, General Manager of ASUS Mobile Business Unit. “Our latest line of Republic of Gamers (ROG) mobile devices are designed specifically to meet these high-performance mobile gaming needs while leveraging Qualcomm Technologies’ existing 60GHz Wi-Fi solutions. We are happy to see Qualcomm Technologies continue their innovation on 60GHz Wi-Fi technology.”

The QCA64x8 and QCA64x1 are available today and we will bring you details of the mobile chips when we have them.

Friday, October 19, 2018

Infineon backs FreeRTOS for IoT edge computing

By Nick Flaherty

We've highlighted the importance of FreeRTOS (now Amazon FreeRTOS) for real time processing and connecting devices in the Internet of Things to the cloud.

Infineon Technologies has combined its microcontroller and security technology to enable easy and secure use of a new generation of sensors with AI functionality running on Amazon Web Services (AWS).

“Infineon supports the development of secured cloud connection-enabled applications,” said Sandro Cerato, Chief Technology Officer of the Power Management & Multimarket Division at Infineon. “These can range from mere motion detection up to situational awareness, by leveraging AI and machine learning algorithms. We are combining leading-edge sensors, hardware-based security and Infineon microcontrollers, with the technology and services provided by AWS to support customers with the next level of smartness.”

To do this, Infineon’s XMC4000 family of 32-bit microcontrollers now supports Amazon FreeRTOS, a microcontroller operating system that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. This enables multiple new options for edge-computing-based applications in consumer and industrial markets.

Securely connecting manufacturers’ devices both locally and to the cloud is paramount for customers to take up connected service offerings. People living in smart homes and working in smart buildings can benefit from the seamless interaction of the new generation of XENSIV sensors. Radar, pressure sensors and MEMS microphones are accompanied by OPTIGA hardware security solutions, allowing energy and light management, healthcare and building operation applications running on AWS to improve quality of life and deliver substantial cost savings.

For example, Infineon’s XMC4800 Connectivity kit WiFi runs on AWS. This development platform brings edge-computing services to the next level of interaction in customers’ applications, including WiFi connectivity and EtherCAT. “Using the XMC4800 series, Infineon has the opportunity to be one of the first movers in the market offering AWS FreeRTOS combined with ETHERCAT functionality,” said Ralf Koedel, Director Product Marketing for Automotive & Industrial Microcontroller at Infineon.

XMC4800 devices are based on the Arm Cortex-M4F core. They also offer up to six CAN channels and EtherCAT connectivity for IoT gateway applications, plus many other peripherals, with the benefit of an Arduino and Click Board compatible form factor. The combination with AWS Greengrass software allows developers to easily develop and take advantage of the cloud capability provided by AWS.

The Evaluation Board XMC4800 IoT Amazon FreeRTOS Connectivity kit WiFi is available.

Related stories:
  • Amazon and ST launch IoT Node-to-Cloud implementation for FreeRTOS

Thursday, October 18, 2018

LoRa gateway module boosts deployment

By Nick Flaherty

Murata has launched two highly integrated 14pin LoRa Pico Gateway metal-shielded modules to accelerate the deployment of long range IoT networks.
The LBAA0ZZ1QM (for the US) and LBAA0ZZ1TY (for the EU) support eight channels in their respective ISM bands. The modules measure just 55.0 mm x 21.0 mm x 3.4 mm, and Murata believes them to be the world’s smallest LoRaWAN gateway modules.

The single substrate module uses a Semtech SX1308 transceiver concentrator capable of managing packets from many remotely dispersed end-points, two Semtech SX1257 highly integrated RF front end I/Q transceivers and an STMicroelectronics STM32F401 Arm Cortex M4 microcontroller. A Skyworks RF front-end multi-chip module provides antenna matching, receiver pre-amplifier and transmitter final stage function.

The microcontroller hosts the packet forwarding, communication with the application host controller and the module’s power management functions. The packet forwarder handles the two-way communication of packets between an end-point and the network server, while the host driver provides a USB CDC virtual port to communicate with the host gateway application processor. Alternatively, the module’s UART port can be used for communication with the gateway’s host. The microcontroller firmware also takes care of power management, in particular when using a USB port, by limiting downlink power consumption to stay within the 500 mA maximum USB current budget.
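
The budget check at the heart of that power management can be sketched simply. This is a hypothetical illustration of the idea, not Murata's firmware; the baseline and transmit currents are made-up figures, since real consumption depends on the design:

```python
# Hypothetical sketch: before starting a downlink transmission, check
# that the module's total current draw stays inside the USB budget.
USB_BUDGET_MA = 500  # maximum current available from a USB port

def can_transmit(baseline_ma: int, tx_ma: int) -> bool:
    """True if a downlink transmission fits within the current budget."""
    return baseline_ma + tx_ma <= USB_BUDGET_MA

# Illustrative figures only: 180 mA module baseline, two TX power levels.
assert can_transmit(baseline_ma=180, tx_ma=300)        # 480 mA: allowed
assert not can_transmit(baseline_ma=180, tx_ma=350)    # 530 mA: deferred
```

Firmware taking this approach defers or reshapes downlinks that would exceed the budget rather than browning out the USB supply.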

The LoRa network provides a low-cost, long-range communication infrastructure to communicate with thousands of end-points. Example deployments include utility meter reading, smart agriculture and industrial IoT applications. Ensuring reliable communication across metropolitan or rural areas is essential, and gateways such as the Murata LBAA0ZZ1 play a key part in maintaining network links.

The module has support from network operators such as Actility and the Things Network to speed up rollouts. 

“This module is a significant step, ready to accelerate the growth of LoRaWAN use cases requiring widespread deployment of picocells, such as smart building applications,” said Actility CEO Oliver Hersent. “We are working with Murata to ensure out of the box compatibility with our market-leading ThingPark IoT network management platform, so our customers can benefit from the most cost-effective picocell gateways. This will be particularly valuable for integrators of ThingPark Enterprise targeting in-building and in-factory solutions.”

Wienke Giezeman, founder and CEO of The Things Network, added "Murata understands the future of LoRaWAN very well by providing an easy way for any router, set top box and base station makers to offer LoRaWAN in their products in a very easy way. We are very happy to partner with them to bring the complete solution to the market."

“With its new LBAA0ZZ1 series LoRa Pico Gateway module, Murata is greatly simplifying the development of new LoRaWAN gateways, which is a crucial contribution to our ecosystem as LoRaWAN becomes widespread in the most diverse applications,” said Domenico Arpaia, CEO of OrbiWise. “We have long cooperated with the Murata Team: OrbiWAN, our Carrier-grade LoRaWAN Network Server, which already supports all commercial LoRaWAN gateways, now also supports natively Murata’s new gateway module. We are confident that, with Murata’s competence and resources and our own help, their new gateway module will quickly become the solution of choice for many new gateways in the rapidly growing LoRa market.”

Wednesday, October 17, 2018

NXP launches machine learning tools for IoT edge processing

By Nick Flaherty

NXP has launched an edge intelligence environment called eIQ that provides a comprehensive machine learning (ML) toolkit with support for TensorFlow Lite, Caffe2, and other neural network frameworks, as well as non-neural ML algorithms.

This will enable turnkey integrated ML solutions for voice, vision and anomaly detection applications, covering data acquisition and trained models with user feature customisation. These work with NXP's EdgeScale software, which provides secure device on-boarding, provisioning, and container management of ML applications targeting i.MX and Layerscape applications processors.

The eIQ software environment includes the tools necessary to structure and optimise cloud-trained ML models to run efficiently in resource-constrained edge devices for a broad range of industrial, Internet-of-Things (IoT), and automotive applications. The turnkey, production-ready solutions are specifically targeted at voice, vision, and anomaly detection applications. By removing the heavy investment needed to become ML experts, NXP opens machine learning capability to the tens of thousands of customers whose products need it.

"Having long recognised that processing at the edge node is really the driver for customer adoption of machine learning, we created scalable ML solutions and eIQ tools, to make transferring artificial intelligence capabilities from the cloud-to-the-edge even more accessible and easy to use," said Geoff Lees, senior vice president and general manager of microcontrollers.

With support for NXP's full microcontroller (MCU) and applications processor product line, eIQ provides the building blocks that developers need to implement ML in edge devices. Keeping pace with ML's changing landscape, NXP eIQ is continuously expanding to include: data acquisition and curation tools; model conversion for a wide range of neural net (NN) frameworks and inference engines, such as TensorFlow Lite, Caffe2, CNTK, and Arm® NN; support for emerging NN compilers like GLOW and XLA; classical ML algorithms (e.g. support vector machine and random forest); and tools to deploy the models for heterogeneous processing on NXP embedded processors.
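
A core step in shrinking cloud-trained models for constrained edge devices is post-training quantisation: mapping float32 weights to int8 with a scale and zero point. The minimal sketch below shows the affine scheme such toolchains typically use; it is an illustration of the general technique, not NXP's eIQ implementation:

```python
# Minimal affine int8 quantisation sketch: float weights are mapped to
# int8 via q = round(w / scale) + zero_point, cutting storage 4x versus
# float32 at the cost of a bounded rounding error.

def quantize(weights, num_bits=8):
    lo, hi = min(weights), max(weights)
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

w = [-1.2, -0.3, 0.0, 0.7, 1.5]
q, s, z = quantize(w)
restored = dequantize(q, s, z)
assert all(-128 <= v <= 127 for v in q)          # fits in int8
assert all(abs(a - b) <= s for a, b in zip(w, restored))  # bounded error
```

Production converters add per-channel scales, calibration data for activations, and quantisation-aware fine-tuning, but the scale/zero-point mapping above is the common core.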

NXP also recently introduced a software infrastructure called EdgeScale to unify how data is collected, curated, and processed at the edge, with focus on enabling ML applications. EdgeScale enables seamless integration to cloud-based artificial intelligence (AI) / ML services and deployment of cloud-trained models and inferencing engines on all NXP devices, from low-cost MCUs to high-performance i.MX and Layerscape applications processors.

Building on the eIQ environment, the company introduced turnkey solutions for edge-based learning and local execution of vision, voice, and anomaly detection models. These system-level solutions provide the hardware and software necessary for building fully functional applications, while allowing customers to add their own differentiation. The solutions are modular, making it easy for customers to expand functionality of their products with a simple plug-in. For example, a voice recognition module can be easily added to a product that has NXP's vision recognition solution. 

Demonstrations include facial recognition training on high-performance i.MX 8QM and deployment of extracted inference engines on mid-range i.MX 8QXP and i.MX 8M applications processors using secure docker containers, as well as CMSIS-NN performance benchmarking using CIFAR-10 on just-announced LPC5500 MCUs and anomaly detection with classical machine learning techniques using Cortex-M4F based Kinetis MCUs.
Localised voice and vision ML applications include a voice-enabled solution for local wake-word detection and an end-user-programmable voice control experience, both using the i.MX RT1050 crossover processor, as well as vision systems using the Au-Zone DeepView ML Kit with the i.MX 8QM implemented in a microwave oven, and traffic sign recognition using the low-cost i.MX RT1050 crossover processor.

Related stories:

Tuesday, October 16, 2018

SiFlower uses CEVA 802.11ac in Chinese Smart Home Access Point chip

By Nick Flaherty

Despite the move to rebranding Wi-Fi generations, Chinese chip maker SiFlower Communication Technology is using the RivieraWaves RW-11AC Wi-Fi IP in a cost-effective access point system on chip (SoC).
SiFlower’s SF16A18 is a highly integrated single chip that combines the RW-11AC IP with a dual-core CPU and rich suite of interfaces (Ethernet, GMAC, USB, SD, IIS), creating an optimal platform for intelligent routers / access points, smart home gateways and smart speakers.

“Highly integrated and optimized Wi-Fi solutions are the key to opening the mass Smart Home market,” said Albert Lee, CEO of SiFlower. “We are proud of the solution we have created in the SF16A18, and CEVA’s RW-11AC Wi-Fi IP along with their engineering excellence and technical support have been instrumental in our success.”

“We are delighted to announce SiFlower as a licensee for our Wi-Fi IP,” said Aviv Malinovitch, vice president and general manager of the Connectivity Business Unit at CEVA. “The SF16A18 is a leading example of the next-generation of fully-integrated and differentiated Wi-Fi enabled SoCs that are available at a very cost-effective price point.”

CEVA’s RivieraWaves Wi-Fi IP family offers a comprehensive suite of platforms for embedding Wi-Fi 802.11a/b/g/n/ac/ax into SoCs/ASSPs. Optimized implementations are available targeting a broad range of connected devices, including smartphones, wearables, consumer electronics, smart home, industrial and automotive applications. CEVA also offers RISC-V based fully integrated platforms. 

Shanghai SiFlower Communication Technology was formally incorporated in 2014 and is headquartered in Zhangjiang Hi-Tech Park, Pudong, Shanghai, developing devices for the Internet of Things (IoT). 

Monday, October 15, 2018

Power news this week

From eeNews Europe by Nick Flaherty

. Dialog cashes in on Apple relationship

. CellCube spins out its vanadium mines

. Ceres to build £7m solid oxide fuel cell plant


. Printable thermoelectric energy capture for IoT systems

. Stability boost for high efficiency perovskite solar cells

. Silicon solar cell breakthrough tops efficiency limit

. Cabot teams with SAFT on low-cobalt cathodes for lithium-ion batteries


. Battery management system supports ASIL-C safety spec

. GaN thin film transistors for flexible substrates

. National Instruments: Key considerations for powertrain HIL test

. UnitedSiC: Practical considerations when comparing SiC and GaN in power applications

Samsung first to 5G commercial call

By Nick Flaherty

SK Telecom has this week made the first commercial 5G call, a key milestone for the industry, using equipment from Samsung Electronics.

The jointly developed 3GPP 5G Non-Standalone (NSA) New Radio (NR) standard and commercial 5G NR equipment were used in SKT's 5G testbed, located in its Bundang office building.

The first 5G NSA-NR calls used a 100MHz bandwidth in the 3.5GHz band on the 5G NR radio, along with 4G LTE radio and NSA core.

In the NSA-NR architecture, 5G is supported by the infrastructure of legacy 4G LTE: mobile devices connect to both 4G and 5G for data traffic, while using the 4G network for non-data traffic such as the signalling exchanges for mobility control. This approach is considered one of the most promising 5G architectures for initial 5G deployments.
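The traffic split described above can be caricatured in a few lines (a toy model for illustration, not operator code; function and label names are invented):

```python
def route(traffic_type, nr_attached):
    """Toy model of 5G NSA bearer selection: control-plane signalling
    stays on the LTE anchor, while user-plane data rides the NR leg
    when the device has a 5G connection, falling back to LTE otherwise."""
    if traffic_type == "signalling":       # mobility control, paging, etc.
        return "LTE"
    return "NR" if nr_attached else "LTE"  # data traffic

print(route("signalling", nr_attached=True))   # LTE
print(route("data", nr_attached=True))         # NR
print(route("data", nr_attached=False))        # LTE
```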

Saturday, October 13, 2018

NXP pushes security in move from M3 to M33 microcontroller cores

By Nick Flaherty

NXP Semiconductors is pushing the embedded security requirements of IoT edge devices and cloud to edge connections with two new multi-core microcontrollers based around the Arm Cortex M33 core.

NXP is emphasising its multi-layered, hardware-enabled protection scheme that protects embedded systems with secure boot for hardware-based immutable root-of-trust, certificate-based secure debug authentication and encrypted on-chip firmware storage with real-time, latency-free decryption.

These are used alongside Arm TrustZone for Armv8-M and a Memory Protection Unit (MPU) to ensure physical and runtime protection with hardware-based, memory-mapped isolation for privilege-based access to resources and data.

“The promise of the connected world through the Internet-of-Things is extraordinary,” said Geoff Lees, senior vice president and general manager of microcontrollers at NXP. “Through NXP’s in-depth security and processing expertise, software ecosystem and breadth of portfolio, we are uniquely positioned to bring innovative and accessible advancements in IoT security to all developers.”

The key to this is a ROM-based secure boot process that uses device-unique keys to create an immutable hardware ‘root-of-trust’. The keys can now be locally generated on-demand by an SRAM-based Physically Unclonable Function (PUF) that uses natural variations intrinsic to the SRAM bitcells. This permits closed loop transactions between the end-user and the original equipment manufacturer (OEM), thus allowing the elimination of third-party key handling in potentially insecure environments. Optionally, keys can be injected through a traditional fuse-based methodology.
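Conceptually, an SRAM PUF scheme combines the noisy power-up pattern with public "helper data" so the same key can be regenerated even though a few cells flip between power cycles. A simplified sketch using a repetition code (real devices use stronger error correction; all names here are illustrative):

```python
import hashlib
import numpy as np

REP = 7  # repetition factor: each key bit is encoded in 7 response bits

def enroll(sram_bits, key_bits):
    """Enrollment: XOR the SRAM power-up response with a repetition-coded
    key to produce public helper data. The helper data on its own reveals
    nothing useful without the device's SRAM response."""
    code = np.repeat(key_bits, REP)
    return np.bitwise_xor(sram_bits[:code.size], code)

def reconstruct(sram_bits, helper):
    """Reconstruction: XOR a fresh (slightly noisy) response with the
    helper data, then majority-vote each group to correct bit errors."""
    code = np.bitwise_xor(sram_bits[:helper.size], helper)
    key_bits = (code.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)
    return hashlib.sha256(np.packbits(key_bits).tobytes()).digest()

rng = np.random.default_rng(1)
sram = rng.integers(0, 2, size=1024, dtype=np.uint8)  # power-up pattern
key = rng.integers(0, 2, size=128, dtype=np.uint8)
helper = enroll(sram, key)

# Later power-up: some cells flip (one per affected group here), and
# majority voting still recovers the identical device key.
noisy = sram.copy()
noisy[np.arange(0, 20 * REP, REP)] ^= 1
k1 = hashlib.sha256(np.packbits(key).tobytes()).digest()
print(reconstruct(noisy, helper) == k1)  # True
```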

NXP is also working with Dover Microsystems to introduce Dover’s CoreGuard technology in future platforms. This is a hardware-based active defense security IP that instantly blocks instructions that violate pre-established security rules, enabling embedded processors to defend themselves against software vulnerabilities and network-based attacks.

The security environment improves the symmetric and asymmetric cryptography for edge-to-edge, and cloud-to-edge communication by generating device-unique secret keys through innovative usage of the SRAM PUF. The security for public key infrastructure (PKI) or asymmetric encryption is enhanced through the Device Identity Composition Engine (DICE) security standard as defined by the Trusted Computing Group (TCG). SRAM PUF ensures confidentiality of the Unique Device Secret (UDS) as required by DICE. The newly announced solutions support acceleration for asymmetric cryptography (RSA 1024 to 4096-bit lengths, ECC), plus up to 256-bit symmetric encryption and hashing (AES-256 and SHA2-256) with mbedTLS optimized library.
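The DICE derivation boils down to a keyed hash of the first firmware measurement under the Unique Device Secret. A simplified sketch of the idea (the TCG specification and NXP's silicon implementation differ in detail):

```python
import hashlib
import hmac

def dice_cdi(uds: bytes, firmware: bytes) -> bytes:
    """DICE-style Compound Device Identifier: a keyed hash of the
    measurement of the first mutable code under the Unique Device
    Secret (UDS). Any change to the firmware yields an unrelated CDI,
    so keys derived from it implicitly attest what the device booted."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()

uds = b"\x00" * 32  # stand-in for the PUF-protected device secret
cdi_a = dice_cdi(uds, b"firmware v1.0")
cdi_b = dice_cdi(uds, b"firmware v1.1")
print(cdi_a != cdi_b)  # True: a modified image produces a different identity
```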

“Maintaining the explosive growth of connected devices requires increased user trust in those devices,” said John Ronco, vice president and general manager, Embedded & Automotive Line of Business, Arm. “NXP’s commitment to securing connected devices is evident in its new Cortex-M33 based products built on the proven secure foundation of TrustZone technology, while incorporating design principles from Arm’s Platform Security Architecture (PSA) and pushing the boundaries of Cortex-M performance efficiency.”

NXP strategically chose the Cortex-M33 core for its first full-feature implementation of the Armv8-M architecture to provide security platform benefits and substantial performance improvements compared to existing Cortex-M3/M0 MCUs (over 15 to 65 percent improvement, respectively). One of the key features of the Cortex-M33 is the dedicated co-processor interface that extends the processing capability of the CPU by allowing efficient integration of tightly-coupled co-processors while maintaining full ecosystem and toolchain compatibility. 

NXP has used this capability to implement a co-processor for accelerating key ML and DSP functions such as convolution, correlation, matrix operations, transfer functions and filtering, enhancing performance by as much as 10x compared to executing on the Cortex-M33 alone. The co-processor further leverages the popular CMSIS-DSP library calls (API) to simplify customer code portability.
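These are the kinds of kernels CMSIS-DSP exposes in C (the arm_conv and arm_fir function families). As a quick NumPy illustration of the same FIR-filtering-as-convolution operation:

```python
import numpy as np

# FIR filtering is a convolution of the signal with the filter taps --
# the same class of kernel the co-processor interface accelerates.
taps = np.array([0.25, 0.5, 0.25])            # simple 3-tap low-pass smoother
signal = np.array([0.0, 1.0, 0.0, 1.0, 0.0])  # alternating input samples
filtered = np.convolve(signal, taps, mode="same")
print(filtered)  # [0.25 0.5  0.5  0.5  0.25]
```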

The LPC5500 devices provide single- and dual-core Cortex-M33 in a 40nm process with an integrated DC-DC converter that delivers industry-leading performance at a fraction of the power budget, up to 90 CoreMarks/mA. The high density of on-chip memory, up to 640KB flash and 320KB SRAM, enables efficient execution of complex edge applications. Further, NXP’s autonomous, programmable logic unit for offloading and executing user-defined tasks delivers enhanced real-time parallelism. 

The i.MX RT600 crossover platform is aimed at real-time machine learning and artificial intelligence, adding a 600MHz Cadence Tensilica HiFi 4 DSP and up to 4.5MB of shared on-chip SRAM to a 300MHz Cortex-M33 with a wide operating voltage range. ML performance is further enhanced in the DSP with 4x 32-bit MACs, a vector FPU, a 256-bit wide access bus, and DSP extensions for special activation functions (e.g., the sigmoid transfer function). 

Related stories:

Thursday, October 11, 2018

5G Toolbox for MATLAB

By Nick Flaherty

MathWorks has launched a toolbox for its Matlab tool that provides standards-compliant waveforms and reference examples for modeling, simulation, and verification of the physical layer of 3GPP 5G New Radio (NR) communications systems. 

Engineers using 5G Toolbox can quickly design critical algorithms and predict end-to-end link performance of systems that conform to the 5G Release 15 standard specification, ahead of the move to commercial system rollout in 2019. They can now use the toolbox for link-level simulation, golden reference verification, conformance testing, and test waveform generation – without starting from scratch.
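The toolbox itself is MATLAB-based; as a language-neutral illustration of what "link-level simulation" means, here is a minimal bits-to-bit-error-rate simulation of a BPSK link over AWGN (a far simpler waveform than 5G NR, shown only to make the workflow concrete):

```python
import numpy as np

def bpsk_awgn_ber(ebno_db, n_bits=200_000, seed=0):
    """Minimal end-to-end link simulation: random bits -> BPSK symbols
    -> AWGN channel -> hard decisions -> measured bit error rate."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                  # bit 0 -> +1, bit 1 -> -1
    ebno = 10.0 ** (ebno_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebno)), n_bits)
    decisions = (symbols + noise < 0).astype(int)
    return np.mean(decisions != bits)

for ebno_db in (0, 3, 6):
    print(f"Eb/N0 = {ebno_db} dB -> BER {bpsk_awgn_ber(ebno_db):.4f}")
```

A real 5G NR link-level simulation replaces each stage (coding, modulation, channel model, receiver) with standard-compliant blocks, which is exactly what the toolbox packages up.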

The 5G Toolbox joins other toolboxes for LTE and WLAN standards, simulation of massive MIMO antenna arrays and RF front end technologies, over-the-air testing, and rapid prototyping of radio hardware.

“When adopting 5G, wireless engineers need to verify that their product designs can conform or co-exist with a new, complex standard that will continue to evolve. Very few companies have adequate resources or in-house expertise to understand and implement a 5G-compliant design,” said Ken Karnofsky, senior strategist for signal processing applications, MathWorks. “Having seen how LTE Toolbox has helped teams quickly deploy pre-5G designs in radio test beds, we anticipate 5G Toolbox will have a similar impact for the mainstream wireless market.”

5G Toolbox is the foundation of a design workflow that helps wireless teams rapidly develop, prototype, and test designs. Companies with siloed tools for RF, antenna, and baseband design; limited experience with MIMO technologies; or that lack automation from simulation to prototyping can now rely on MATLAB as a common environment for simulation, over-the-air-testing, and rapid prototyping.

MATLAB has also been used for 5G standards development by serving as a common research & development environment for multiple companies involved in the 3GPP working groups.

There's also a Q&A with Convida Wireless, a joint venture between Sony Corporation of America and InterDigital that focuses on research into the future of wireless connectivity technology.
Related stories:

Tuesday, October 09, 2018

Sprayable, transparent antennas for the IoT

By Nick Flaherty

Researchers at Drexel University’s College of Engineering have developed a technique for spraying invisibly thin antennas, made from a type of two-dimensional, metallic material called MXene, that perform as well as those being used in mobile devices, wireless routers and portable transducers.

“This is a very exciting finding because there is a lot of potential for this type of technology,” said Kapil Dandekar, PhD, a professor of Electrical and Computer Engineering in the College of Engineering, who directs the Drexel Wireless Systems Lab, and was a co-author of the research. “The ability to spray an antenna on a flexible substrate or make it optically transparent means that we could have a lot of new places to set up networks — there are new applications and new ways of collecting data that we can’t even imagine at the moment.”

Spray-applied MXene antennas could open the door for new applications in smart technology, wearables and IoT devices.

MXene titanium carbide can be dissolved in water to create an ink or paint and the high conductivity allows the printed structures to transmit and direct radio waves.

“We found that even transparent antennas with thicknesses of tens of nanometers were able to communicate efficiently,” said Asia Sarycheva, a doctoral candidate in the A.J. Drexel Nanomaterials Institute and Materials Science and Engineering Department. “By increasing the thickness up to 8 microns, the performance of MXene antenna achieved 98 percent of its predicted maximum value.”
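A back-of-envelope skin-depth estimate suggests why a film a few microns thick already approaches full performance. The conductivity value below is an assumed order of magnitude for a Ti3C2Tx MXene film, not a figure from the researchers:

```python
import math

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    mu = mu_r * 4e-7 * math.pi        # permeability of a non-magnetic film
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

sigma_mxene = 1.0e6   # S/m -- assumed order of magnitude for the film
delta = skin_depth(2.4e9, sigma_mxene)
print(f"skin depth at 2.4 GHz: {delta * 1e6:.1f} um")
```

At the assumed 10^6 S/m this gives roughly 10 µm at 2.4 GHz, so an 8-micron film is on the order of one skin depth, consistent with the reported 98 percent of predicted maximum performance, while far thinner films still radiate at reduced efficiency.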

“This technology could enable the truly seamless integration of antennas with everyday objects which will be critical for the emerging Internet of Things,” said Dandekar. “Researchers have done a lot of work with non-traditional materials trying to figure out where manufacturing technology meets system needs, but this technology could make it a lot easier to answer some of the difficult questions we’ve been working on for years.”

Initial testing of the sprayed antennas suggest that they can perform with the same range of quality as current antennas, which are made from familiar metals, like gold, silver, copper and aluminum, but are much thicker than MXene antennas. Making antennas smaller and lighter has long been a goal of materials scientists and electrical engineers, so this discovery is a major step in reducing their footprint as well as broadening their application.

“Current fabrication methods of metals cannot make antennas thin enough and applicable to any surface, in spite of decades of research and development to improve the performance of metal antennas,” said Yury Gogotsi, professor of Materials Science and Engineering in the College of Engineering, and Director of the A.J. Drexel Nanomaterials Institute, who initiated and led the project. “We were looking for two-dimensional nanomaterials, which have a sheet thickness about a hundred thousand times thinner than a human hair; just a few atoms across, and can self-assemble into conductive films upon deposition on any surface. Therefore, we selected MXene, which is a two-dimensional titanium carbide material that is stronger than metals and is metallically conductive, as a candidate for ultra-thin antennas.”

Drexel researchers discovered the family of MXene materials in 2011 and have been gaining an understanding of their properties, and considering their possible applications, ever since. The layered two-dimensional material, which is made by wet chemical processing, has already shown potential in energy storage devices, electromagnetic shielding, water filtration, chemical sensing, structural reinforcement and gas separation.

“The MXene antenna not only outperformed the macro and micro world of metal antennas, we went beyond the performance of available nanomaterial antennas, while keeping the antenna thickness very low,” said Babak Anasori, a research assistant professor in the A.J. Drexel Nanomaterials Institute. “The thinnest antenna was as thin as 62nm — about a thousand times thinner than a sheet of paper — and it was almost transparent. Unlike other nanomaterial fabrication methods, which require additives called binders and extra heating steps to sinter the nanoparticles together, we made antennas in a single step by airbrush-spraying our water-based MXene ink.”

The group initially tested the spray-on application of the antenna ink on a rough substrate — cellulose paper — and a smooth one — polyethylene terephthalate sheets. The next step for their work will be looking at the best ways to apply it to a wide variety of surfaces, from glass to yarn and skin.

“Further research on using materials from the MXene family in wireless communication may enable fully transparent electronics and greatly improved wearable devices that will support the active lifestyles we are living,” said Anasori.

Rugged AI system for hostile environments

By Nick Flaherty

General Micro Systems (GMS) has launched a rugged, conduction-cooled, commercial off-the-shelf (COTS) deep learning/artificial intelligence (AI) mobile system that offers real-time data analysis and decision making in hostile environments.

The X422 “Lightning” integrates two Nvidia V100 Tesla data centre accelerators into a fully sealed, conduction-cooled chassis. It is designed as a dual co-processor companion to GMS Intel Xeon rugged air-cooled or conduction-cooled servers.

GMS claims this is an industry first for deep learning and artificial intelligence, as the X422 includes no fans or moving parts, promising wide temperature operation and massive data movement via an external PCI Express fabric in ground vehicles, tactical command posts, UAV/UAS, or other remote locations. It uses the company’s patented RuggedCool thermal technology to adapt the GPGPUs for harsh conditions, extending the temperature operation while increasing environmental MTBF.

“No one besides GMS has done this before because we own the technology that makes it possible. The X422 not only keeps the V100s or other 250 W GPGPU cards cool on the battlefield, but our unique x16 PCIe Gen 3 FlexVPX fabric streams real-time data between the X422 and host processor/server at an astounding 32 GB/s all day long,” said Ben Sharfi, chief architect and CEO, General Micro Systems. “From sensor to deep learning co-processor to host: X422 accelerates the fastest and most complete data analysis and decision making possible.”

The X422, which is approximately 12x12 inches square and under 3 inches high, includes dual x16 PCIe Gen 3 slots for the GMS-ruggedized PCIe deep learning cards. Each card has 5120 CUDA processing cores, giving X422 over 10,200 GPGPU cores and in excess of 225 TFLOPS for deep learning. In addition to using Nvidia GPGPU co-processors, the X422 can accommodate other co-processors, different deep learning cards, and high-performance computers (HPC) based upon FPGAs from Xilinx or Altera, or ASICs up to a total of 250 W per slot (500 W total).

Another industry first is the I/O brought to the X422 via GMS’s FlexVPX bus extension fabric. The X422 interfaces with servers and modules from GMS and from One Stop Systems, using industry-standard iPass+ HD connectors offering x16 lanes in and x16 lanes out of PCI Express Gen 3 (8 GT/s) fabric for a total of 256 GT/s (about 32 GB/s) system throughput. X422 deep learning co-processor systems can be daisy-chained up to a theoretical limit of 16 chassis working together.
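The quoted throughput follows directly from the PCIe Gen 3 line rate and its 128b/130b encoding; a quick check of the arithmetic:

```python
# PCIe Gen 3 runs at 8 GT/s per lane with 128b/130b line encoding.
LANES = 16
GT_PER_S = 8e9
ENCODING = 128 / 130

per_direction_gbytes = LANES * GT_PER_S * ENCODING / 8 / 1e9  # ~15.75 GB/s
total_gbytes = 2 * per_direction_gbytes   # x16 lanes in + x16 lanes out
total_gt = 2 * LANES * 8                  # raw transfer rate across both paths

print(f"{total_gt} GT/s raw, about {total_gbytes:.1f} GB/s aggregate payload")
```

That lands at 256 GT/s and roughly 31.5 GB/s, matching the "about 32 GB/s" figure in the announcement.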

The X422’s two PCIe deep learning cards can operate independently or work together as a high-performance computer (HPC) using the user-programmable onboard, non-blocking, low-latency PCIe switch fabric. For PCIe cards with outputs—such as the Titan V’s DisplayPorts—these are routed to separate A and B front panel connectors.

Monday, October 08, 2018

Power news this week

By Nick Flaherty

. TDK breaks ground on £1m UK EMC centre

. Reducing the power consumption of neuromorphic AI systems

. Solid state polymer readies for the big time

. Ilika ships first samples of millimetre solid state battery for medical designs

. Zinc air battery maker to start mass production.

. Self-powered acoustic sensor boosts AI recognition

. Flexible batteries open up wearable designs

. Solar cell combined with redox cell creates solar flow battery


. Ultra-slim resistive heaters in standard or custom configurations

. GE launches next generation onshore wind turbine

. Power meter for lasers handles 125,000 samples per second


. National Instruments: Key considerations for powertrain HIL test

. Coilcraft: An introduction to inductor specifications

Top ten trends for tech in 2019

By Nick Flaherty

TrendForce has identified ten key themes for tech in the coming year, from 3D memory chips and the 5G rollout to smart grids and smart speakers.


Next-generation memory

The memory industry will show acceleration and evolution, driven by next-generation products and advanced die-stacking technology. Manufacturers have applied through-silicon via (TSV) techniques for chip stacking, and launched High Bandwidth Memory (HBM) in order to increase the throughput within a single package, overcoming the limits of bandwidth. 

In addition to saving space in the package, next-generation products are also intended to meet demand from edge computing applications, which require shorter reaction times and different structures. Compared with existing DRAM products, next-generation solutions may fit into different architectures; for example, in embedded systems, memory products sit closer to the CPU. The solutions may also offer significant performance improvements, such as the power savings resulting from non-volatile memory.

Commercial 5G rollout
Deployment of the optical communication infrastructure that underpins 5G networks has been ongoing for several years. The 5G architecture has gradually expanded from the backbone network to metropolitan area and access networks, providing 5G-compatible bandwidth at low cost. Commercialisation of 5G is expected to roll out in 2019, with 5G telecom services to be launched in the United States, South Korea, Japan, and China. Smartphones and other devices supporting 5G technology are also expected to become available some time in 2019. 

The major advances in 5G mobile wireless will support higher bandwidth and faster connections, enriching the telecom ecosystem. The arrival of 5G may also generate demand for a wider range of technology-based services, including high-resolution (4K/8K) video, mobile AR/VR gaming and immersive multimedia applications, industrial automation, telesurgery robots, massive Internet of Things deployments, and automated control of vehicles.
Foldable and 5G smartphones 

The global smartphone market can expect spec upgrades in the coming year, with more models featuring an all-screen design, narrower borders, a smaller notch and correspondingly smaller front-facing camera modules. The market will also see increasing penetration of biometric recognition, with triple cameras emerging. Advances in flexible AMOLED may allow brands to launch single-screen foldable smartphones next year, while 5G smartphones are also expected to be launched in 2019. Brands are now planning for the 5G-fuelled market, expecting commercial 5G to open up more opportunities and growth momentum. Offerings combining 5G and a foldable screen may also see great potential and could make a splash in the smartphone market.

Under-display fingerprint sensors

With increasing demand from smartphone vendors, design houses have invested heavily in increasing the yield rates of fingerprint sensors and in seeking more cost-effective solutions. Optical under-display fingerprint sensors are currently found mainly in flagship smartphones, but are expected to be embedded in mid- and high-end Android models in the coming year. Ultrasonic in-display fingerprint sensors also have a chance of being adopted by Android phone vendors. 

TrendForce estimates that ultrasonic and optical under-display fingerprint sensors will account for 13% of all fingerprint sensor technologies in 2019, significantly up from 3% in 2018. In particular, some design houses are also considering moving the fingerprint sensor to the very edge of the screen, trying to break through the bottlenecks of technology and yield rate. Edge solutions could change the industry landscape if they reach the market in the next few years, although the target market and product acceptance remain to be seen.

MiniLEDs for consumer displays 

With advantages in brightness and contrast, Mini LED has a chance to compete with OLED in cinema and home theatre displays. Compared with self-emitting RGB LED digital displays (video walls), Mini LED backlight units use blue LEDs as the basic light source, making them more cost-effective, so Mini LED has higher potential to make its way into the blue-ocean market of consumer electronics displays, including smartphones, tablets, desktops, TVs, and automotive displays. The next two years (2019-2020) will likely see an acceleration in the development of Mini LED, which may reach a market value of US$1699 million in 2022.

Smart speakers

Smart speakers remained in the market spotlight in 2018, drawing attention to applications enabling voice interaction, such as vehicles, smart TVs, and smart headsets. Looking ahead to 2019, an increasing number of applications will enable voice interaction, including virtual assistants, voice recognition and voice shopping, and companies will continue to develop new voice-activated applications to explore the market's potential.

eSIMs for smartwatches

Driven by major companies such as Apple, Huawei, and Qualcomm, the eSIM has been introduced into the market, making smart devices more independent of smartphones and driving the growth of smartwatches at a faster pace than smart bracelets. eSIMs are already embedded in smartwatches and always-connected PCs to provide an Internet connection, and more applications will adopt them in 2019 to enable functions such as streaming music, making calls, sending messages, and virtual assistants.

More competition in the IoT

This year has witnessed rapid growth in Internet of Things (IoT) solutions, with commercial use of low-power wide-area networks (LPWAN) becoming popular worldwide. Edge computing and AI have been integrated into IoT architectures, enabling transformation in vertical industries. With continued technology advances and global deployments, the IoT is expected to feature strongly in enterprise operations, making 2019 a key year for businesses to evaluate the outcome of adopting IoT. At the same time, competition is intensifying, so enterprises will need to consider whether their investments in IoT operations are cost-effective enough to enhance profitability in the longer run.

New healthcare tech

Following the 21st Century Cures Act, the U.S. Food and Drug Administration (FDA) announced new regulations on next-generation sequencing (NGS) and digital medicine this year. The new technologies may change the landscape of the biotechnology industry and attach increasing importance to software in healthcare. In particular, digital therapeutics has evolved to integrate ICT, software and drugs in treatment. Moreover, surgical robots and surgical navigation systems (SNS) are expected to integrate various medical imaging technologies, including hybrid imaging, molecular imaging, AR and MR, to optimise minimally invasive surgery. In gene sequencing, NGS has been adopted in clinical applications. The FDA has been building the genetic databases ClinVar and ClinGen, as well as PrecisionFDA, which verifies sequencing algorithms. Gene sequencing data analysis and gene variant applications will become the focus of the future NGS clinical market, and in conjunction with new treatments, NGS will help achieve precision medicine.

Smart grid, energy management and energy storage system to be keys of global photovoltaic market
This year witnessed constant expansion of installed PV capacity in China, but policy updates by the Chinese government have led to oversupply in the market, resulting in global module price declines and cost pressures in the supply chain. The global PV industry may therefore expect a changing landscape in the upcoming year. On the other hand, the levelised cost of electricity (LCOE) of PV projects has seen a significant decline, and grid parity has been observed in the global PV market. With the constant decrease in the PV feed-in tariff (FiT), smart grids, energy management and energy storage systems will become key considerations for PV system developers, who will need effective energy storage to balance power plant loads and stabilise overall power quality in connection with grids and power plants.

Wind River shows edge computing for 5G on BT basestation

By Nick Flaherty

Wind River has been working with UK telecoms provider BT on edge cloud computing applications at the basestation rather than the data centre. 

It has developed a proof of concept platform with an edge cloud compute node using the Titanium Cloud virtualization platform running on a BT cellular basestation, with the local traffic offload capability coming from Athonet, a software-based mobile core provider. 

5G applications will require locating compute power and capacity close to where the traffic originates, whereas application logic has traditionally resided in the data centre. However, 5G applications such as those for autonomous driving or Industrial IoT, where physical controls require extremely low latency, will demand diverse network locations for their logic. In these cases, computing will often need to happen much closer to the end device.

“5G will demand ultra-low latency and dynamic compute architectures for the cloud,” said Charlie Ashton, senior director of business development for Telecommunications at Wind River. “Wind River provides a flexible and secure cloud-based infrastructure that can be deployed at any network location. In order to successfully meet changing market needs, it is important to work with leading operators who, like BT, are uniquely positioned to deploy cloud compute at the right edge locations to support growing 5G applications.”

“The rise of Edge Cloud Compute will require flexible cloud infrastructure and the deployment of dynamic applications wherever and whenever they are needed. BT’s network is evolving to meet these demands,” said Maria Cuevas, head of mobile core networks research at BT. “BT is working with industry partners like Wind River to tackle the technical challenges around Edge Cloud Compute and develop solutions that meet customers’ future needs.”

The proof of concept highlights multiple 5G edge cloud computing use cases, including those for next-generation connected automobiles and also for augmented/virtual reality:
• Remote vehicle control for traffic/route management
• Vehicle-to-vehicle and vehicle-to-infrastructure communication for collision avoidance
• Augmented reality for multi-person sessions without gameplay disruption

Friday, October 05, 2018

Alliance dumbs down Wi-Fi names

By Nick Flaherty

The Wi-Fi Alliance is changing the way Wi-Fi is labelled to make it less confusing for consumers and, it says, to enable users to easily differentiate between technologies.

While the move to Wi-Fi 6 as the name for 802.11ax will help the upgrade cycle, it risks minimising the impact of other technologies that embedded engineers rely on.

This is vitally important, as the Alliance points out that Wi-Fi carries more than half of the internet’s traffic in an ever-expanding variety of applications that billions of people rely on every day.

The generational terminology may also be used to designate previous Wi-Fi generations, such as 802.11n or 802.11ac, but it doesn't necessarily take into account variations on the standard such as 802.11ad and the emerging 802.11ay. With 5G cellular moving to protocols that operate across a wider range of frequencies, Wi-Fi is already doing the same: Wi-Fi that operates in the millimetre band is very different from Wi-Fi 6 at 2.4GHz and 5GHz, and potentially different again from Wi-Fi 6 at 7GHz.

The new naming system identifies Wi-Fi generations by a numerical sequence and can be used by product vendors to identify the latest Wi-Fi technology a device supports, by OS vendors to identify the generation of Wi-Fi connection between a device and network, and by service providers to identify the capabilities of a Wi-Fi network to their customers: 

• Wi-Fi 6 to identify devices that support 802.11ax technology
• Wi-Fi 5 to identify devices that support 802.11ac technology
• Wi-Fi 4 to identify devices that support 802.11n technology
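The naming system above is, in effect, a lookup from IEEE amendment to consumer-facing label. A minimal sketch in Python, where the dictionary and fallback behaviour are assumptions for illustration (the Alliance has not defined names for variants such as 802.11ad/ay, so the sketch falls back to the raw IEEE label):

```python
# Hypothetical mapping from IEEE 802.11 amendment to the Wi-Fi Alliance's
# generational name, per the scheme described in the article.
WIFI_GENERATIONS = {
    "802.11ax": "Wi-Fi 6",
    "802.11ac": "Wi-Fi 5",
    "802.11n": "Wi-Fi 4",
}

def generation_name(ieee_standard: str) -> str:
    """Return the generational name for a given IEEE amendment,
    falling back to the raw IEEE label for amendments (e.g. 802.11ad
    or 802.11ay) that the naming scheme does not cover."""
    return WIFI_GENERATIONS.get(ieee_standard, ieee_standard)
```

The fallback illustrates the article's point: engineers still need the IEEE names for anything outside the three consumer-facing generations.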

“For nearly two decades, Wi-Fi users have had to sort through technical naming conventions to determine if their devices support the latest Wi-Fi,” said Edgar Figueroa, president and CEO of Wi-Fi Alliance. “Wi-Fi Alliance is excited to introduce Wi-Fi 6, and present a new naming scheme to help industry and Wi-Fi users easily understand the Wi-Fi generation supported by their device or connection.”

This is a natural consequence of running out of letters at 802.11az. So almost certainly we will see more brand diversification, with a "Wi-Fi 7 Max" or "Plus", and we are effectively back to the days of Wi-Fi vs WiGig.

In addition to describing the capabilities of the device, device manufacturers or OS vendors can incorporate the generational terminology in User Interface (UI) visuals to indicate the current type of Wi-Fi connection. The UI visual will adjust as a device moves between Wi-Fi networks so users have real-time awareness of their device connection. Certification programs based on major IEEE 802.11 releases will use a generational Wi-Fi name; Wi-Fi CERTIFIED 6 certification is coming in 2019.
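The described behaviour, where the UI indicator adjusts as a device roams between networks, amounts to re-running the name lookup on each connection change. A hypothetical sketch, where the `WifiIndicator` class and its `on_network_changed` callback are invented for illustration and not any real OS interface:

```python
# Hypothetical UI indicator that updates its label whenever the device
# associates with a new network, as described in the article.
class WifiIndicator:
    NAMES = {"802.11ax": "Wi-Fi 6", "802.11ac": "Wi-Fi 5", "802.11n": "Wi-Fi 4"}

    def __init__(self):
        self.label = "Wi-Fi"  # default label before any connection

    def on_network_changed(self, ieee_standard: str) -> str:
        """Callback fired on association with a new network; updates the
        displayed label to the generation of the current connection."""
        self.label = self.NAMES.get(ieee_standard, ieee_standard)
        return self.label
```

This gives the "real-time awareness" the Alliance describes: the label always reflects the generation of the network the device is currently on, not the best the device supports.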

Since 2000, the Alliance has completed more than 40,000 certifications to demonstrate interoperability, backward compatibility and the highest industry-standard security protection.

Naturally there's plenty of industry support for Wi-Fi 6 from consumer-facing organisations.

“Aerohive enthusiastically supports Wi-Fi Alliance’s new consumer-friendly Wi-Fi 6 naming convention in support of the emergence of IEEE’s new 802.11ax technology. Wi-Fi Alliance is now providing consumers the same type of generational Wi-Fi naming conventions to match what cellular technology has done since the beginning. Wi-Fi technology has evolved and improved over the last 21 years – from only a few megabits to several gigabits – yet this information is currently not provided. With Wi-Fi 6, consumers can easily identify the level of Wi-Fi provided and demand superior services. Additionally, we look forward to Wi-Fi Alliance’s launch of their Wi-Fi CERTIFIED 6 certification program next year, and will submit our latest generation of Aerohive devices for certification at the first opportunity,” said Perry Correll, product management director at Aerohive Networks.

“Wi-Fi has evolved significantly since Aruba was founded 16 years ago – from its initial role as a secondary network within the enterprise enabling mobility to the mission-critical role it plays today as the primary connectivity method for billions of devices, users, and things. We applaud this effort by Wi-Fi Alliance to simplify the terminology used to differentiate between the different generations of technologies as it will help users more quickly and easily discern the technology their particular device or network supports,” added Lissa Hollinger, vice president of portfolio marketing at Aruba, a Hewlett Packard Enterprise company.

But there's also support from the IP and chip companies. 

“Consumers love Wi-Fi – nearly every Internet connected device has it and over 80% of all wireless traffic goes over it. The sixth generation of Wi-Fi - 802.11ax - is the most advanced ever, bringing faster speeds, greater capacity and coverage, and will make the user experience even more enjoyable. This simple, generational representation will let consumers differentiate phones and wireless routers based on their Wi-Fi capabilities, helping them pick the device that suits their needs best. When they see that their device contains Wi-Fi 6, they will know that they have the best wireless connectivity on the market,” said Vijay Nagarajan, senior director of marketing for Wireless Communications and Connectivity at chip designer Broadcom.

“CEVA welcomes the introduction of the clear terminology. We have been licensing MAC and Modem IP for many years and across many generations of the technology spanning 802.11a/b/g/n/ac/ax. The new naming structure gives a simple and consistent framework to boost user awareness, which is especially important now at the dawn of Wi-Fi 6,” said Aviv Malinovitch, GM of the Connectivity BU at IP supplier CEVA.

Similarly, Intel and Marvell have supported the move, although both keep referencing 802.11ax alongside Wi-Fi 6, highlighting the need for engineers to keep using the IEEE 802.11 names to be fully informed.