
Thursday, March 29, 2018

Software package brings Amazon Alexa technology to simple connected objects

By Nick Flaherty

STMicroelectronics has developed a software package that enables Amazon’s Alexa Voice Service (AVS) to run on STM32 microcontrollers, allowing simple connected objects such as smart appliances, home-automation devices, and office products to support conversational user interfaces.

As an expansion package for the STM32Cube software platform, X-CUBE-AVS contains ready-to-use libraries and open routines that accelerate porting the AVS SDK (Software Development Kit) to the microcontroller. Application samples are also included, and the package abstracts developers from the complex software layers needed to host AVS on an embedded device. As the first such package to cater specifically for microcontrollers (AVS development usually targets more power-hungry and expensive microprocessors), X-CUBE-AVS makes Alexa technology accessible to a wider spectrum of developers and projects.

The software handles low-layer communication and connection to AVS servers, provides application-specific services, and encapsulates the AVS protocol to ease application implementation. Connection management includes a persistent-token mechanism for restoring lost connections without repeated user authentication. A software test harness is provided for endurance testing; it can simulate events such as network disconnection to facilitate robustness testing and validation of the user application.
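The persistent-token idea can be pictured with a short sketch in C. All of the names below (`avs_ctx_t`, `avs_handle_disconnect` and so on) are invented for illustration and are not the actual X-CUBE-AVS API: the point is simply that a refresh token kept in non-volatile storage lets a dropped link be re-established without asking the user to authenticate again.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of a persistent-token reconnect scheme.
 * Names are illustrative, not the real X-CUBE-AVS API. */

#define TOKEN_MAX 64

typedef struct {
    char refresh_token[TOKEN_MAX]; /* persisted in non-volatile storage */
    bool connected;
} avs_ctx_t;

/* A full user authentication is only needed when no token survives. */
bool avs_reconnect_needs_auth(const avs_ctx_t *ctx)
{
    return ctx->refresh_token[0] == '\0';
}

/* Simulate a connection drop followed by a reconnect attempt.
 * Returns true when the link was silently restored via the token. */
bool avs_handle_disconnect(avs_ctx_t *ctx)
{
    ctx->connected = false;
    if (avs_reconnect_needs_auth(ctx))
        return false;          /* application must run the full auth flow */
    ctx->connected = true;     /* token exchange succeeds without the user */
    return true;
}
```

A disconnection event in the test harness described above would exercise exactly this path: with a stored token the reconnect is silent, without one the application falls back to full authentication.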

X-CUBE-AVS comes with a demonstration example for the STM32F769 Discovery Kit (order code: 32F769IDISCOVERY), which shows how to connect a simple smart-speaker to AVS, leveraging the board-configuration interface included in the software. X-CUBE-AVS can be used with other STM32F7 microcontrollers, or any STM32 device with adequate CPU performance and memory to run the AVS SDK.

X-CUBE-AVS is available now to download, free of charge.

Sony launches its first USB3 industrial vision module

By Nick Flaherty

Sony Europe's Image Sensing Solutions has launched its first industrial vision module to use the USB3.0 standard.

The GS CMOS module, which is available in both colour and black and white variants, has a 1.6 MP resolution (1456 x 1088 pixels) and transmits data at over 100 frames per second.

The XCU-CG160 has been designed to give a simple migration path from CCD to GS CMOS, allowing the switch without system upgrades or a changed architecture.

At its heart is the 1/3-type Sony Pregius IMX273 sensor, a replacement for the well-established ICX445 CCD sensor that gives significant improvements in sensitivity, dynamic range, noise reduction and frame-rate capability.

The modules have been designed to lead the market in terms of image quality, and are targeted at a wide array of industrial vision and non-manufacturing markets - from print, robotics and inspection to medical, logistics and general imaging.

Key image-processing features included on the device include area gain and defect pixel correction. Shading correction has also been implemented.

The monochrome module has a minimum illumination of just 0.5 lx, while the colour module requires just 12 lx and comes with manual, auto and one-push white balance settings. Both modules have a sensitivity of F5.6, a gain range of 0 to +18 dB and shutter speeds from 60 s to 1/10,000 s.

"For those who have overall responsibility for machine vision systems, the migration path from analogue to digital is front of mind. The XCU-CG160 makes this an easy process with the added advantage of superb performance," said Matt Swinney at Sony.

The C-mount module measures 29 x 29 x 42 mm, weighs 52 g, and has an operating temperature range of -5°C to +45°C. It meets UL60950-1, FCC Class A, CSA C22.2-No.60950-1, IC Class A Digital Device, CE: EN61326 (Class A), AS EMC: EN61326-1, VCCI Class A and KCC regulations.

Thursday, March 22, 2018

Rambus teams with IBM on hybrid memory for data centres

By Nick Flaherty

Rambus Labs and IBM are working together to optimise the use of DRAM and emerging memories to create a high-capacity memory subsystem that delivers comparable performance to DRAM alone. As part of the collaboration, Rambus will develop a flexible prototype hybrid memory platform using the OpenCAPI interface to demonstrate performance of multiple memory types in real-world server applications.

The hybrid memory system architectures will combine standard DRAM with other technologies such as Flash, enhanced Flash, Phase Change Memory (PCM), Resistive RAM (ReRAM) and Spin Torque Transfer Magnetic RAM (STT-MRAM) to create high-capacity memories at a lower cost per bit, with performance levels comparable to that of DRAM.

“The exploding volume of data and rapidly evolving workloads for Big Data applications are placing tremendous pressure on data centre memory systems for increased performance and capacity,” said Laura Stark, senior vice president and general manager of the Emerging Solutions Division at Rambus. “This project with IBM demonstrates our ongoing collaboration with the industry to accelerate the development and adoption of advanced memory solutions.”

Rambus will use IBM’s POWER9 processor and its OpenCAPI high performance interface to build a Hybrid Memory and development subsystem prototype. To move forward on this project, Rambus announced it has joined the OpenCAPI Consortium, an open development community based on Coherent Accelerator Processor Interface technology, and OpenPOWER Foundation, an open development community based on the IBM Power Architecture.

“IBM is excited to collaborate with Rambus regarding advanced memory technologies on the OpenCAPI interface of POWER9 systems,” said Steve Fields, IBM fellow and chief engineer of Power Systems. “IBM believes in transforming the architecture of server memory to allow open innovation and to fully exploit the diversity of memory technologies that will emerge in the coming years. This project leverages the new architecture to combine the best attributes of multiple types of media to achieve new levels of system cost/performance for memory-intensive cloud deployments and AI applications.”

First optical AI system built

By Nick Flaherty

Optalysys in Yorkshire, UK, has built the world’s first implementation of a convolutional neural network using its optical processing technology.

Convolutional neural networks, or CNNs, are used for image and pattern recognition in applications such as autonomous vehicles, weather forecasting and medical image analysis. They are computationally extremely intensive, particularly for complex models where there can be many convolutional “layers” to process.

While GPUs offer considerable advantages over conventional processors, they are limited by the breakdown of Moore’s Law and energy usage.

Optalysys’s optical processing technology is a fundamentally different approach using energy efficient laser light rather than silicon as the processing medium. This delivers speed improvements of several orders of magnitude over conventional computing at a fraction of the energy consumption.

“This is a hugely significant leap forward for the field of AI and clearly demonstrates the global potential for our Enabling Technology.” said Dr. Nick New, founder and CEO of Optalysys. “Optalysys has for the first time ever, applied optical processing to the highly complex and computationally demanding area of CNNs with initial accuracy rates of over 70%. Through our uniquely scalable and highly efficient optical approach, we are developing models that will offer whole new levels of capability, not only cloud-based but also opening up the extraordinary potential of CNNs to mobile systems.”

The demonstration successfully shows the Optalysys Optical Processing Technology processing a CNN, using the popular MNIST data set of hand-drawn numerals, which contains 60,000 training characters and 10,000 testing characters.

The optical computing platform uses a coprocessor based on an established diffractive optical approach that processes data with the photons of low-power laser light rather than the electrons of conventional electronics. This inherently parallel technology is highly scalable for CNN architectures.
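To see why CNN layers are so compute-hungry, consider the core operation of a convolutional layer, sketched here as plain C (illustrative only; Optalysys performs the equivalent operation optically rather than in software). Every output pixel needs k×k multiply-accumulates, and a real network repeats this for every filter in every layer:

```c
#include <stddef.h>

/* Minimal "valid"-mode 2D convolution: for an n x n input and a k x k
 * kernel, the output is (n-k+1) x (n-k+1). Each output element costs
 * k*k multiply-accumulates, which is why deep CNNs with many layers
 * and filters become so computationally demanding. */
void conv2d(const float *in, size_t n,
            const float *kernel, size_t k,
            float *out)
{
    size_t m = n - k + 1;
    for (size_t r = 0; r < m; r++) {
        for (size_t c = 0; c < m; c++) {
            float acc = 0.0f;
            for (size_t i = 0; i < k; i++)
                for (size_t j = 0; j < k; j++)
                    acc += in[(r + i) * n + (c + j)] * kernel[i * k + j];
            out[r * m + c] = acc;
        }
    }
}
```

On a 28 x 28 MNIST digit even one small layer performs thousands of these inner products, and GPU or optical accelerators win by doing them in parallel rather than in nested loops like this.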

Wednesday, March 21, 2018

Small form factor PC for IoT edge computing

By Nick Flaherty

Dutch hardware maker Logic Supply has launched an ultra-small-form-factor fanless PC for edge computing in the Internet of Things (IoT).

The CL200 has a durable cast aluminum enclosure, and is configurable with Wi-Fi, Bluetooth and 4G connectivity.

"IoT and Edge projects require flexibility, connectivity and dependability," said Logic Supply Director of Engineering Michael Kleiner. "The CL200 is our smallest fanless system ever, and represents the next generation of IoT computing by combining connection flexibility and efficient performance in an affordable, highly-reliable platform."

Both models in the CL200 series are powered by an efficient Intel Apollo Lake Celeron processor and measure a mere 83mm x 116mm x 34mm. The base model CL200, running on Linux Ubuntu 16.04, features 1 GB of RAM and 8 GB of onboard storage. The CL210 steps up to 2 GB of RAM and 32 GB of storage, and can be configured with Ubuntu or Windows 10 IoT. A built-in MicroSD card slot on both models enables additional removable storage.

I/O on the CL200 includes one mini DisplayPort capable of 1080p or 4K resolution, one Gigabit LAN port, and two USB 3.0 ports. The CL210 features two mini DisplayPorts capable of dual 1080p or single 4K resolution, dual Gigabit LAN, two USB 3.0 ports, and adds a 3.5 mm audio jack. Both systems have an additional USB 2.0 port and RS-232 box header on the bottom, and are configurable with Wi-Fi/Bluetooth and Extrovert 4G LTE capability.

One key application is signage. "We've been working with Logic Supply for the last few years on a range of projects that require highly reliable computer hardware," said Kevin Romano, Executive Vice President and founder of the Christie Experiential Network. "In our industry, reliability is paramount. Every moment of downtime potentially represents significant revenue loss. Employing Logic Supply hardware lets us focus exclusively on creating immersive and engaging OOH solutions for our clients and their content."

In addition to IoT and digital media applications, the CL200 is well suited for data acquisition, automation and network gateway installations that require a reliable, versatile, ultra small form factor device.

The CL200 will be available in spring 2018, and Logic Supply is currently engaging with interested customers on project requirements and timelines.

IAR buys into IoT security with Secure Thingz

By Nick Flaherty

Development tool maker IAR Systems is to acquire Secure Thingz, a provider of advanced security solutions for embedded systems in the Internet of Things (IoT). This positions IAR as a frontrunner in offering solutions for security in embedded systems.

Secure Thingz is only two years old and develops and sells products and services for implementing embedded security in connected devices. The company has provided security solutions for the Renesas Synergy Platform, which is very attractive to IAR.

The company's founder, Haydn Povey, has held senior management roles in marketing and business development at Arm and is an executive board member of the IoT Security Foundation. He will take over as CEO of Secure Thingz under IAR.

Machina Research predicts there will be 27 billion IoT connections in 2025, facing security challenges such as IP theft, counterfeiting and overproduction, as well as data theft and potentially life-threatening sabotage. However, only 4 percent of the IoT products available are marketed as secure, says ABI Research. This means the total market for secure microcontrollers for the IoT could reach $1.2 billion in 2022, with secure IoT products representing almost 20 percent of new IoT units.

The acquisition follows IAR's purchase of a 10 percent equity stake in Secure Thingz last April, and last month the jointly developed Embedded Trust product was launched. This enables secure development and makes security part of the development workflow, enabling companies to safeguard intellectual property against overproduction and counterfeiting, manage software updates in a robust way, and protect end users from malware intrusion and theft or loss of data.

"With the increasing number of connected devices, our customers are facing new challenges. One of the major challenges is how to deliver secure products in a world where even minor failures can lead to major consequences," said Stefan Skarin, CEO, IAR Systems. 

"As a first step, our customers mainly need help protecting themselves against overproduction and IP theft, and we are responding to this need with a new offering that provides the possibility of creating a modern workflow where security is included from the start. The acquisition of Secure Thingz is a step in our increased ambition for future growth through new technology, new markets, new business models, and new relationships. It also secures our position as a frontrunner in a changing industry."

"We are very excited to become a part of the highly competent IAR Systems team," said Haydn Povey, founder and incoming CEO of Secure Thingz. "We have already established a smooth cooperation with the development of Embedded Trust, and our combined resources within technology, sales and customer support will enable us to accelerate the development of the innovative security solutions that the digital products market so desperately needs."

He replaces Krishna Anne, the outgoing CEO. "This strategic partnership with IAR Systems will enable Secure Thingz to provide a wide-ranging solution for secure development, secure provisioning, programming, and secure lifecycle management in partnership with the silicon device vendors, market channel partners and programming partners," said Anne.

Completion of the deal is expected in two weeks.

Monday, March 19, 2018

Power news this week

By Nick Flaherty

  • UPS designs suck, says open source engineer
  • Philips Lighting pushes into solar-powered lighting as it changes name
  • VW plans battery partnerships for massive expansion
  • Team produces monocrystalline silicon films ten times faster for solar cells
  • White graphene provides hydrogen storage boost for fuel cells
  • KNX transceiver integrates power for building automation

NEW POWER PRODUCTS

  • Vitreous resistors for high continuous power dissipation
  • 200W converter cuts board space by 40%
  • IoT power measurement and profiling tool

Xilinx looks to create new class of programmable device for AI

By Nick Flaherty

FPGA pioneer Xilinx is looking to create a new class of programmable device optimised for machine learning and artificial intelligence (AI) with dynamic reconfigurability.

The Adaptive Compute Acceleration Platform (ACAP) is a multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. This adaptation can be performed dynamically during operation, delivering higher levels of performance and performance-per-watt than CPUs or GPUs.

ACAP has been under development for four years at an accumulated R&D investment of over $1bn, says CEO Victor Peng, with over 1,500 hardware and software engineers at Xilinx designing “ACAP and Everest.” Software tools have been delivered to key customers, and “Everest” will tape out in 2018 with customer shipments in 2019.

The devices are aimed at applications such as video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration. Software and hardware developers will be able to design ACAP-based products for end point, edge and cloud applications.

The first ACAP product family, codenamed “Everest”, will be developed in TSMC 7nm process technology and will tape out later this year.

“This is a major technology disruption for the industry and our most significant engineering accomplishment since the invention of the FPGA,” says Victor Peng, president and CEO of Xilinx. “This revolutionary new architecture is part of a broader strategy that moves the company beyond FPGAs and supporting only hardware developers. The adoption of ACAP products in the data center, as well as in our broad markets, will accelerate the pervasive use of adaptive computing, making the intelligent, connected, and adaptable world a reality sooner.”

ACAP has – at its core – a new generation of FPGA fabric with distributed memory and hardware-programmable DSP blocks, a multicore SoC, and one or more software programmable, yet hardware adaptable, compute engines, all connected through a network on chip (NoC). 

It also has highly integrated programmable I/O functionality, ranging from integrated hardware programmable memory controllers, high speed SerDes technology and RF-ADC/DACs, to integrated High Bandwidth Memory (HBM) depending on the device variant.

Software developers will be able to target ACAP based systems using tools like C/C++, OpenCL and Python. An ACAP can also be programmable at the RTL level using FPGA tools.

“This is what the future of computing looks like,” says Patrick Moorhead, Founder, Moor Insights & Strategy. “We are talking about the ability to do genomic sequencing in a matter of a couple of minutes, versus a couple of days. We are talking about data centers being able to program their servers to change workloads depending upon compute demands, like video transcoding during the day and then image recognition at night. This is significant.”

Everest is expected to achieve 20x performance improvement on deep neural networks compared to today’s latest 16nm Virtex VU9P FPGA. Everest-based 5G remote radio heads will have 4x the bandwidth versus the latest 16nm-based radios (although much of that will come from the move to 7nm).

Wednesday, March 14, 2018

Intel makes reference hypervisor for IoT devices open source

By Nick Flaherty

The Linux Foundation has launched a project for an embedded reference hypervisor built with real-time and safety-criticality in mind as a framework for an open source embedded hypervisor specifically for the Internet of Things (IoT).

ACRN uses engineering and code contributions from Intel with two main components: the hypervisor and its device model, complete with rich I/O mediators. Intel's experience in virtualization technology was key to the initial development of this hypervisor solution, says the Foundation, but this is a move that will compete with an IoT hypervisor it already owns from Wind River through the VxWorks kernel.

"With project ACRN, embedded developers have a new, immediately available hypervisor option," said Jim Zemlin, executive director of The Linux Foundation. "ACRN's optimization for resource-constrained devices and focus on isolating safety-critical workloads and giving them priority make the project applicable across many IoT use cases. We're pleased to welcome project ACRN and invite embedded developers to get involved in the new community."

Developers benefit from ACRN's small, real-time footprint, which is flexible enough to accommodate different uses and provides consideration for safety-critical workloads. Consolidating a diverse set of IoT workloads with mixed-criticality on to a single platform helps reduce both development and deployment costs allowing for a more streamlined system architecture. An example of this is the electronic control unit (ECU) consolidation in automotive applications. While open source hypervisor options are available today, none share ACRN's vision of an open source hypervisor solution optimized for embedded and IoT products.

"ACRN will have a Linux-based service OS and the ability to simultaneously run multiple types of guest operating systems, providing a powerful solution for workload consolidation," said Imad Sousou, corporate vice president and general manager of the Open Source Technology Center at Intel Corporation. "This new project delivers a flexible, lightweight hypervisor, designed to take real-time and safety-critical concerns into consideration and drive meaningful innovation for the IoT space."

ACRN will incorporate input from the open source, embedded, and IoT developer communities and encourages collaboration and code contributions to the project. Early ACRN project members include ADLINK, Aptiv, Intel, LGE, and Neusoft.

"The lack of open source virtualization solutions for embedded, real-time, and safety-critical systems has been greatly hindering consolidation and to some extent the most interesting forms of fog computing," commented Angelo Corsaro, chief technology officer of ADLINK Technology Inc. "The release of ACRN as a Linux Foundation project by Intel will be a game changer as it brings the agility and manageability of virtualized environments into embedded and real-time systems. This will be a key enabler toward making the Industrial Internet of Things happen for real."

"This approach from Intel fits very well within our product roadmap and is a welcomed approach that will meet our customers' desire to have more open source reference solutions," said Lee Bauer, vice president, Mobility Architecture Group of Aptiv. "Aptiv is excited to be a part of this new project, ACRN, and with it usher in a new era of flexibility and scalability for our mobility IoT product solutions."

"Because ACRN will allow for faster feasibility checking of ECU consolidation, it will benefit our growing vehicle components business," said Seongpyo Hong, vice president of LG Electronics. "As a result, we will be able to respond more quickly to OEMs' customized requirements and will continue to play a key role in contributing to the ACRN project."

"As Intel's strategic partner, Neusoft is pleased to join Intel in project ACRN," said Meng Lingjun, vice president of Neusoft Corporation and the general manager of Neusoft Automotive. "ACRN has landed in China's automotive electronics industry with practical implementation. I believe ACRN can meet the development requirements of IoT technology. We're pleased to work with open source communities and introduce ACRN into the ecosystem."
Related stories:

  • Bare metal hypervisor is key to Security 3.0 for AI
  • congatec moves into hypervisor software
  • Open source hypervisor port to MIPS processors brings more security to the IoT
  • SiFive adds linux support to RISC-V CPU cores

Friday, March 09, 2018

MIPS rides the AI wave

By Nick Flaherty

MIPS has used some recent design wins around AI to remind people it's still active in the embedded business following its sell-off by Imagination Technologies.

The newest deal for its 64bit core with Wave Computing (whose CEO Derek Meyer is a former senior exec at MIPS) is the main point, but existing customers such as Mobileye Vision Technologies (now owned by Intel) have also supported the instruction set.

NetSpeed Systems, Fungible, ThinCI and Denso are also using the tech for embedded chips. 

Wave is using the MIPS core in its next generation of deep-learning solutions to handle device management and control functions, including the real-time operating system (RTOS) and the system-on-chip (SoC) subsystem. The embedded MIPS core will support the dataflow technology that enables fast and efficient processing of neural network graphs.

“MIPS is the established leader in the world of 64-bit embedded cores and consistently delivers leading-edge solutions that drive pioneering products. Today, our core is at the heart of some of the world’s most innovative designs that are fueling the explosive growth of AI,” said David Lau, MIPS’ Vice President of Engineering. “Our hardware Multi-Threading technologies set us apart from others, and the efficiency and extensibility of our processor architecture is well suited to deep learning and AI. While focusing on our core competencies, we are committed to propelling the development of emerging intelligent applications.”

Derek Meyer, CEO of Wave Computing, commented, “The MIPS processor’s 64-bit architecture will enable us to further support the memory address range needed for next-generation AI applications, while its Multi-Threading capabilities enables faster real-time and more efficient task management of our dataflow software agents. In addition, the MIPS ecosystem and development tool environments are well suited for rapidly growing applications like AI.”

The MIPS64 architecture continues to be used in a variety of applications, from ADAS and set-top boxes to networking and telecommunications infrastructure. It provides a solid high-performance foundation for future MIPS processor-based development by incorporating powerful features including hardware virtualization, standardized privileged-mode instructions, support for past ISAs, and a seamless upgrade path from the MIPS32 architecture.

“Our collaboration with MIPS has played a significant role in numerous generations of Mobileye’s EyeQ SoCs for autonomous driving systems and has helped us achieve consistently best-in-class performance efficiency. In EyeQ5, Mobileye leverages the latest high-performance, highly efficient 64bit heterogeneous compute clusters of multi-threaded multi-core MIPS CPUs,” said Elchanan Rushinek, Vice President, Engineering at Mobileye. “The I6500-F core from MIPS has helped us to achieve a new level of performance, opening up our platform and meeting our uncompromised functional safety goals.”

Thursday, March 08, 2018

FPGA software targets low power applications

By Nick Flaherty

Lattice Semiconductor has launched the latest version of its FPGA development software, targeting low power embedded applications.

Lattice Radiant’s support for the iCE40 UltraPlus FPGAs greatly expands the device’s application across broad market segments including mobile, consumer, industrial, and automotive. The iCE40 UltraPlus devices are the world’s smallest FPGAs with enhanced memory and DSPs to enable always-on, distributed processing.

“Lattice is increasingly witnessing customers who are seeking to benefit from the ultra-low power, small form factor, and low cost features of iCE40 UltraPlus FPGAs,” said Choon-Hoe Yeoh, senior director, software marketing at Lattice Semiconductor. “Lattice Radiant software provides a range of enhancements for designing with iCE40 UltraPlus FPGAs in order to drive innovative designs in emerging embedded applications.”

The IP ecosystem with IEEE 1735 encryption provides core design support for the iCE40 UltraPlus device family for a broad range of applications including IoT sensor bridging, 8:1 microphone aggregation, and face detection.

Key features of the Lattice Radiant software include predictable design convergence with a unified design database, design constraint flow, timing analysis, and a new design constraint editor that simplifies both logical and physical design constraint editing. It also supports cross-probing between the physical and logical design implementations.

The Lattice Radiant software is now available for download free of charge.

Wednesday, March 07, 2018

First single chip PCIe NVMe SSD with enterprise-grade data protection

By Nick Flaherty

Silicon Motion Technology has launched a single-chip SSD with a PCIe Gen 3 NVMe 1.3 interface for high-performance mission critical applications. 

The FerriSSD SM689 supports a PCIe Gen 3 x4 interface while the SM681 supports a PCIe Gen 3 x2 interface, delivering sequential read speeds of up to 1.45GB/s and sequential write speeds of up to 650MB/s. Both products support multiple capacity configurations ranging from 16GB to 256GB and include enterprise-grade advanced data integrity and reliability capabilities using Silicon Motion’s proprietary end-to-end data protection, ECC and data caching technologies.

These sit alongside the PATA (SM601) and SATA (SM619) single chip SSDs for embedded computing applications in industrial, commercial, enterprise and automotive end-markets. The SSDs are customizable via firmware, and offer enhanced reliability and robust data integrity features that are essential for the extreme operating environments of these applications.

"The FerriSSD storage solutions are popular with automotive and industrial designers, allowing them to replace a hard disk drive with reliable solid-state alternatives," said Nelson Duann, Senior Vice President of Marketing and OEM Business at Silicon Motion. "The addition of a PCIe NVMe interface will enable significantly better performance for applications such as AI and autonomous driving.”

The SM689 and SM681 include end-to-end data path protection, which applies error correction code (ECC) to the SRAM and DRAM buffers as well as to the primary NAND Flash memory array, plus a DRAM data cache that handles data programming and redundancy without delaying host processor operations. A Hybrid Zone enables a single disk to be partitioned into single-level cell (SLC) and multi-level cell/triple-level cell (MLC/TLC) zones, enabling faster access speeds and better data retention.

Intelligent Scan/DataRefresh protects against the increased risk of data loss when operating at high temperatures, while Silicon Motion's NANDXtend technology incorporates a proprietary fourth-generation high-performance LDPC ECC engine with RAID, ensuring better data integrity even in extreme physical environments.
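The principle behind this kind of ECC protection can be illustrated with the textbook Hamming(7,4) code, which adds three parity bits to a four-bit value so that any single flipped bit can be located and corrected. This is only a teaching sketch; the SM689/SM681 use a far stronger LDPC engine, but the idea of transparently repairing corrupted data is the same:

```c
#include <stdint.h>

/* Textbook Hamming(7,4): a 4-bit nibble becomes 7 bits with three
 * parity bits, allowing any single flipped bit to be found and fixed.
 * Bit layout (positions 1..7, position 1 = MSB): p1 p2 d1 p3 d2 d3 d4. */
uint8_t hamming74_encode(uint8_t nibble)
{
    uint8_t d1 = (nibble >> 3) & 1, d2 = (nibble >> 2) & 1;
    uint8_t d3 = (nibble >> 1) & 1, d4 = nibble & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (uint8_t)((p1 << 6) | (p2 << 5) | (d1 << 4) |
                     (p3 << 3) | (d2 << 2) | (d3 << 1) | d4);
}

/* Returns the corrected nibble even if one bit of `code` was flipped:
 * the three parity checks form a syndrome that is the error position. */
uint8_t hamming74_decode(uint8_t code)
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++)
        b[i] = (code >> (7 - i)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    int err = (s3 << 2) | (s2 << 1) | s1;  /* 0 = no error */
    if (err)
        b[err] ^= 1;
    return (uint8_t)((b[3] << 3) | (b[5] << 2) | (b[6] << 1) | b[7]);
}
```

A controller applies the same principle at much larger block sizes: the host never sees the flipped bit because the decode path repairs it on the fly.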

The single-chip package measures 16mm x 20mm (SM689) or 11.5mm x 13mm (SM681) and supports industrial temperatures from -40 to 85 degrees Celsius. Densities range from 16GB to 256GB at launch, and the family is in production.

Code compression for embedded apps

By Nick Flaherty

SEGGER has launched a new member of its code compression software family.

emCompress-ToGo is based around SEGGER's new compression algorithm SMASH, which has been specifically developed for use in embedded systems with almost no RAM.

It offers high speed and low memory usage for on-target compression and decompression, using no RAM other than the buffer holding the uncompressed data.
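SEGGER has not published SMASH's internals, but the "no working RAM" property can be illustrated with a much simpler scheme. The run-length codec below keeps no state beyond the input and output buffers themselves; it is purely illustrative and is not SMASH:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative run-length codec (NOT SMASH, whose details are
 * unpublished): like the scheme described above, it needs no working
 * RAM beyond the data buffers themselves. Runs are stored as
 * (count, byte) pairs with counts capped at 255. */

/* Returns the number of encoded bytes written to `out`. */
size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}

/* Returns the number of decoded bytes written to `out`. */
size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t r = 0; r < in[i]; r++)
            out[o++] = in[i + 1];
    return o;
}
```

Real general-purpose compressors such as LZMA or DEFLATE achieve far better ratios but typically need dictionary windows of kilobytes of RAM, which is exactly the cost a scheme like SMASH is designed to avoid on small microcontrollers.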

The proven emCompress-Embed and emCompress-Flex products offer support for standard compression schemes such as LZMA, DEFLATE, and LZJU90 without an open source license.

Whilst the emCompress family of three products is geared towards use in embedded systems, it can also be used for other purposes, such as embedding data in PC or other applications. It is proven in various roles, from reducing the size of firmware updates and FPGA bitstreams to increasing the effective bandwidth of communication channels.

All code has been developed by and can be licensed from SEGGER. It is not encumbered by any Open Source license.

Tuesday, March 06, 2018

Wi-Fi transceivers and modules for the IoT cut power in half

By Nick Flaherty

Silicon Labs introduced a portfolio of Wi-Fi transceivers and modules to simplify the design of power-sensitive, battery-operated Wi-Fi products including IP security cameras, point-of-sale (PoS) terminals and consumer health care devices. 

Optimised for exceptional energy efficiency, the WF200 transceivers and WFM200 modules support 2.4 GHz 802.11 b/g/n Wi-Fi while delivering the high performance and reliable connectivity necessary as the number of connected devices increases in home and commercial networks.

“We’ve delivered the first low-power Wi-Fi portfolio designed specifically for the IoT, enabling breakthroughs in secure, battery-powered connected device designs that simply weren’t possible until now,” said Daniel Cooley, Senior Vice President and General Manager of IoT products at Silicon Labs. “It’s no surprise we’re seeing strong customer demand for Wi-Fi technology that fits within the tight power and space budgets of battery-operated devices, freeing end users from the need to connect to ac power sources.”

“The market for Wi-Fi devices in low-power IoT end node applications is forecast to grow from 128 million units per year in 2016 to 584 million units per year by 2021,” said Christian Kim, senior analyst for IHS Markit, a global business information provider.

Developers can speed time to market and miniaturize battery-operated Wi-Fi products with the WFM200, the world’s smallest pre-certified system-in-package (SiP) module with an integrated antenna. The WF200 transceiver provides a cost-effective option for high-volume applications and gives developers the flexibility to meet unique system design requirements, such as using external antennas.

The WF200 transceiver and WFM200 module have low transmit (TX: 138 mA) and receive (RX: 48 mA) currents, with an average Wi-Fi current consumption of 200 µA (DTIM = 3), contributing to ultra-low system power. A link budget of 115 dB allows for long-range Wi-Fi transmissions.
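The quoted 200 µA average current translates directly into standby battery life. A back-of-envelope sketch, assuming a hypothetical 2000 mAh battery (an illustration figure, not a Silicon Labs specification):

```c
/* Back-of-envelope battery-life estimate from the quoted 200 uA
 * average Wi-Fi current (DTIM = 3). The 2000 mAh capacity used in the
 * example below is an assumed illustration figure, not a Silicon Labs
 * specification. */
static double battery_life_hours(double capacity_mAh, double avg_mA)
{
    return capacity_mAh / avg_mA; /* hours = mAh / mA */
}

/* Example: battery_life_hours(2000.0, 0.200) gives about 10,000 hours,
 * i.e. over a year of connected standby at the quoted average current,
 * ignoring battery self-discharge and the rest of the system. */
```

The same arithmetic shows why the 138 mA transmit current matters far less than the sleep-state average: a device that transmits rarely spends almost all of its charge budget at the DTIM-listening current.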

The WF200 has a footprint of 4 mm x 4 mm in a QFN32 package, while the WFM200 LGA52 SiP module measures 6.5 mm x 6.5 mm for space-constrained applications.
Security features include secure boot, a secure host interface, and hardware cryptography acceleration supporting AES, PKE and TRNG. The modules are pre-certified for the FCC, CE, IC, South Korea and Japan to minimize development time, effort and risk.

Silicon Labs is sampling WF200 transceivers and WFM200 SiP modules to selected customers, and production parts are planned for Q4 2018. 

MISRA-compliant embedded crypto tools target IoT

By Nick Flaherty

MISRA is a safety-critical code-checking standard for C that is most common in automotive designs, with a number of tools implementing the checks, but it is now being applied to cryptography for the Internet of Things (IoT).

HCC Embedded has launched a suite of embedded cryptography tools developed to the MISRA standard so that IoT devices can be managed securely. This provides a complete end-to-end security solution focused on system-level design and software-quality issues that are largely unaddressed in the industry.

All the software libraries for CryptoCore are managed through HCC’s Embedded Encryption Manager (EEM), which provides a high-quality standard interface to any hardware or software cryptography implementation. This greatly simplifies the design process, makes software portable, and enables use of software crypto-libraries or hardware-accelerated algorithms on chips that provide them.
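The value of such a management layer is that the application is written once against a single interface, while the backing implementation can be a software library or a hardware accelerator. A minimal sketch of the idea in C, with names and signatures invented for illustration (this is not HCC's actual EEM API):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a crypto abstraction layer in the spirit of
 * HCC's EEM; all names and signatures here are invented for
 * illustration. The application calls one interface, and a driver
 * table routes the call to software or hardware. */
typedef struct {
    int (*hash)(const uint8_t *msg, size_t len, uint8_t *digest);
} crypto_driver_t;

/* Toy software "hash" (a one-byte XOR checksum) standing in for a
 * real algorithm such as SHA-256. */
static int sw_hash(const uint8_t *msg, size_t len, uint8_t *digest)
{
    uint8_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc ^= msg[i];
    digest[0] = acc;
    return 0;
}

static const crypto_driver_t sw_driver = { sw_hash };
/* A hardware-accelerated driver would supply different function
 * pointers but present the identical interface to the application. */

int app_hash(const crypto_driver_t *drv,
             const uint8_t *msg, size_t len, uint8_t *digest)
{
    return drv->hash(msg, len, digest);
}
```

Swapping `sw_driver` for a hardware-backed driver table changes nothing in the application code, which is the portability property the EEM is described as providing.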

Available libraries support authentication, confidentiality, and integrity strategies through Base64, DSS, Elliptic Curve, Ephemeral Diffie-Hellman, MD5, RSA, SHA, DES, and Tiger. EEM provides a universal management interface for software- or hardware-based acceleration, where available, on microcontrollers including NXP Kinetis and i.MX, TI TM4C, and STM32 MCUs. Renesas RX controllers, however, are notable by their absence.

Discussions about security tend to focus on the algorithms rather than on real-world issues faced by embedded-networking engineers. Because large-scale hacks of modern algorithms are almost unknown, the pressing engineering challenge is how to design a system that is secure and of quality sufficient to minimize security risks.

HCC handles such issues by providing end-to-end security solutions appropriate for real-world applications and by providing evidence of the quality processes and standards used. All components are compliant with HCC’s rigorous MISRA standards and are provided with full compliance reports. All MISRA rules are applied and any exceptions are identified and explained. Advanced test suites are also available.

“Software defects cause security weaknesses but well established quality processes can be used to minimize the risk to system security,” said HCC Embedded CEO Dave Hughes. “HCC is committed to following strong quality processes and providing the evidence to system designers to help them create more secure networks.”

Mini-STX format for 10GbE micro server for upgradeable IoT edge computing

By Nick Flaherty

congatec has launched a deployment-ready design study of a micro server carrier board with 10GbE support that can be used for edge processing in the Internet of Things.

The modular server board in the 5x5 inch Mini-STX form factor (140mm x 147mm) offers high scalability across all suitable embedded server processor sockets by using a COM Express Type 7 slot. This enables 10GbE edge node performance upgrades at minimal cost, as nearly all investments in the real-time system design of 10GbE edge nodes can be re-used.

To upgrade the performance, OEMs and network operators only need to exchange the Server-on-Module. This is especially interesting for operators of 5G networks and edge data centres, who expect real-time performance demands to increase once 10GbE infrastructures become deployed more widely, leading to constantly lower revenues per processed data volume. Also, all IIoT, Industry 4.0 and fog server applications will require continuous performance upgrades as security, analytics and artificial intelligence demands will keep evolving for at least a decade to come.

"Building a 10GbE infrastructure with IIoT, edge, fog or Industry 4.0 servers and 5G small cells for decentralized decision making in real time is only the first step," said Martin Danzer, Director of Product Management at congatec. "Once this infrastructure is established, the performance of these nodes will have to constantly increase as we are only at the beginning of designing such decentralized 10GbE node technologies and the demand for transcoding, security, data capture and analytics capabilities as well as artificial intelligence and real-time communication will continue to grow dramatically."

The edge server board in the 5x5 Mini-STX form factor uses a COM Express Type 7 conga-B7AC module based on the Intel Atom C3000 processor. With processor power consumption starting at just 11W TDP, the system offers 4x real-time 10GbE network performance and up to 16 cores for handling many smaller packet sizes in parallel. Compared to other multi-core solutions, such as the Intel Xeon D processors, the costs and power consumption are significantly lower, making it possible to roll out very high network bandwidths and storage capacities into the field.

The congatec micro server carrier board can be equipped with eight different Intel Atom server processor versions - from the 16-core Intel Atom C3958 to the quad-core C3508 processor for the extended temperature range (-40°C to +85°C). All offer up to 48GB of fast DDR4-2400 memory, which can be designed with or without Error Correction Code (ECC) depending on customer requirements. The 10GbE interfaces are implemented as standard via SFP+ cages, enabling network connection via both fiber optic and copper cables. In addition, the carrier board provides 2x 1GbE and 2x USB 3.0 interfaces for service and peripherals. One of the 1GbE ports is connected to the integrated board management controller and can therefore be used for server-typical remote management tasks.

The carrier board includes a VGA output and a serial interface for local administration. For custom extensions, it provides three M.2 slots, two for M.2 2280 cards with key M and 4 PCIe lanes or 1x SATA, which makes them particularly suitable for storage media. The third M.2 slot accepts M.2 3042 cards with key A. With 2x PCIe, 1x USB 3.0 and I²C, it can connect both storage media and other peripherals. The feature connectors also provide GPIOs, I²C, SM and LPC buses.

If the Server-on-Modules require active cooling - for example, with a 16-core Intel Xeon D processor - optional CPU and system fans can also be supported and controlled. This means the congatec micro server carrier board in the 5x5 Mini-STX form factor offers the same server-class performance that up to now only fully-featured 19-in rackmount servers were able to provide and can be mounted anywhere and even integrated into autonomous vehicles. 

Monday, March 05, 2018

Startup aims to power real time IoT apps at the edge

By Nick Flaherty

The massive wave of growth and change from the Internet of Things (IoT) cannot be met by traditional cloud and embedded systems. Like mobile and cloud before it, a whole new purpose-built approach to computing for the edge is required, says startup Zededa.

As a result, it is building a secure, open-source, cloud-native approach to real-time edge applications for self-driving cars and industrial robots. It has raised $3m from investors including Ed Zander, former CEO of Motorola and former COO of Sun Microsystems.

“Tomorrow’s edge computing environment will be distributed, autonomous and cooperative. The edge is complex, and not only has to scale out securely, but simultaneously must become friendlier for app developers. That's the problem we are solving at Zededa,” said CEO and Co-Founder Said Ouissal. “True digital transformation requires a drastic shift from today’s embedded computing mindset to a more secure-by-design, cloud-native approach. This will unlock the power of millions of cloud app developers and allow them to digitize the physical world as billions of ‘things’ become smart and connected.”

The company sees an open ecosystem and a completely new technology stack that creates the service fabric essential to achieving the hyperscale that edge computing requires. This will bring together operating systems, virtualization, networking, security, blockchain, cloud and application platforms.

Zededa is currently accepting sign-ups for early access to its platform which will move into customer trials in the first half of 2018.

Power news this week

By Nick Flaherty

Microsemi deal takes Microchip over $50bn

By Nick Flaherty

Microcontroller designer Microchip Technology is to buy Microsemi in a surprise $10bn deal that will take the combined company over $50bn in revenues.

The two have signed a definitive agreement for Microchip to acquire Microsemi for $68.78 per share in cash. The acquisition price represents a total equity value of about $8.35 billion, and a total enterprise value of about $10.15 billion, after accounting for Microsemi’s cash and investments.

“We are delighted to welcome Microsemi to become part of the Microchip team and look forward to closing the transaction and working together to realize the benefits of a combined team pursuing a unified strategy. Even as we execute a very successful Microchip 2.0 strategy that is enabling organic revenue growth in the mid to high single digits, Microchip continues to view accretive acquisitions as a key strategy to deliver incremental growth and stockholder value. The Microsemi acquisition is the latest chapter of this strategy and will add further operational and customer scale to Microchip,” said Steve Sanghi, Chairman and CEO of Microchip.

The deal particularly brings Microchip two new product categories - discrete devices and FPGAs.

"Joining forces and combining our complementary product portfolios and end market exposure will offer our customers a richer set of solution options to enable innovative and competitive products for the markets they serve,” said Ganesh Moorthy, President and COO of Microchip. The company's last chip deal was Atmel in 2016.

Microchip anticipates achieving an estimated $300m in savings in the third year.

Subject to approval by Microsemi stockholders and regulatory approvals, the deal is expected to close in the next three months.
