
Thursday, March 26, 2020

Transmitter boost for terahertz sensing

By Nick Flaherty

Researchers at EPFL in Lausanne have developed a low-cost transmitter for terahertz signals used for sensing.

The team at the Power and Wide-band-gap Electronics Research Laboratory (POWERlab), led by Prof. Elison Matioli, built a nanoscale device that can generate extremely high-power signals in just a few picoseconds to produce high-power THz waves.

The technology, which can be mounted on a chip or a flexible medium, could one day be installed in smartphones and other hand-held devices. THz waves are widely used for detecting objects through other materials.

The device generates high-intensity waves from a spark, with the voltage spiking from 10 V (or lower) to 100 V in the range of a picosecond. The device is capable of generating this spark almost continuously, meaning it can emit up to 50 million signals every second. When hooked up to antennas, the system can produce and radiate high-power THz waves.

The device consists of two metal plates situated very close together, down to 20nm apart. When a voltage is applied, electrons surge towards one of the plates, where they form a nanoplasma. Once the voltage reaches a certain threshold, the electrons are emitted almost instantly to the second plate. This rapid movement, enabled by such fast switching, creates a high-intensity pulse that produces high-frequency waves.

Conventional electronic devices are only capable of switching at speeds of up to one volt per picosecond. The new device's extremely short rise times, down to five picoseconds, were limited only by the measurement set-up. By integrating the devices with dipole antennas, the team emitted high-power terahertz signals with a power-frequency trade-off of 600 mW·THz², much greater than that achieved by the state of the art in compact solid-state electronics.

The new device can be more than ten times faster and can generate both high-energy and high-frequency pulses. "Normally, it's impossible to achieve high values for both variables," said Matioli. "High-frequency semiconductor devices are nanoscale in size. They can only cope with a few volts before breaking down. High-power devices, meanwhile, are too big and slow to generate terahertz waves. Our solution was to revisit the old field of plasma with state-of-the-art nanoscale fabrication techniques to propose a new device to get around those constraints."

"These nanodevices, on one side, bring an extremely high level of simplicity and low cost, and on the other side, show excellent performance. In addition, they can be integrated with other electronic devices such as transistors. Considering these unique properties, nanoplasma can shape a different future for the area of ultra-fast electronics," said Mohammad Samizadeh Nikoo, a PhD student at the POWERlab.

The technology could have wide-ranging applications beyond generating THz waves. The ease of integration and the compactness of the nanoplasma switches could enable their implementation in several fields, such as imaging, sensing, communications and biomedical applications.
"We're pretty sure there'll be more innovative applications to come," said Matioli.

Tuesday, March 24, 2020

Industrial Temperature 2TB NVMe and SATA M.2 solid state drives

By Nick Flaherty

Greenliant has started volume production of its industrial temperature (-40°C to +85°C) 2 Tbyte NVMe and SATA M.2 solid state drive (SSD) modules. 

Built in the 2280 form factor, and offered with hardware encryption and on-board DRAM, the ArmourDrive SSDs save space, improve security and increase capacity for a wide variety of applications.

The 87 PX Series SATA and 88 PX Series NVMe M.2 drives are available in 240 GB, 480 GB, 960 GB and 1.92 TB capacities across the industrial temperature range. The built-in ECC bit error detection and correction is optimised for 3D three-bit-per-cell NAND devices, and advanced flash management extends drive life through dynamic and static wear leveling so that cells are used equally. The drives also support AES-256/TCG Opal encryption and Secure Erase.
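The idea behind dynamic wear leveling is simple to sketch: the flash management layer steers each new write to the least-worn free block, so no cell wears out ahead of the rest. A minimal toy model of the policy (not Greenliant's firmware) looks like this:

```python
# Toy illustration of dynamic wear leveling (not Greenliant's firmware):
# new writes always go to the free block with the fewest erase cycles,
# so wear spreads evenly across the NAND array.

class WearLeveller:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # erases per physical block
        self.free = set(range(num_blocks))     # currently unallocated blocks

    def allocate(self):
        # Pick the least-worn free block for the next write.
        block = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(block)
        return block

    def erase(self, block):
        # Erasing returns the block to the free pool and records wear.
        self.erase_counts[block] += 1
        self.free.add(block)

wl = WearLeveller(4)
for _ in range(100):              # 100 write/erase cycles
    b = wl.allocate()
    wl.erase(b)

print(wl.erase_counts)            # wear is spread evenly: [25, 25, 25, 25]
```

Static wear leveling goes one step further by also relocating long-lived cold data out of low-wear blocks, which the sketch above omits.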

“Customers rely on our wide and deep selection of quality solid state storage products, and Greenliant is pleased to be one of the first companies to offer I-temp 2 Terabyte M.2 SSDs for industrial applications that require higher capacities,” said Arthur Kroyan, vice president of business development and marketing, Greenliant. “With on-board DRAM and advanced security features, these products deliver consistent sustained performance and strong user data protection, which can be important advantages for certain embedded systems.”

Greenliant is now shipping the full line.

SMARC2.1 revision drives MIPI into embedded vision

By Nick Flaherty

SGET has approved the new SMARC 2.1 specification, aiming to drive the adoption of the MIPI standard into the embedded vision market.

The latest version adds additional features such as SerDes support for extended edge connectivity and up to four MIPI-CSI camera interfaces to meet the increasing demand for a fusion of embedded computing and embedded vision. The new features are backward compatible with Rev. 2.0, which means that 2.1 modules can be integrated on 2.0 carriers. All extensions to Rev.2.0 are also optional, so SMARC 2.0 modules are automatically compatible with SMARC 2.1.

"The new SMARC 2.1 specification is an important step towards embedding MIPI-CSI camera technology, which is widely used in smartphones, firmly and for the first time within the standard of an embedded computing specification," said Christian Eder, Director Marketing at congatec and SGET editor of the SMARC 2.1 specification. "We need this extremely cost-effective technology in order to be able to integrate it into any embedded application. For this purpose, SMARC 2.1 provides not only one or two, but up to four interfaces for comprehensive situational awareness and highest device efficiency."

Demand for machine vision cameras is growing at double-digit rates, particularly for applications such as surveillance, forensics, robotic surgery, intelligent traffic systems, border control and health monitoring. In addition, camera technology continues to be used for process inspections to reduce errors such as incorrect fill levels, faulty products in the production line and packaging defects. Autonomous logistics vehicles also take up a large market share in the industrial sector.

With comprehensive Ethernet support for more connectivity at the edge gaining increasing significance, two of the four supported PCIe lanes now offer two additional Ethernet ports via SerDes signals. These can also be used for vision through the connection of GigE vision cameras.

Other new features include PCIe clock request signals, which can be used to switch off unused PCIe lanes to save power, and 14 instead of 12 GPIOs (General Purpose Input/Output). In response to many requests, the specification document was also completely restructured to optimize readability.

Further information on the new SMARC 2.1 specification can be found at SGET.

Additionally, congatec has prepared an updated white paper on the advantages of the SMARC specification, which is available for download from congatec.

Further information is also available from congatec on the SMARC 2.1 compatible conga-SMX8 with Arm-based NXP i.MX 8 processors and the SMARC 2.1 compatible conga-SA5 with Intel Atom processors (code-named Apollo Lake).

Monday, March 16, 2020

Boosting the UK's ventilator production for Covid-19

By Nick Flaherty on eeNews Europe

The UK’s only ventilator manufacturer is ramping up production to meet demand to tackle the Covid-19 Coronavirus crisis as the UK government calls for assistance from the industry.

“We have increased our capacity, and moved to 7 days working across our global manufacturing sites where required to meet demands from our customers,” said Sally Cozens, managing director at Breas UK. She declined to say what that capacity is, other than “we are increasing to meet demands globally”.

Breas UK claims to be the only UK ventilator manufacturer, making a range called Nippy, while its parent company Breas also makes other medical equipment. Breas is headquartered in Sweden and owned by the Chinese Fosun conglomerate, and has 150 staff worldwide.

The German government has reportedly ordered 10,000 ventilators from Draegerwerk in Luebeck, while Hamilton Medical in Switzerland has ramped up production by 30% to a run rate of 20,000 a year.

Medical 3D printing firm Open Bionics has also offered its help. “We build medical devices, have technicians, engineers, processes and assembly lines in place and ready to go. We need parts and build instructions,” said Sam Payne, chief operating officer and co-founder of Open Bionics in Bristol, UK.

Other suggestions of companies with medical equipment skills have included Meditec England, Smiths Medical, SLE, Diamedica, OES Medical and Penlon.

Friday, March 06, 2020

Silicon Labs launches security tech for system-on-chip designers

By Nick Flaherty

Silicon Labs has developed physically unclonable function (PUF) hardware to reduce the risk of IoT security breaches and compromised intellectual property in system-on-chip designs. The first SoC devices will be launched later this month.

The Secure Vault technology is a suite of security features designed to help connected device manufacturers address Internet of Things (IoT) security and data threats. It includes a dedicated core, bus and memory that are separate from the host processor. This hardware separation isolates critical features, such as secure key store management and cryptography, into their own functional areas, making the overall device more secure.

This will be used in the Wireless Gecko Series 2 platform that will be launched by the end of Q2 2020.

"The security landscape is changing rapidly, and IoT developers face increasing pressure to step up device security and meet evolving regulatory requirements," said Matt Johnson, senior vice president and general manager of IoT product at Silicon Labs. "Secure Vault simplifies development, accelerates time-to-market and helps device makers future-proof products by taking advantage of the most advanced integrated hardware and software security protection available today for IoT wireless SoCs."

"Embedded security is a key requirement for IoT products, and software updates alone cannot address all vulnerabilities present in insecure hardware," said Tanner Johnson, senior cybersecurity analyst at Omdia. "As a result, hardware components can comprise the front line of defense for device security, especially with new legislation targeting IoT product security."

Secure Vault is also aimed at devices addressing emerging regulatory measures, such as GDPR in Europe and SB-327 in California, making secure over-the-air (OTA) updates to connected devices possible throughout the product lifecycle.

One of the biggest challenges for connected devices is post-deployment authentication. The trust provisioning service with optional secure programming provides a secure device identity certificate during chip manufacture for each individual silicon die.

Keys are encrypted and isolated from the application code, and virtually unlimited secure key storage is offered as all keys are encrypted using a master encryption key generated using a PUF. The power-up signatures are unique to a single device, and master keys are created during the power-up phase to eliminate master key storage, further reducing attack vectors.
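The scheme can be sketched in a few lines. This is not Silicon Labs' implementation, just a toy illustration using Python's standard library: a master key is derived from a device-unique PUF response at power-up and is never stored, while application keys exist at rest only in wrapped form. A real design would use an authenticated cipher such as AES-GCM rather than the XOR stream shown here, and the key labels are invented for the example.

```python
# Toy sketch of PUF-style key wrapping (not Silicon Labs' implementation).
# A master key is derived from a device-unique power-up signature and is
# never stored; application keys are kept only in wrapped (encrypted) form.

import hashlib
import hmac

def derive_master_key(puf_response: bytes) -> bytes:
    # In a real part the PUF response comes from device physics at
    # power-up; here it is just a placeholder byte string.
    return hashlib.sha256(b"master-key-derivation" + puf_response).digest()

def keystream(master_key: bytes, label: bytes, length: int) -> bytes:
    # Simple HMAC-based stream for illustration only; a real design
    # would use an authenticated cipher such as AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(master_key, label + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap(master_key, label, app_key):
    ks = keystream(master_key, label, len(app_key))
    return bytes(a ^ b for a, b in zip(app_key, ks))

unwrap = wrap  # XOR stream: wrapping and unwrapping are the same operation

mk = derive_master_key(b"device-unique-puf-bits")
blob = wrap(mk, b"zigbee-network-key", b"sixteen byte key")
assert unwrap(mk, b"zigbee-network-key", blob) == b"sixteen byte key"
assert blob != b"sixteen byte key"   # only the wrapped form is stored
```

Because the master key is regenerated from the PUF at every power-up, there is no master key in flash for an attacker to extract.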

Advanced tamper detection ranges from easy-to-implement product enclosure tamper resistance to sophisticated detection of silicon tampering through voltage, frequency and temperature manipulation. Hackers use these changes to force hardware or software to behave unexpectedly, creating vulnerabilities for glitch attacks. Configurable tamper-response features enable developers to set up appropriate response actions with interrupts, resets or, in extreme cases, secret key deletion.

Silicon Labs is currently sampling new Secure Vault-enabled wireless SoCs, which are planned to be released in late Q2 2020.

Silicon Labs
Silicon Labs (NASDAQ: SLAB) is a leading provider of silicon, software and solutions for a smarter, more connected world. Our award-winning technologies are shaping the future of the Internet of Things, Internet infrastructure, industrial automation, consumer and automotive markets. Our world-class engineering team creates products focused on performance, energy savings, connectivity and simplicity.


Wednesday, March 04, 2020

CEVA lays claim to world's most powerful DSP

By Nick Flaherty

CEVA has launched its fourth generation digital signal processor architecture that it says is the world's most powerful.

The scalar and vector processing in the XC architecture supports double 8-way VLIW and up to 14,000 bits of data level parallelism. This supports performance of up to 1,600 GOPS, with dynamic multithreading and an advanced pipeline reaching operating speeds of 1.8GHz at 7nm.

The architecture, available as IP for a system-on-chip (SoC), is aimed at complex parallel processing workloads required for 5G endpoints and Radio Access Networks (RAN), enterprise access points and other multigigabit low latency applications.
The design is fully synthesizable, and the multithreading design allows the processors to be dynamically reconfigured as either a wide SIMD machine or divided into smaller simultaneous SIMD threads. 

CEVA has also developed a new memory subsystem with a wide 2048-bit memory bandwidth, with coherent, tightly-coupled memory to support efficient simultaneous multithreading and memory access.

"The dynamically reconfigurable multithreading and high speed design, along with comprehensive capabilities for both control and arithmetic processing, sets the foundation for the proliferation of ASICs and ASSPs for 5G infrastructure and endpoints," said Mike Demler, Senior Analyst at The Linley Group.

The first processor based on the Gen4 CEVA-XC architecture is the multicore CEVA-XC16 for 5G RAN architectures including Open RAN (O-RAN), Baseband Unit (BBU) aggregation as well as Wi-Fi and 5G enterprise access points. The CEVA-XC16 is also applicable to massive signal processing and AI workloads associated with base station operation.

The XC16 has been designed with the 3GPP release specifications in mind, building on CEVA's experience with leading wireless infrastructure vendors for their cellular infrastructure ASICs. The previous generation CEVA-XC4500 and CEVA-XC12 DSPs are used in chips in 4G and 5G cellular networks today, and the CEVA-XC16 is already in design with a leading wireless vendor for their next-generation 5G ASIC.

The XC16 can be reconfigured as two separate parallel threads running simultaneously, sharing their L1 Data memory with cache coherency, which directly improves latency and performance efficiency for PHY control processing, without the need for an additional CPU. This boosts the performance per square millimeter by 50% compared to a single-core/single-thread architecture when massive numbers of users are connected in a crowded area. This amounts to 35% die area savings for a large cluster of cores, as is typical for custom 5G base station silicon.

Other key features in the CEVA-XC16 include the latest dual CEVA-BX scalar processor units, dynamic allocation of vector unit resources to processing threads, and a scalar control architecture and tools that reduce code size by a third through dynamic branch prediction and loop optimizations alongside an LLVM-based compiler.

The XC16 introduces a new instruction set architecture for the FFT and FIR operations common in wireless systems that doubles performance, with a simple software migration path from the previous-generation CEVA-XC4500 and CEVA-XC12 DSPs.
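The workload these instructions target is easy to picture: a FIR filter is one long multiply-accumulate (MAC) loop, which is exactly the pattern DSP vector units and dedicated FIR instructions accelerate in hardware. A pure-Python illustration of the computation (not CEVA code):

```python
# A direct-form FIR filter: the multiply-accumulate (MAC) inner loop is
# the pattern that DSP vector units and FIR instructions accelerate.
# Pure-Python illustration only.

def fir(samples, taps):
    out = []
    history = [0.0] * len(taps)          # delay line, most recent first
    for x in samples:
        history = [x] + history[:-1]
        # One output sample = sum of tap * delayed-sample products
        acc = 0.0
        for h, s in zip(taps, history):
            acc += h * s                 # the MAC operation
        out.append(acc)
    return out

# 4-tap moving-average filter smoothing a step input
y = fir([0, 0, 4, 4, 4, 4], [0.25, 0.25, 0.25, 0.25])
print(y)   # [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

A vector DSP computes many of these tap products per cycle instead of one at a time, which is where the GOPS figures come from.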

"5G is a technology with multiple growth vectors spanning consumer, industrial, telecom and AI. Addressing these fragmented and complex use cases requires new thinking and practices for processors," said Aviv Malinovitch, Vice President and General Manager of the Mobile Broadband Business Unit at CEVA. "Our Gen 4 CEVA-XC architecture encapsulates this new approach, enabling never-before-seen DSP core performance through groundbreaking innovations and design. The CEVA-XC16 DSP is evidence of this and serves to substantially reduce the entry barriers for OEMs and semiconductor vendors looking to benefit from the growing 5G Capex and Open RAN network architectures."

The CEVA-XC16 is available for general licensing starting in Q2 2020. 

Tuesday, March 03, 2020

All-band IoT antenna is nearly the size of a grain of rice

By Nick Flaherty

Fractus Antenna in Barcelona has developed an antenna for Internet of Things (IoT) designs with a volume of just 21 mm³, not much more than a grain of rice.

The ONE mXTEND is designed to provide worldwide 5G and cellular IoT connectivity in a miniature, ultra-slim antenna component, avoiding the usual size restrictions. The same antenna can be used to cover selected frequency bands within the 2G, 3G, 4G and 5G standards, all in one antenna package.

This multiband miniature antenna measures 7.0 x 3.0 x 1.0 mm and enables coverage of multiple cellular bands from 824 to 5000 MHz, giving wireless designers the smallest-volume antenna for cellular IoT and 5G. It is an off-the-shelf component ready to be assembled into any wireless device just like any other chip.

Any engineer in the middle of a wireless design can also test the new ONE mXTEND by using NN’s Wireless Fast Track Service to get a ready-to-test antenna design, free of charge, within 24 hours.

Thursday, February 27, 2020

Research tool uses machine learning to predict how fast code will run

By Nick Flaherty

Researchers at MIT's CSAIL lab in the US have developed a machine-learning tool that predicts how fast computer chips will execute code from various applications.

Compilers typically use performance models that run the code through a simulation of given chip architectures and use that for the code optimisation. Developers can then go in and work on the bottlenecks that slow down the operation.

However the performance models for machine code are handwritten by a relatively small group of experts and are not necessarily completely validated, which can be an issue. This means that the simulated performance measurements often deviate from real-life results.

The team's machine learning pipeline automates this process, making it easier, faster, and more accurate. The Ithemal tool is a neural-network model that trains on labelled data in the form of “basic blocks” — fundamental snippets of computing instructions — to automatically predict how long it takes a given chip to execute previously unseen basic blocks. The results suggest this performs far more accurately than traditional hand-tuned models.

The researchers presented a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive. During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself using over 3,000 pages describing its chips’ architectures. 

“Intel’s documents are neither error-free nor complete, and Intel will omit certain things, because it’s proprietary,” says co-author Charith Mendis, a PhD student at CSAIL. “However, when you use data, you don’t need to know the documentation. If there’s something hidden you can learn it directly from the data.”

In training, the Ithemal model analyzes millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation. Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip, and will generate a single number indicating how fast the chip will execute that code. 
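The approach can be illustrated with a deliberately simplified stand-in. Ithemal itself is a neural network trained on raw instruction text; the toy below merely counts mnemonics in each labelled block and fits a per-mnemonic cycle cost by gradient descent, but it shows the same idea of learning timings from measured data rather than from hand-written models. The instruction costs and blocks are invented for the example.

```python
# Deliberately simplified stand-in for the Ithemal idea (the real model
# is a neural network over raw instruction text): learn to predict
# basic-block cycle counts from labelled examples. Each block is reduced
# to mnemonic counts and a per-mnemonic cost is fitted from the data.

from collections import Counter, defaultdict

# Hypothetical labelled data: (basic block as mnemonics, measured cycles),
# generated here from a made-up cost table (add=1, load=4, mul=6 cycles)
training = [
    (["add", "add", "mul"],    8.0),
    (["mul", "mul"],          12.0),
    (["add", "load"],          5.0),
    (["load", "load", "add"],  9.0),
]

costs = defaultdict(float)            # learned cycles per mnemonic
for _ in range(2000):                 # plain stochastic gradient descent
    for block, measured in training:
        feats = Counter(block)
        predicted = sum(costs[m] * n for m, n in feats.items())
        err = predicted - measured
        for m, n in feats.items():
            costs[m] -= 0.01 * err * n    # gradient of squared error

unseen = ["mul", "add", "load"]       # a block not in the training set
pred = sum(costs[m] for m in unseen)
print(round(pred, 1))                 # recovers roughly 6 + 1 + 4 = 11
```

A linear cost-per-mnemonic model ignores dependencies, ports and out-of-order execution, which is exactly why Ithemal uses a neural network over the raw instruction sequence instead.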

To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions — basically, the sequence of boot-up, execute, and shut down — without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks.

The researchers found Ithemal cut prediction error by 50 percent compared with traditional hand-crafted models, down to 10 percent, while the Intel performance-prediction model’s error rate was 20 percent on a variety of basic blocks across multiple different domains.

The tool should allow developers to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs, says Mendis. For instance, domain-specific architectures, such as Google’s Tensor Processing Unit used specifically for neural networks, can be analysed. “If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,” said Mendis.

“Modern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,” says co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS). “This tool is a big step forward toward fully modeling the performance of these chips for improved efficiency.”

In a paper presented at the NeurIPS conference, the team proposed a new technique to automatically generate compiler optimizations. Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal outperforms hand-crafted vectorization algorithms used in the LLVM compiler.

Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it’s not really clear why a particular model made its predictions. “Our model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we’re trying to figure out why,” said Carbin. “That’s a fine level of granularity that would be amazing for these types of tools.”

They also hope to use Ithemal to enhance the performance of Vemal even further and achieve better performance automatically.

Wednesday, February 26, 2020

Qualcomm demos 3Gbit/s WiFi 6E at 6GHz

By Nick Flaherty

Qualcomm Technologies has demonstrated the next generation of WiFi operating at 6GHz, above today's current bands.

Despite the attempt to simplify the marketing names of WiFi generations with the move to WiFi 6 away from the 802.11 a, b, g and other letter suffixes, this next generation is being called WiFi 6E.

The demo at 6GHz uses Qualcomm's FastConnect mobile connectivity subsystem and Networking Pro Series Wi-Fi Access Point platforms. 

The new approach, which is still awaiting regulatory approval for the 6GHz band, supports numerous 160 MHz channels and advanced modulation techniques to boost the data rate to 3Gbit/s. Qualcomm also favours the band as it gives the company the opportunity to add extra (non-standard) end-to-end Wi-Fi 6 features to differentiate its offerings.

“Building on our deep technology expertise and industry-proven feature superiority, Qualcomm Technologies is again poised to usher in a new era of Wi-Fi performance and capability with the addition of 6 GHz spectrum, or Wi-Fi 6E,” said Rahul Patel, senior vice president and general manager, connectivity and networking at Qualcomm Technologies. “Once the spectrum is allocated, Wi-Fi 6E is primed to solve for modern connectivity challenges and create new opportunities for the next generation of devices and experiences.”

Commercially available Wi-Fi 6 devices are now based on Qualcomm's Snapdragon 865 Mobile Platform. The latest FastConnect 6800 subsystem is capable of delivering a new class of Wi-Fi speed (approaching 1.8 Gbps) even in densely congested environments. This has support for uplink and downlink MU-MIMO (supporting up to 8 stream scenarios), OFDMA and 1024 QAM extended across 2.4 and 5GHz bands, and latency reducing optimizations. 
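The headline figures follow from standard 802.11ax arithmetic. As a rough sketch, using the specification's subcarrier counts and symbol timing rather than Qualcomm's own numbers:

```python
# Back-of-envelope 802.11ax PHY rate for one spatial stream on a 160 MHz
# channel, showing where multi-gigabit Wi-Fi 6/6E figures come from.

data_subcarriers = 1960        # 160 MHz channel (802.11ax)
bits_per_symbol  = 10          # 1024-QAM
coding_rate      = 5 / 6       # highest MCS coding rate
symbol_time_us   = 12.8 + 0.8  # OFDM symbol plus short guard interval

rate_mbps = (data_subcarriers * bits_per_symbol * coding_rate
             / symbol_time_us)
print(round(rate_mbps))        # ~1201 Mbit/s per stream; two streams
                               # exceed 2.4 Gbit/s, and more streams or
                               # vendor extensions push towards 3 Gbit/s
```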

The Networking Pro Series provides networking processing and deterministic resource allocation through multi-user algorithms for OFDMA and MU-MIMO, and has been used in 200 designs shipping or in development.


Monday, February 24, 2020

Researchers hack MobilEye camera chip with tape

By Nick Flaherty

Researchers at McAfee Advanced Threat Research (ATR) have hacked the machine learning algorithms in a MobilEye camera chip used in Tesla cars.

The team studied model hacking, research into how hackers could target and evade artificial intelligence, with a focus on the broadly deployed MobilEye camera system. This is used in over 40 million vehicles, including Tesla models that implement Hardware Pack 1.

The team looked at ways to cause misclassification of traffic signs and was able to reproduce and significantly expand upon previous research focused on stop signs. This covered both targeted attacks, which aim for a specific misclassification, and untargeted attacks, which do not prescribe what an image is misclassified as, only that it is misclassified. The team succeeded in creating extremely efficient digital attacks that could cause a sign to be misclassified.

They then used physical stickers that model the same type of perturbations, or digital changes to the original photo, which trigger weaknesses in the classifier and cause it to misclassify the target image. One targeted physical white-box attack on a stop sign caused a custom traffic sign classifier to misclassify the stop sign as an added-lane sign.

This set of stickers has been specifically created with the right combination of colour, size and location on the target sign to cause a robust webcam-based image classifier to think it is looking at an “Added Lane” sign instead of a stop sign.
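The principle behind such targeted attacks can be shown on a toy linear classifier. The McAfee work used far more sophisticated perturbations against a real CNN; the weights and "images" below are invented purely for illustration.

```python
# Toy version of a targeted misclassification attack (the McAfee work
# used far more sophisticated perturbations against a CNN). For a linear
# classifier, nudging each pixel in the direction that raises the target
# class score relative to the true class flips the prediction.

def classify(weights, image):
    scores = {label: sum(w * p for w, p in zip(ws, image))
              for label, ws in weights.items()}
    return max(scores, key=scores.get)

weights = {                      # hypothetical 4-pixel "sign" classifier
    "stop":       [1.0,  0.5, -0.2, 0.1],
    "added_lane": [0.2, -0.1,  0.9, 0.4],
}
image = [0.9, 0.8, 0.1, 0.2]     # reads clearly as a stop sign

assert classify(weights, image) == "stop"

# Targeted attack: perturb each pixel by epsilon in the sign of
# (target weight - true weight), the gradient of the score margin.
eps = 0.4
delta = [eps if wt > ws else -eps
         for wt, ws in zip(weights["added_lane"], weights["stop"])]
adversarial = [p + d for p, d in zip(image, delta)]

assert classify(weights, adversarial) == "added_lane"
```

A physical sticker plays the role of the perturbation: a small, carefully placed change that pushes the classifier's score for the wrong class past the score for the right one.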

The team then repeated the stop sign experiments on traffic speed limit signs. A physical targeted black-box attack on a speed limit 35 sign resulted in the sign being misclassified as a 45-mph sign, an attack that also transfers to state-of-the-art CNNs, namely Inception-V3, VGG-19 and ResNet-50.

After testing in the lab using a high resolution webcam, the team took the technology out onto the road. A 2016 Model “S” and a 2016 Model “X” Tesla with MobilEye's EyeQ3 camera chip were tested. The adversarial stickers convinced the Tesla Head Up Display (HUD) that the speed limit was 85mph. 

These adversarial stickers cause the MobilEye on Tesla Model X to interpret the 35-mph speed sign as an 85-mph speed sign

The lab tests developed attacks that were resistant to changes in angle, lighting and even reflectivity to emulate real-world conditions. The attack was reduced from four adversarial stickers, in the only locations able to confuse the webcam, all the way down to a single piece of black electrical tape, approximately two inches long, extending the middle of the 3 on the traffic sign. This robust, inconspicuous black sticker achieved a misclassification from the Tesla Model S, used for Speed Assist when activating TACC (Traffic-Aware Cruise Control).

Even to a trained eye, this hardly looks suspicious or malicious, and many who saw it did not realise the sign had been altered at all. This tiny piece of tape was all it took to make the MobilEye camera’s top prediction for the sign 85 mph.

The vulnerability comes from the fact that the Tesla Automatic Cruise Control (TACC) can use speed limit signs as input to set the vehicle speed. A software release for TACC shows that the data is fed into the Speed Assist feature, which was rolled out by Tesla in 2014.

McAfee ATR’s lead researcher on the project, Shivangee Trivedi, partnered with another vulnerability researcher and Tesla owner Mark Bereza to link the TACC and Speed Assist technologies. On approaching the hacked sign, the Tesla started speeding up to the new speed limit.

The number of tests, conditions, and equipment used to replicate and verify misclassification on this target were published by McAfee in a test matrix.

The team points out that this was achieved on an earlier version (Tesla Hardware Pack 1, MobilEye version EyeQ3) of the MobilEye camera platform. A 2020 vehicle implementing the latest version of the MobilEye camera did not appear to be susceptible to this attack vector or misclassification. The newest models of Tesla vehicles no longer implement MobilEye technology, and do not currently appear to support traffic sign recognition.

However, the vulnerable version of the camera continues to account for a sizeable installed base among Tesla vehicles.

McAfee has published a video of the testing.

Wednesday, February 19, 2020

Energy efficient silicon ships for edge AI

By Nick Flaherty

Eta Compute has shipped the first production version of its ECM3532 embedded AI processor.

The multicore chip uses a patented technology called Continuous Voltage Frequency Scaling (CVFS) for power consumption of microwatts for many sensing applications.

The Neural Sensor Processor (NSP) is designed for local machine learning in always-on image and sensor applications at the edge of the Internet of Things (IoT). The self-timed CVFS architecture automatically and continuously adjusts the internal clock rate and supply voltage to maximize energy efficiency for the given workload, typically 100μW.

The chip combines an ARM Cortex-M3 processor with 256KB SRAM and 512KB flash and a 16bit dual-MAC DSP with 96KB of dedicated SRAM, both with CVFS, alongside I/O, peripherals and a machine learning software development platform. A Neural Development SDK with a TensorFlow interface provides the ML model integration.
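The arithmetic such a low-power DSP runs for inference can be sketched simply: weights and activations are quantised to int8, the dot product is computed entirely with integer multiply-accumulates, and the accumulator is rescaled once at the end. This is an illustration of the general technique, not Eta Compute's SDK; the values are invented for the example.

```python
# Sketch of the fixed-point multiply-accumulate arithmetic a DSP like
# the ECM3532's dual-MAC unit performs for neural network inference
# (not Eta Compute's actual SDK): float weights are quantised to int8,
# the dot product runs entirely in integers, and the result is rescaled
# once at the end.

def quantize(values, scale):
    return [max(-128, min(127, round(v / scale))) for v in values]

weights     = [0.5, -0.25, 0.12, 1.0]
activations = [0.2,  0.4,  0.8,  0.1]
w_scale, a_scale = 0.01, 0.01          # chosen so values fit in int8

w_q = quantize(weights, w_scale)       # [50, -25, 12, 100]
a_q = quantize(activations, a_scale)   # [20, 40, 80, 10]

acc = sum(w * a for w, a in zip(w_q, a_q))   # integer MACs only
result = acc * w_scale * a_scale             # single rescale at the end

exact = sum(w * a for w, a in zip(weights, activations))
print(result, exact)    # both ~0.196; these values quantise exactly
```

Integer MACs on narrow data are far cheaper in energy than floating-point operations, which is a large part of how always-on inference fits in a microwatt budget.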

“Our Neural Sensor Platform is a complete software and hardware platform that delivers more processing at the lowest power profiles in the industry. This essentially eliminates battery capacity as a barrier to thousands of IoT consumer and industrial applications,” said Ted Tewksbury, CEO of Eta Compute. “We are excited to see the first of many applications our customers are developing come to market later this year.”

“We believe that power consumption, latency and data generation combined with RF transmission are all factors limiting many sensing applications," said Jim Feldhan, president and founder at Semico Research. "It’s great seeing Eta Compute’s platform coming into the market. Their technology is orders of magnitude more power-efficient than any other technology I have seen to date and it will certainly make AI at the edge a reality.”

“It’s exciting to see innovative products for low power machine learning being launched at tinyML where experts from the industry, academia, start-ups and government labs share the innovations to drive the whole ecosystem forward,” said Pete Warden, Google Researcher and General Co-chair of the tinyML organization.

“We are amazed by the ECM3532 and its efficiency for machine learning in sensing applications,” said Zach Shelby, CEO of Edge Impulse. “It is an ideal fit for our TinyML lifecycle solution that transforms developers’ abilities to deploy ML for embedded devices by gathering data, building a model that combines signal processing, neural networks and anomaly detection to understand the real world.”

“Himax Imaging’s HM01B0 and new HM0360 are among the industry’s lowest power image sensors, with autonomous operation modes and advanced features to reduce power, latency and system overhead. Our image sensors can operate in the sub-mW range and, when paired with low-power multi-core processors such as Eta Compute’s ECM3532, developers can quickly deploy edge devices that perform image inference under 1mW,” said Amit Mittra, CTO of Himax Imaging.

The ECM3532 is packaged in a 5 x 5mm, 81-ball BGA.

Wednesday, January 15, 2020

Wind River buys embedded security specialist Star Lab

By Nick Flaherty

Highlighting the vital importance of security in embedded systems, Wind River has acquired a US company that specialises in software for Linux cybersecurity and anti-tamper, virtualization, and cyber resiliency.

Star Lab has developed a system protection and anti-tamper toolset for Linux, a secure open source-based hypervisor, and a secure boot solution: 

  • Security Suite: The suite offers robust Linux cybersecurity and anti-tamper capabilities for operationally-deployed Linux systems and distributions.
  • Embedded Hypervisor: Designed specifically for use in open, hostile computing environments, the Xen-based hypervisor offers a secure open source virtualization solution for embedded mission systems.
  • Secure Boot: A measured-boot solution ensures that firmware and boot code are legitimate and have not been maliciously modified or manipulated.
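The measured-boot idea in the list above can be sketched in a few lines: hash the next boot stage and compare the digest against a known-good measurement provisioned at manufacture, refusing to transfer control on a mismatch. This is a generic illustration of the technique, not Star Lab's implementation; the image and digest values are made up.

```python
# Minimal sketch of a measured-boot check: hash the next boot stage
# and compare it against a stored known-good measurement before
# transferring control. Values here are illustrative only.
import hashlib


def measure(image: bytes) -> str:
    """Compute the stage's measurement (SHA-256 digest)."""
    return hashlib.sha256(image).hexdigest()


def verify_stage(image: bytes, expected_digest: str) -> bool:
    """Refuse to boot a stage whose measurement does not match."""
    return measure(image) == expected_digest


firmware = b"example firmware image"
golden = measure(firmware)  # provisioned at manufacture

assert verify_stage(firmware, golden)              # untampered image boots
assert not verify_stage(firmware + b"X", golden)   # tampered image is rejected
```

A production measured boot would typically also extend each measurement into a TPM platform configuration register, so that later software can attest to the whole boot chain rather than just halting on a mismatch.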
Historically, embedded devices have functioned in isolation, deployed to environments minimally connected to the outside world. However, with the emergence of ubiquitous connectivity paradigms such as IoT and remotely monitored/autonomously controlled industrial and transportation systems, today's cyber threat landscape is rapidly evolving. Central to this evolution is the ease with which a focused and resourced adversary can acquire and reverse engineer deployed embedded systems. In addition to modification or subversion of a single specific device, hands-on physical access also aids an attacker in discovery of remotely-triggerable software vulnerabilities.

"The Star Lab offering is a perfect complement and extension to the Wind River portfolio, and addresses a growing trend where Linux cybersecurity and anti-tamper capabilities are becoming a requirement across industries such as aerospace, automotive, defense, and industrial," said Jim Douglas, president and CEO of Wind River. "Our customers want to create security-based differentiation in their product lines using a multi-layer security approach; by combining the security- and Linux-related strengths of both companies, we believe we will be able to deliver immediate increased value and a competitive advantage."

The Star Lab software is developed with a secure-by-design engineering philosophy, leveraging design patterns that reduce attack surface, isolate critical functionality and contain or mitigate even successful attacks. Its products conform with NIST 800-53 technical controls for US federal information systems and consistently pass independent verification and validation testing.

"With advances in technology far outpacing corresponding advances in security, the Star Lab security philosophy is to assume compromise and design a system that prioritizes protectability, resiliency, and recoverability," said Irby Thompson, CEO of Star Lab. "Like Wind River, the Star Lab portfolio was launched with the sole focus of building products that are uncompromising in their ability to protect mission-critical systems. Becoming part of the Wind River family will not only strengthen our value and offerings to our current aerospace and defence clients, but also allow us to scale to new opportunities faster, as well as expand our reach into new vertical markets."

Monday, January 13, 2020

Bluetooth module deal aims to boost battery life

By Nick Flaherty

Atmosic Technologies has teamed up with IoT module developer Tonly Electronics on a Bluetooth Low Energy (BLE) module to extend battery life.

The TBMO2 module will use Atmosic’s M2 system-on-chip (SoC), which implements 'Lowest Power Radio' and On-Demand Wake Up technologies, to enable significantly longer battery life for IoT applications. 
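The battery-life gain from wake-on-demand operation is easiest to see as an average-current calculation: a radio that sleeps at microamp levels and only wakes briefly draws a tiny fraction of the current of one that listens continuously. The sketch below uses hypothetical currents and a CR2032-class cell for illustration; they are not Atmosic or Tonly datasheet figures.

```python
# Back-of-the-envelope battery life for a duty-cycled BLE sensor node.
# Average current = sleep current + (active current * active fraction);
# battery life = capacity / average current.
# All figures are illustrative assumptions, not vendor datasheet values.


def average_current_ma(sleep_ua, active_ma, active_ms_per_s):
    """Average current (mA) for a node active for active_ms_per_s each second."""
    active_fraction = active_ms_per_s / 1000.0
    return sleep_ua / 1000.0 * (1 - active_fraction) + active_ma * active_fraction


def battery_life_days(capacity_mah, avg_ma):
    """Battery life in days for a given cell capacity and average draw."""
    return capacity_mah / avg_ma / 24.0


# Always-listening radio: 5 mA continuous from a 220 mAh coin cell.
always_on = battery_life_days(220, 5.0)

# Wake-on-demand: 1 uA sleep, 5 mA active for 10 ms each second.
duty_cycled = battery_life_days(220, average_current_ma(1.0, 5.0, 10))

print(round(always_on, 1), round(duty_cycled))
```

With these assumed figures, duty-cycling turns a battery life of under two days into roughly half a year, which is why sleep current and wake-up behaviour dominate BLE node design.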

“We built our M2 solutions from the ground up to revolutionize the IoT market with ‘forever battery life.’ We are excited to enable device makers to more quickly and easily develop smart devices from smart home products to wearables to consumer electronics with our cutting-edge low power and wake up technologies,” said David Su, CEO of Atmosic. “Our collaboration with Tonly for the TBMO2 module is another step forward to dramatically reducing IoT devices’ dependence on batteries.”

“The TBMO2 module is a true game changer for the IoT market, providing a compact, highly integrated module for the development of next generation IoT devices with extended battery life,” said Tonly’s SVP, David Huang. “This new solution continues our commitment to providing companies with total wireless solutions to bring consumers the latest features like voice computing and AI.”

The module is aimed at wearables, smart home products (including lighting, plugs and switches), consumer electronics, smartphones and tablets, medical devices, logistics and tracking sensors, building and environment monitoring applications, human-machine interface devices, RFID tags and security badges and more. 

Tonly Electronics Holdings in Hong Kong develops audio and video products as well as wireless smart interconnectivity products.

ARM's digital twin deal ... Sony builds an electric car ... Dialog looks to switched capacitor designs

Power news this week from eeNews Europe By Nick Flaherty

. ARM deal boosts digital twin design of cars

. Maxim, Monolithic Power battle over DC-DC converter patents

. ABB rolls out second phase of high power fast charger network across Europe

. TT Electronics buy boosts US defence power supply business

. Sony shows concept electric vehicle platform

. Energy storage system uses solid state battery

. Dialog Semi looks to switched capacitor for DC-DC converters

. 40W DC-DC converter targets industrial use

. SMT DC-DC converter series with 1W single and dual outputs

. PMIC shrinks power supplies for displays


. Easily Analyze EMI problems with oscilloscopes

Flex teams to boost supply chain for IoT sensor nodes

By Nick Flaherty

Contract manufacturer Flex has teamed with QuickLogic and Infineon Technologies on a design kit and a System-in-Package (SiP) for high volume production of Internet of Things (IoT) devices. 

The 12 x 12mm SiP is part of the FLEXino Sensor Fusion Development Kit for rapid prototyping of sensor fusion IoT products requiring audio, pressure and motion sensing, Bluetooth and WiFi capabilities. The aim is to move the designs quickly into volume production.

The development kit contains an ESP32 controller board with Bluetooth and WiFi connectivity and a sensor fusion daughter board, created in collaboration with QuickLogic and Infineon. The daughter board integrates QuickLogic’s EOS S3 SoC platform with Infineon’s DPS310 digital barometric pressure sensor and IM69D130 digital MEMS microphone in addition to a 6-axis IMU and a 64Mb SPI flash.

The daughter board has been miniaturised into the SiP using Flex's proprietary packaging processes, which frees the selected host processor from the sensor fusion workloads. The proprietary SiP form factor can be customised for different sensor fusion applications and integrates easily into new or existing products.

The kit is compatible with the Adafruit Feather ecosystem and the hardware configuration addresses several use cases, as the sensor fusion algorithm is software configurable.
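As an example of the kind of sensor fusion such a board performs, a complementary filter blends the gyroscope's fast but drifting angle estimate with the accelerometer's noisy but absolute one. This is a generic textbook algorithm for illustration, not the code shipped with the FLEXino kit.

```python
# Minimal complementary filter: fuse a fast-but-drifting gyro integral
# with a noisy-but-absolute accelerometer angle to track tilt.
# Generic textbook example; not the algorithm shipped on the EOS S3.
import math


def complementary_filter(angle, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer angle."""
    return alpha * (angle + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg


def accel_angle(ax, az):
    """Tilt angle (degrees) from accelerometer x/z components."""
    return math.degrees(math.atan2(ax, az))


# Device held steady at 30 degrees: gyro reads ~0, accel reads 30.
angle = 0.0
for _ in range(500):            # 5 s of samples at 100 Hz
    angle = complementary_filter(angle, 0.0, 30.0, dt=0.01)
print(round(angle, 1))          # converges towards 30
```

The blend factor alpha sets the trade-off: high alpha trusts the low-noise gyro over short timescales, while the small accelerometer weight slowly corrects the drift, which is why the estimate converges to the true angle.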

“The FLEXino Sensor Fusion Development Kit and SiP are designed to help bring a variety of next-generation and existing IoT devices to market faster,” said Dave Gonsiorowski, vice president of Innovation Services & Solutions at Flex. “A significant challenge with IoT design today is the availability of flexible integrated development kits that easily transition to high-volume manufacturing. We are proud to have collaborated with QuickLogic and Infineon and provide a solution that enables customers across several industries to design products at unprecedented speeds.”

“Flex and QuickLogic worked closely together to integrate our EOS S3 SoC ultra-low power voice and sensor processing platform with their FLEXino Sensor Fusion Development Kit and SiP,” said Brian Faith, chief executive officer of QuickLogic. “The development kit includes a sensor fusion daughter board featuring the EOS S3 SoC, and the SiP version miniaturizes the same functionality into a single package that can be used for volume production. By working with Flex and Infineon, we are able to deliver sensor fusion customers a complete and seamless development path.”

“The collaboration is another step by Infineon to make the products we use every day smarter using our advanced sensor and sensor fusion software technologies,” said Philipp Von Schierstaedt, vice president and general manager, Radio Frequency and Sensors at Infineon Technologies. “The development kit combined with Infineon’s digital sensors and microphones delivers seamless, effortless interactions between people and the next generation of IoT devices across multiple industries.”

The FLEXino Sensor Fusion Development Kit is available now, and the System-in-Package is scheduled to be available in the first quarter of 2020. 

Sunday, January 12, 2020

MicroEJ teams with NXP for wearables

By Nick Flaherty

MicroEJ has teamed up with NXP to put its virtualisation technology on small processors for wearable devices. 

The deal means the MicroEJ Virtual Execution Environment (VEE) can be used on the latest i.MX RT devices that are aimed at appliances and wearables.

Constrained by a small, potentially round screen and high pressure on electronics costs, wearable manufacturers constantly juggle the quality of the graphical user interface against the size of the processor. MicroEJ’s virtualisation software makes design easier and faster: programmers develop their product on virtual devices with fast iteration cycles, reuse user interface components, and use cutting-edge language technologies to design sharp vector graphics and fonts, smooth scrollbars and animations, and easy-to-manage activity applications for a best-in-class user experience.

MicroEJ VEE extends across NXP’s extensive MCU and microprocessor portfolio, from the LPC and Kinetis MCUs to the i.MX RT crossover MCUs and i.MX applications processors, optimised for each specific design. With a memory footprint of under 50KB, it saves energy through hardware accelerators, multiple cores, smart RAM optimization, low deep-sleep consumption and tight BSP integration.

The combined solution includes an application store to hold, reuse, share and deliver software components (virtual devices, hardware device references and applications) that are compatible across NXP’s portfolio. It allows customers to create a secure, app-oriented connected ecosystem, speed up their time to market and scale up product functionality to keep pace with customer expectations.

“MicroEJ technology is currently used on millions of IoT devices across the world, and the demand is expanding amongst manufacturers,” said Dr. Fred Rivard, CEO of MicroEJ. “By using the combined solutions from both NXP and MicroEJ, a customer can now achieve highly desirable IoT products, produced in less time, at a very attractive price.”

“MicroEJ’s virtualisation technology with NXP’s i.MX RT crossover MCU brings a highly integrated solution for wearable and appliance applications,” said Joe Yu, Vice President and General Manager of the Low-Power MPU & MCU Product Line at NXP Semiconductors. “This collaboration helps free customers from the complexity of design, allowing them to spend more time in other areas and accelerate their time to market.”

Wednesday, January 08, 2020

Murata uses Google TPU for smallest AI edge module

By Nick Flaherty

Murata Electronics has launched the world’s smallest artificial intelligence (AI) module using a chip from Google.

The Coral Accelerator Module uses Google’s Edge TPU ASIC for machine learning processing at the edge. Miniaturization is key, as all board space must be optimized to achieve robust functionality in space-constrained designs. The result of the collaboration is a module that speeds up the algorithmic calculations required to execute AI.

“The Coral Accelerator Module is a brilliant offering, and Murata delivers an important building block for the AI Edge ecosystem. This module is a game-changer in enabling the next generation of intelligent devices. The trust Google placed in our technology, process, and material leadership speaks volumes about the robustness of Murata’s Multi-Chip Module process,” stated Sean Kim of Murata’s Connectivity product marketing group.

“Coral enables new applications of on-device AI across many industries, from manufacturing to healthcare to agriculture. Working with Murata to make the Coral Accelerator Module – with Google Edge TPU – available in a robust, solderable, and easy-to-integrate package means that more customers can include Coral intelligence inside their products in more environments,” said Vikram Tank, Product Manager for Coral.

The goal of Coral is to enable AI applications running at the device level to quickly move from prototype to production. Coral provides the complete toolkit of hardware components, software tools, and pre-compiled models for building devices with local AI. The AI module is an integral part of the fully integrated Coral platform, which can be implemented in a myriad of applications across numerous industries.

Murata worked closely with Coral to ensure that the AI module helped enable the flexibility, scalability, and compatibility for integration into applications deploying the Coral technology. Toward this end, Murata leveraged its global resources and decades of R&D in the areas of high-density design and component integration.

The Coral Accelerator Module will be available for sale in early 2020 through the Coral website.