Access the latest quantum technology

Quantum technology in Bristol and Bath - find out more about how you can access the commercialisation of quantum technology for sensing and security

Monday, June 24, 2019

Doubling power density of lithium batteries and solid state cells ... Siemens sells off electric aircraft motor business

Power news this week by Nick Flaherty at eeNews Europe

. Siemens sells electric aircraft propulsion business to Rolls-Royce

. Pure electric cars to accelerate the demise of plug-in hybrids

. Schneider Electric launches its first US smart factory

. Carbon nanotubes breakthrough doubles energy density of batteries

. Imec doubles energy density of solid state battery


. 800V boost for bus converter module

. Encapsulated AC-DC power supply series for PCB mounting

. 650W modular AC-DC power supply shrinks in size


. NI-Key Considerations for Powertrain HIL Test

. Forward or Flyback? Which is Better?

. Rapid generation of LDO variants

Raspberry Pi 4 adds USB-C with Bluetooth 5 and 4Kp60 video

By Nick Flaherty

The Raspberry Pi 4 has gone on sale, offering a range of memory options.

The $35 board has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU from Broadcom, the BCM2711, providing three times the performance of the previous version, with 1GB, 2GB, or 4GB of LPDDR4 SDRAM. There is full-throughput Gigabit Ethernet with dual-band 802.11ac wireless networking and Bluetooth 5.0 as well as two USB 3.0 and two USB 2.0 ports.
There is dual monitor support at resolutions up to 4K with VideoCore VI graphics, supporting OpenGL ES 3.x, 4Kp60 hardware decode of HEVC video and complete compatibility with earlier Raspberry Pi products.

The 1GB version costs $35, with the 2GB at $45 and the 4GB at $55. This is the first time that the Raspberry Pi Foundation has offered different memory options.

The form factor has been updated with USB-C for the power connector, which supports an extra 500mA of current, ensuring a full 1.2A for downstream USB devices, even under heavy CPU load.

To accommodate dual display output within the existing board footprint, the type-A (full-size) HDMI connector has been replaced with a pair of type-D (micro) HDMI connectors. The Gigabit Ethernet magjack has moved to the top right of the board, from the bottom right, greatly simplifying PCB routing. The 4-pin Power-over-Ethernet (PoE) connector remains in the same location, so Raspberry Pi 4 remains compatible with the PoE HAT.

The Ethernet controller on the main SoC is connected to an external Broadcom PHY over a dedicated RGMII link, providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, and providing a total of 4Gbps of bandwidth, shared between the four ports.
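As a quick sanity check on that 4Gbps figure: a PCI Express Gen 2 lane signals at 5 GT/s with 8b/10b line encoding, so only eight of every ten transferred bits carry data. (The signalling rate and encoding overhead are standard PCIe Gen 2 parameters, not figures from the announcement.)

```python
# PCIe Gen 2 per-lane bandwidth: 5 GT/s line rate with 8b/10b encoding,
# so 8 of every 10 bits on the wire are payload.
GEN2_RATE_GT_S = 5.0          # transfers per second, in billions
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding overhead

usable_gbps = GEN2_RATE_GT_S * ENCODING_EFFICIENCY
print(f"Usable bandwidth per Gen 2 lane: {usable_gbps:.0f} Gbps")  # 4 Gbps
# Shared across the four USB ports, so a single port can see the full
# 4 Gbps only when the others are idle.
```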

All three connectors on the right-hand side of the board overhang the edge by an additional millimetre, with the aim of simplifying case design. In all other respects, the connector and mounting hole layout remains the same, ensuring compatibility with existing HATs and other accessories.

The board has a radically overhauled operating system, based on the forthcoming Debian 10 Buster release. This brings numerous behind-the-scenes technical improvements, along with an extensively modernised user interface, and updated applications including the Chromium 74 web browser. 

One notable step forward is that for Raspberry Pi 4 the legacy graphics driver stack used on previous models has been retired. Instead, the Mesa “V3D” driver developed by Eric Anholt at Broadcom over the last five years provides OpenGL-accelerated web browsing and desktop composition, and the ability to run 3D applications in a window under X. It also eliminates roughly half of the lines of closed-source code in the platform.

Connector and form-factor changes bring with them a requirement for new accessories. Kinneir Dufort and manufacturers T-Zero have developed an all-new two-part case, priced at $5.

Good, low-cost USB-C power supplies (and USB-C cables) are surprisingly hard to find, so Ktec has developed a suitable 5V/3A power supply; this is priced at $8, and is available in UK (type G), European (type C), North American (type A) and Australian (type I) plug formats.

Altair shows first NB-IoT at 450MHz using connected tractors

By Nick Flaherty

Altair Semiconductor has shown the first implementation of LTE CAT-M/NB-IoT running at 450MHz, giving lower power and longer range.
The ALT1250 chipset is powering the Asiatelco LM66 IoT module, which provides LTE connectivity for a range of connected devices, and has been developed to provide agricultural solutions for large-scale farming in Brazil.

Operating on the 450 MHz LTE spectrum and using Ericsson's LTE eNBs, Altair's cellular IoT chipset enables agricultural companies to use smart sensor and connected tractor applications to optimize farming practices. The solution uses location tracking, driver behaviour monitoring, and fuel consumption optimization.
“Altair is looking to revolutionize cellular IoT connectivity across Brazil. The solution was field tested and demonstrated the potential of 450 MHz’s frequency reach in agriculture use cases. It will certainly enable an exciting range of transformative applications in rural areas”, said Paulo Bernardocki, Head of Solutions Radio, Ericsson LATAM South.

Altair's highly integrated dual-mode cellular IoT chipset, ALT1250, is the only available CAT-M and NB-IoT solution trialed to run on 450 MHz. The chipset supports Global LTE bands within a single hardware design and supports both satellite and cellular positioning tracking. The Altair-powered LM66 modules are capable of connecting and delivering data over the LTE450 network as well.

"There's a global trend resulting in a significant amount of 450 MHz spectrum being re-allocated for new use cases on LTE450 networks, as legacy 2G and 3G networks are being retired," said Igor Tovberg, Altair's Director of Product Marketing. "450 MHz is ideal for agricultural IoT applications, providing superior network coverage essential for the wide and often remote farming locations that exist in Brazil."
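The coverage advantage of 450 MHz can be illustrated with the standard free-space path loss formula, which grows with the square of frequency. The 1800 MHz comparison point and the 10 km distance below are illustrative assumptions, not figures from Altair.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula; d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Compare 450 MHz against a common mid-band LTE frequency at the same range.
loss_450 = fspl_db(10, 450)
loss_1800 = fspl_db(10, 1800)
print(f"450 MHz : {loss_450:.1f} dB")
print(f"1800 MHz: {loss_1800:.1f} dB")
print(f"Advantage for 450 MHz: {loss_1800 - loss_450:.1f} dB")  # ~12 dB
```

At the same link budget, that ~12 dB difference corresponds to roughly four times the line-of-sight range, which is why the band suits wide rural deployments.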

Altair is part of the Sony Group following its acquisition in early 2016.


Friday, June 07, 2019

First platform certified for 5nm design at Samsung

By Nick Flaherty

Samsung Electronics has certified the Synopsys Fusion Design Platform for its 5nm Low-Power Early (LPE) process with Extreme Ultraviolet (EUV) lithography technology. 

The AI-enhanced, cloud-ready Fusion Design Platform was certified with the 64bit Arm Cortex-A53 and Cortex-A57 processors, which are based on the Armv8 architecture. This is a full flow for the next wave of semiconductor designs, including high-performance computing (HPC), automotive, 5G, and artificial intelligence (AI) market segments.

"Through our 7-nanometer product shipment and the successful completion of 5-nanometer process development, we've proven our capabilities in EUV-based nodes. Using the Synopsys Fusion Design Platform, our mutual customers will be able to design the most competitive 5LPE SoC products for the full entitled performance and low power applications," said JY Choi, vice president of Foundry Design Technology Team at Samsung Electronics. "Synopsys continues to be our vendor of choice for collaboration on new node development and enablement, so our foundry customers can confidently ramp their designs to volume production in all market segments, including automotive, AI, high-performance computing, and mobile."
The platform includes the Fusion Compiler RTL-to-GDSII solution, along with IC Compiler II place-and-route featuring EUV single-exposure-based routing, optimized 5LPE design rule support, single-fin variant-aware legalization, and via stapling to ensure maximum utilization while minimizing dynamic power.

It also includes Design Compiler Graphical and Design Compiler NXT RTL synthesis for correlation, congestion reduction, pin access-aware optimization, 5LPE design rule support, and physical guidance for IC Compiler II.

IC Validator provides cloud-optimized physical signoff, including DRC, LVS, and fill, with innovative Explorer DRC and Live DRC technologies for enhanced productivity. PrimeTime timing signoff provides near-threshold ultra-low voltage variation modeling, via variation modeling, and placement rule-aware engineering change order (ECO) guidance.

StarRC parasitic extraction provides EUV single-pattern-based routing support and new extraction technologies, such as coverage-based via resistance and vertical gate resistance modeling, while ANSYS RedHawk drives EM/IR analysis and optimization within place-and-route.

For test, TestMAX DFT and TestMAX ATPG provide FinFET-based, cell-aware, and slack-based transition testing for higher test quality, while Formality equivalence checking provides UPF-based equivalence checking with state transition verification.

The platform is in active production usage at market-leading companies. The AI-enhanced platform boosts designer productivity by speeding up computation-intensive analyses and uses past learning to improve the results. It runs on major public cloud providers' and Synopsys-hosted infrastructure.

"Our long and successful collaboration with Samsung Foundry has enabled our mutual customers to adopt Synopsys' market-leading solutions early, certified on Samsung's most advanced node," said Sassine Ghazi, general manager of Synopsys' Design Group. "Combining the 5LPE benefits in power, performance, and gate density with the Synopsys Fusion Design Platform QoR and TTR advantages will enable our mutual customers to differentiate their next-generation products. Synopsys continues to focus on providing the best solutions for joint customers."

Thursday, May 30, 2019

Arduino shield adds AI capability for IoT developments

By Nick Flaherty

Lattice Semiconductor has launched a development kit for implementing AI in devices using vision and sound as sensory inputs. The HM01B0 UPduino Shield is a rapid prototyping board, in the Arduino form factor, with the components designers need to quickly develop always-on, low power smart IoT devices.

The $50 kit consists of a Lattice iCE40 UltraPlus FPGA-based UPduino 2.0 board and a Himax HM01B0 image sensor module.

Optimized for IoT devices and embedded AI applications, the iCE40 UltraPlus FPGA consumes as little as 75 µW in sleep mode and draws 1-10 mA when active. It also supports I/O port configuration flexibility, including the ability to combine multiple signals for transmission over one port. This helps designers easily investigate and experiment with different designs and accelerate product prototyping. Proof-of-concept demos such as human presence detection and hand gesture detection are included in this modular hardware platform to further simplify and accelerate vision-based AI systems.
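Those power figures matter because always-on edge devices spend most of their time asleep. A back-of-the-envelope sketch using the quoted 75 µW sleep power and the mid-point of the 1-10 mA active range shows why; the supply voltage, duty cycle, and battery capacity below are hypothetical assumptions, not Lattice figures.

```python
# Rough battery-life estimate for a duty-cycled always-on sensor.
# Assumptions (not from Lattice): 3 V supply, 1% active duty cycle,
# 230 mAh coin cell. Sleep/active figures are from the iCE40 UltraPlus spec
# quoted in the article (75 µW sleep, 1-10 mA active).
V_SUPPLY = 3.0                        # volts (hypothetical)
SLEEP_CURRENT_A = 75e-6 / V_SUPPLY    # 75 µW sleep power -> current at 3 V
ACTIVE_CURRENT_A = 5e-3               # mid-range of the 1-10 mA active figure
DUTY_CYCLE = 0.01                     # active 1% of the time (hypothetical)
BATTERY_MAH = 230                     # typical coin-cell capacity (assumption)

avg_current_a = DUTY_CYCLE * ACTIVE_CURRENT_A + (1 - DUTY_CYCLE) * SLEEP_CURRENT_A
hours = (BATTERY_MAH / 1000) / avg_current_a
print(f"Average current: {avg_current_a * 1e6:.0f} µA")
print(f"Estimated battery life: {hours / 24:.0f} days")
```

Even this crude model shows the sleep current, not the active burst, dominating the average draw at low duty cycles.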

“The Upduino Shield development kit offers substantial flexibility to product developers by making the addition of vision-based AI support to IoT-connected devices quick and easy,” said Peiju Chiang, Product Marketing Manager at Lattice Semiconductor. “And because the kit uses our low-power iCE UltraPlus FPGA, adding AI support to IoT products is possible without significant increase in product power consumption, a key requirement for IoT devices operating at the network Edge.”

Key features of the UPduino Shield development kit include:
  • UltraPlus FPGA with 5.3K LUTs, 1 Mb SPRAM, 120 Kb DPRAM, 8 Multipliers
  • FTDI FT232H USB to SPI Device for FPGA programming
  • 12 MHz Crystal Oscillator Clock Source
  • 34 GPIOs on 0.1” headers for connecting to the adapter board
  • SPI Flash, RGB LED, 3.3 V and 1.2 V voltage regulators
  • Can be used with the new Lattice Radiant Design Software 1.1

Monday, May 27, 2019

Siemens teams for cybersecurity ... UK battery centre boost ... Electric jet's first flight

Power news this week by Nick Flaherty at eeNews Europe

. Siemens teams with Alphabet on cybersecurity

. UK battery centre sees £28m boost

. Electric jet makes maiden flight


. Graphene battery sensor wins pan-European power competition

. Digital management modules boost circuit breaker performance

. High current shielded power inductors boost efficiency


. Flexible power meter range simplifies installation

. Single chip PMIC for UHD TV boosts output power

. High current shielded power inductors boost efficiency


. Analyze EMI problems with oscilloscopes from Rohde & Schwarz

. Forward or Flyback? Which is Better?

Microsoft joins energy harvesting alliance to create digital twins for the IoT

By Nick Flaherty

The EnOcean Alliance, a group of companies built around energy harvesting technologies, has received a boost with the addition of Microsoft. This will allow the creation of digital twins of sensors and actuators within the Azure cloud service.

“As a member of the EnOcean Alliance, we encourage innovation and standardization in intelligent building control," said Thomas Frahler, Business Lead Internet of Things at Microsoft Germany. "We are empowering businesses to adapt digital technologies quickly and build their own digital competencies to offer new services to their customers by sharing our expertise as a technology leader, sharing experiences of our own transformation journey and providing advanced platforms and tools. With our IoT Platform we are simplifying the entry for companies into the Internet of Things, regardless where they currently are at and independent of cloud, software or devices.”

“IoT brings us huge opportunities to improve all of our lives including comfort, security, energy efficiency and cost savings," said Graham Martin, Chairman and CEO of the EnOcean Alliance. "To do this in buildings we need to digitalize building spaces to provide the necessary data required as well as powerful AI analytic and representation tools. I see this therefore as a perfect marriage with EnOcean wireless and maintenance free interoperable sensors from multi-vendors providing the necessary data and Microsoft offering the perfect platform solutions to analyze and optimize our buildings. We are very excited to welcome Microsoft as a new board and promoter member of the EnOcean Alliance. With Microsoft’s long-standing expertise in cloud-based services and IoT, we have gained a very strong partner in the ecosystem. We are looking forward to a successful cooperation to build the future of IoT,” 

There are 5,000 interoperable multi-vendor sensors and actuators for intelligent buildings from the 400 members of the Alliance. These are used in over 1,000,000 buildings worldwide, and the energy harvesting approach means the sensors are easy to install and maintenance-free, optimizing the use of buildings, creating new service models and making buildings more flexible and more energy-efficient.

Microsoft offers Azure Digital Twins as the IoT platform that enables comprehensive models of the physical environment and the interactions between people, places and devices. The first proofs of concept are currently in progress.
The EnOcean Alliance aims to internationalize energy harvesting wireless technology around the international standard ISO/IEC 14543-3-1X for wireless solutions with ultra-low power consumption and energy harvesting.

Thursday, May 23, 2019

Superconductivity moves towards room temperature

By Nick Flaherty

Researchers in the US have shown superconductivity at the highest temperatures ever recorded.

The team at the Argonne National Laboratory found a material with superconductivity at temperatures of about -23 degrees Centigrade, a jump of about 50 degrees compared to the previous confirmed record.

Though the superconductivity happened under extremely high pressure, the result still represents a big step toward creating superconductivity at room temperature, the ultimate goal for scientists seeking to use this phenomenon for advanced technologies.

Just as a copper wire conducts electricity better than a rubber tube, certain kinds of materials are better at becoming superconductive, a state defined by two main properties: The material offers zero resistance to electrical current and cannot be penetrated by magnetic fields. The potential uses for this are as vast as they are exciting: electrical wires without diminishing currents, extremely fast supercomputers and efficient magnetic levitation trains.

Recent theoretical predictions have shown that a new class of materials of superconducting hydrides could pave the way for higher-temperature superconductivity. Researchers at the Max Planck Institute for Chemistry in Germany teamed up with University of Chicago researchers to create one of these materials, called lanthanum superhydrides, test its superconductivity, and determine its structure and composition.

The only catch was that the material needed to be placed under extremely high pressure, between 150 and 170 gigapascals, more than one and a half million times the pressure at sea level. Only under these high-pressure conditions did the material, a tiny sample only a few microns across, exhibit superconductivity at the new record temperature.
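The "one and a half million times" claim can be sanity-checked against standard sea-level atmospheric pressure of 101.325 kPa:

```python
# Convert the experimental pressure range to multiples of one atmosphere.
ATM_PA = 101_325          # standard atmosphere at sea level, in pascals
low_gpa, high_gpa = 150, 170

low_ratio = low_gpa * 1e9 / ATM_PA
high_ratio = high_gpa * 1e9 / ATM_PA
print(f"150 GPa ≈ {low_ratio / 1e6:.2f} million atmospheres")   # ~1.48
print(f"170 GPa ≈ {high_ratio / 1e6:.2f} million atmospheres")  # ~1.68
```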

The material showed three of the four characteristics needed to prove superconductivity: It dropped its electrical resistance, decreased its critical temperature under an external magnetic field and showed a temperature change when some elements were replaced with different isotopes. The fourth characteristic, called the Meissner effect, in which the material expels any magnetic field, was not detected. That's because the material is so small that this effect could not be observed, researchers said.

They used the Advanced Photon Source at Argonne National Laboratory, which provides ultra-bright, high-energy X-ray beams that have enabled breakthroughs in everything from better batteries to understanding the Earth's deep interior, to analyze the material. In the experiment, researchers within University of Chicago's Center for Advanced Radiation Sources squeezed a tiny sample of the material between two tiny diamonds to exert the pressure needed, then used the beamline's X-rays to probe its structure and composition.

Because the temperature used in the experiment is within the normal range of many places in the world, the ultimate goal of room-temperature superconductivity, or at least 0 degrees Celsius, seems within reach.

Tuesday, May 21, 2019

Achronix turns to network-on-chip for AI accelerators in 7nm FPGA

By Nick Flaherty

Achronix Semiconductor has launched its latest FPGA family aimed at artificial intelligence, machine learning and high-bandwidth data acceleration applications. 

The Achronix Speedster7t family is based on a new architecture built around a 2D network-on-chip (NoC) and a high-density array of new machine learning processor (MLP) blocks, both optimised for high-bandwidth and AI/ML workloads. This blending of FPGA programmability with ASIC routing structures and compute engines boosts performance.

“The growth potential for AI/ML is astounding, and the use cases are rapidly evolving, and we are offering a new solution to address the varying requirements of high performance, flexibility and time to market,” said Robert Blake, president and CEO of Achronix Semiconductor. “Our Speedster7t family breaks new ground as the first solution to deliver FPGA adaptability with ASIC-like performance. We believe our new ‘FPGA+’ class of technology truly pushes the boundaries in the high-performance market.”
Manufactured on TSMC’s 7nm FinFET process, Speedster7t devices are designed to accept massive amounts of data from multiple high-speed sources, distribute that data to programmable on-chip algorithmic and processing units, and then deliver those results with the lowest possible latency. They include high-bandwidth GDDR6 interfaces, 400G Ethernet ports, and PCI Express Gen5 — all interconnected to deliver ASIC-level bandwidth while retaining the full programmability of FPGAs.

“The new Achronix Speedster7t FPGA family is a prime example of the explosion of innovative silicon architectures created to handle massive amounts of data that are aimed directly at AI applications”, said Rich Wawrzyniak, principal market analyst for ASIC and SoC at Semico Research Corp. “Combining math functions, memory and programmability into their machine learning processor, combined with the cross chip, two-dimensional NOC structure, is a brilliant method of eliminating bottlenecks and ensuring the free flow of data throughout the device. In AI/ML applications, memory bandwidth is everything and the Achronix Speedster7t delivers impressive performance metrics in this area.” Semico’s forecast shows the market size for FPGAs in AI applications will grow by 3x in the next four years to over $4.8B.

The massively parallel array of programmable compute elements within the new machine learning processors (MLPs) comprises highly configurable, compute-intensive blocks that support integer formats from 4 to 24 bits and efficient floating-point modes, including direct support for TensorFlow's 16-bit format as well as a block floating-point format that doubles the compute engines per MLP.

The MLPs are tightly coupled with embedded memory blocks, eliminating the traditional delays associated with FPGA routing to ensure that data is delivered to the MLPs at the maximum performance of 750 MHz. This combination of high-density compute and high-performance data delivery results in a processor fabric that delivers the highest usable FPGA-based tera-operations (TOps) per second.

The family includes GDDR6 high speed memory controllers, each capable of supporting 512 Gbps of bandwidth. The up to eight GDDR6 controllers in a Speedster7t device can support an aggregate GDDR6 bandwidth of 4 Tbps, delivering the equivalent memory bandwidth of an HBM-based FPGA at a fraction of the cost.
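The aggregate figure follows directly from the per-controller number:

```python
# Aggregate GDDR6 bandwidth = per-controller bandwidth x controller count.
PER_CONTROLLER_GBPS = 512
MAX_CONTROLLERS = 8

aggregate_tbps = PER_CONTROLLER_GBPS * MAX_CONTROLLERS / 1000
print(f"Aggregate GDDR6 bandwidth: {aggregate_tbps:.1f} Tbps")
```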

Along with this memory bandwidth, Speedster7t devices include the industry’s highest performance interface ports to support extremely high-bandwidth data streams. Speedster7t devices have up to 72 of the industry’s highest performance SerDes that can operate from 1 to 112 Gbps plus hard 400G Ethernet MACs with forward error correction (FEC), supporting 4x 100G and 8x 50G configurations, plus hard PCI Express Gen5 controllers with 8 or 16 lanes per controller.

The 2D NoC spans horizontally and vertically over the FPGA fabric, connecting to all of the FPGA’s high-speed data and memory interfaces. Each row or column in the NoC is implemented as two 256-bit, unidirectional industry-standard AXI channels operating at 2 GHz, delivering 512 Gbps of data traffic in each direction simultaneously.
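The quoted per-direction figure is simply the AXI channel width multiplied by its clock rate:

```python
# Per-direction NoC bandwidth for one row/column: 256-bit AXI channel at 2 GHz.
CHANNEL_WIDTH_BITS = 256
CLOCK_GHZ = 2

gbps_per_direction = CHANNEL_WIDTH_BITS * CLOCK_GHZ  # bits x GHz = Gbit/s
print(f"NoC bandwidth per direction: {gbps_per_direction} Gbps")  # 512 Gbps
```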

Most importantly, the NoC eliminates the congestion and performance bottlenecks that occur in traditional FPGAs that use the programmable routing and logic lookup table (LUT) resources to move data streams throughout the FPGA. This high-performance network not only increases the overall bandwidth capacity of Speedster7t FPGAs, but also increases the effective LUT capacity while reducing power.

The FPGAs include bitstream security features with multiple layers of defence for protecting bitstream secrecy and integrity. Keys are encrypted based on a tamper-resistant physically unclonable function (PUF), and bitstreams are encrypted and authenticated by 256-bit AES-GCM. To defend against side-channel attacks, bitstreams are segmented, with separately derived keys used for each segment, and the decryption hardware employs differential power analysis (DPA) countermeasures. A 2048-bit RSA public key authentication protocol is used to activate the decryption and authentication hardware.

The Speedster7t FPGA devices range from 363K to 2.6M 6-input LUTs. The first devices and development boards for evaluation will be available in Q4 2019.