All the latest quantum computer articles

See the latest stories on quantum computing from eeNews Europe

Thursday, August 31, 2017

First LTE narrowband IoT smart street lighting system deployed in Greece

By Nick Flaherty

The world's first LTE-based narrowband IoT (NB‑IoT) connected smart street lighting control system is rolling out in a pilot project on the OTE (Telekom) network in Patras, Greece.

The lights use the inteliLIGHT technology from Flashnet in Romania with the SARA-N2 series of NB‑IoT (LTE Cat NB1) modules from u-blox.

While this is the first LTE Cat NB1 system, there are other narrowband smart street lighting systems being deployed using LPWAN systems, proprietary protocols or Echelon's LonWorks.
With hundreds of existing projects deployed worldwide, InteliLIGHT’s smart street lighting remote management software uses a range of communication protocols. Having identified NB‑IoT as strategically important, InteliLIGHT chose to partner with u‑blox on the development of its FRE‑220‑NB range of NB‑IoT compatible luminaire controllers.
The new family of controllers can be embedded into most luminaire designs and enables individual remote control (on/off, dimming) of LED street lights with electronic ballasts up to 400W. Smart control extends to monitoring a wide range of electrical parameters, over-the-air (OTA) updates and support for autonomous operation.
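As an illustration of the kind of per-luminaire command such a controller might accept, here is a hypothetical sketch in Python. The message format, field names and value ranges are invented for illustration; the actual inteliLIGHT/NB-IoT protocol is not described in the article.

```python
# Hypothetical per-luminaire dimming command. The JSON structure,
# field names and ranges here are invented for illustration only;
# they do not come from Flashnet or u-blox documentation.

import json

def make_dimming_command(luminaire_id, level_pct):
    """Build a JSON command dimming one luminaire to level_pct (0-100)."""
    if not 0 <= level_pct <= 100:
        raise ValueError("dim level must be 0-100%")
    return json.dumps({
        "id": luminaire_id,
        "cmd": "dim",
        "level": level_pct,   # 0 = off, 100 = full brightness
    })

print(make_dimming_command("lamp-042", 60))
```

A real deployment would wrap such a payload in the NB-IoT transport and add authentication, but the on/off and dimming semantics described in the article map naturally onto this kind of small, infrequent message.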

The NB‑IoT protocol targets IoT applications with low bandwidth requirements, making it ideal for smart lighting. As part of the LTE family of standards, NB‑IoT can be supported within existing LTE infrastructure, offering carrier-grade reliability and security as well as excellent penetration and stability. u‑blox’s SARA‑N2 series was the world’s first NB‑IoT module and combines ultra-low power consumption with an extended temperature range in a small LGA (Land Grid Array) form factor.
“The connected street lighting industry already uses a wide range of communication protocols,” commented Lorand Mozes, CEO at Flashnet. “We believe the industry is ready to adopt NB‑IoT and thanks to our work with u-blox we are the first supplier able to offer a robust and fully featured product line-up. By selecting the SARA‑N2 modules from u‑blox we have been able to bring our FRE‑220NB range of NB‑IoT connected luminaires to market in a very short period of time, to complement our existing range and offer our customers even greater choice.”

“Smart cities will be empowered by NB‑IoT,” said Samuele Falcomer, Product Manager Cellular at u‑blox. “Working with InteliLIGHT to help bring the FRE‑220NB range of luminaire controllers to market demonstrates that the SARA‑N2 series of modules supports applications across all industries in benefiting from NB‑IoT connectivity.”

Current Flashnet pilot projects are underway worldwide to prove the reliability of NB-IoT technology with plans to turn them into large scale street lighting control implementations. Considering the market potential and the evolution of connected street lighting, Flashnet is expecting to sell a few hundred thousand controllers within 3-5 years.


Related stories:

Wednesday, August 30, 2017

Design reduces size of antennas by 100x

By Nick Flaherty

Researchers at Northeastern University in the US have developed a technique that can produce antennas 10 to 100 times smaller than today's systems, bringing the prospect of dramatically smaller embedded systems for Internet of Things, wearable and medical designs.

In a paper published in Nature Communications, Nian Sun, professor of electrical and computer engineering at Northeastern and colleagues describe a new approach to designing antennas. This enables researchers to construct antennas that are up to a hundred times smaller than currently available antennas, said Sun.

"A lot of people have tried hard to reduce the size of antennas. This has been an open challenge for the whole society," he said. "We looked into this problem and thought, 'why don't we use a new mechanism?'"

Traditional antennas are restricted by the wavelength of the RF signal, and while techniques such as fractal designs have helped to reduce the area of the antenna, progress has been slow. 

Instead of designing antennas at the electromagnetic wave resonance, the researchers tailored the antennas to acoustic resonance. At a given frequency, acoustic wavelengths are roughly ten thousand times smaller than electromagnetic wavelengths, which translates to an antenna one to two orders of magnitude (10 to 100 times) smaller.

Since the acoustic resonator operates at the same frequency as the electromagnetic waves it transmits and receives, the new antennas still work for cell phones and other wireless communication devices. The researchers found their antennas performed better than traditional designs.
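The scale of the shrinkage follows directly from the wavelength relation λ = v/f. A quick sketch, using the free-space speed of light and an assumed acoustic velocity of a few km/s typical of piezoelectric materials (the paper's exact figures are not given in the article):

```python
# Illustrative comparison of electromagnetic vs acoustic wavelength
# at the same operating frequency. The acoustic velocity is an
# assumed ballpark figure for a piezoelectric resonator, not a
# value from the Nature Communications paper.

C_EM = 3.0e8        # speed of light in free space, m/s
V_ACOUSTIC = 3.0e3  # assumed acoustic velocity in the resonator, m/s

def wavelength(velocity_m_s, freq_hz):
    """Wavelength = propagation velocity / frequency."""
    return velocity_m_s / freq_hz

f = 2.4e9  # 2.4 GHz, a common wireless band
lam_em = wavelength(C_EM, f)        # ~0.125 m
lam_ac = wavelength(V_ACOUSTIC, f)  # ~1.25 um
print(f"EM wavelength:       {lam_em*1e3:.1f} mm")
print(f"Acoustic wavelength: {lam_ac*1e6:.2f} um")
print(f"Ratio: {lam_em/lam_ac:,.0f}x")
```

With these assumed numbers the wavelength ratio is about 10^5; the exact factor depends directly on the acoustic velocity of the resonator material.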

One such application that neurosurgeons are interested in exploring is a device that could sense neuron behaviour deep in the brain. But bringing this idea to life has stumped researchers, until now. "Something that's millimeters or even micrometers in size would make biomedical implantation much easier to achieve, and the tissue damage would be much less," said Sun.

Tuesday, August 29, 2017

Intel goes big on AI at the edge

By Nick Flaherty

Intel aims to dominate embedded AI from the network edge to the data centre

The Myriad X vision processor developed by its Movidius subsidiary is the world's first system-on-chip (SoC) shipping with a dedicated Neural Compute Engine for accelerating deep learning inference at the edge. The Neural Compute Engine is an on-chip hardware block specifically designed to run deep neural networks at high speed and low power without compromising accuracy, enabling devices to see, understand and respond to their environments in real time.

The chip is capable of 1 TOPS (trillion operations per second) of compute performance on deep neural network inference, within a total of 4 TOPS of vision processing in a 1.5W power footprint for edge applications.

This follows the use of the Intel Stratix 10 FPGAs for real time AI in the data centre and competitor Qualcomm's recent purchase of Scyfer in Amsterdam.
“We’re on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day,” said Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group. “Enabling devices with humanlike visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much AI and vision compute power possible, all within the unique energy and thermal constraints of modern untethered devices.”

In addition to its Neural Compute Engine, Myriad X combines imaging, visual processing and deep learning inference in real time with 16 programmable 128-bit VLIW vector processors that run multiple imaging and vision application pipelines simultaneously.

The 16nm chip supports 16 configurable MIPI Lanes that connect up to 8 HD resolution RGB cameras directly to Myriad X to support up to 700 million pixels per second of image signal processing throughput.
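A back-of-envelope check of that throughput figure, assuming "HD" means 1920×1080 (the article does not state the resolution or frame rate):

```python
# Back-of-envelope check of the quoted 700 Mpixel/s figure,
# assuming "HD" means 1920x1080. The resolution and frame rate
# are assumptions; the article specifies neither.

PIXELS_HD = 1920 * 1080       # ~2.07 Mpixels per frame
CAMERAS = 8
BUDGET = 700e6                # pixels per second quoted

fps_per_camera = BUDGET / (CAMERAS * PIXELS_HD)
print(f"{fps_per_camera:.0f} fps per camera")
```

Under those assumptions the budget works out to roughly 42 frames per second per camera, comfortably above typical 30 fps video.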

There are also 20 hardware accelerators to perform tasks such as optical flow and stereo depth without introducing additional compute overhead.

Related stories:

u-blox teams with Atoll for LPWAN across India’s smart cities

By Nick Flaherty

Swiss module maker u‑blox has teamed up with Indian IoT gateway platform, sensor node and wireless module provider Atoll Solutions to roll out an IoT starter kit based on LTE Cat M1 and Narrowband IoT (NB-IoT).

With an extensive Smart City program already announced, India’s infrastructure is moving towards low-power wide-area (LPWA) technologies to enable Smart Street Lighting and Smart Metering. The starter kit provides a development platform for nodes and gateways based on LTE Cat M1 and NB-IoT.

Supporting both the NB-IoT (LTE Cat NB1) SARA-N2 and LTE Cat M1/NB1 SARA‑R4 module series from u‑blox, the starter kit will enable the rapid development of IoT solutions, ready for deployment in a Smart Grid. Future development will include complete reference designs incorporating other u‑blox solutions, including Bluetooth Low Energy and GNSS modules.
“Many leading cellular operators in India are already offering LTE Cat M1 and NB‑IoT support, with many more about to follow,” said Rado Sustersic, Product Manager, Product Center Cellular at u-blox. “This will help accelerate the adoption of low-power wide-area (LPWA) technologies for Smart Cities and provide OEMs with the tools they need to get to market quickly.”

“Narrowband LTE has the power to add affordable and reliable connectivity to a wide range of assets, creating truly Smart Cities through Smart Lighting and Smart Metering”, said Jithu Niruthambath, Founder and CEO of Atoll Solutions. “We believe this starter kit will enable our customers to build and deploy scalable solutions in the IoT with ease and confidence.”

Related stories:

Monday, August 28, 2017

Power news this week

. Seiko Instruments chip arm changes name and owner

. Osram buys US IoT software developer for subscription business

. UK grid can't cope with mass electric vehicles says report

. Solar eclipse doesn't phase power grid

. Breakthrough for magnesium batteries

. Polystyrene microgel cuts cost of perovskite solar cells at Manchester
. Chinese researchers turn steel yarn into batteries

. Exeter startup embeds solar cells and focusing optics into construction glass blocks

. 15 and 30W open frame power supplies aim at medical applications

. Gate driver photocoupler drives medium power IGBTs and power MOSFETs

. Power management is key to 10nm 48 core ARM server chip


Embedding OLEDs in fabric for wearable displays

By Nick Flaherty

A research team led by Professor Kyung Cheol Choi at the School of Electrical Engineering at KAIST in South Korea has developed wearable OLED (organic light-emitting diode) displays for various applications including fashion, IT, and healthcare.

The team used two different approaches, fabric-type and fibre-type, for clothing-shaped wearable displays. In 2015, the team successfully laminated a thin planarization sheet thermally onto fabric to form a surface compatible with OLEDs that are approximately 200nm thick. The team also reported research on enhancing the reliability of operating fibre-based OLEDs. In 2016, it introduced a dip-coating method, capable of uniformly depositing layers, to develop polymer light-emitting diodes that show high luminance even on thin fabric.

Working with local materials company KOLON Glotech, Seungyeop Choi used the research to develop the fabric-based OLEDs, showing high luminance and efficiency while maintaining the flexibility of the fabric.

The long-term reliability of the wearable device, which the team claims has the world's best electrical and optical characteristics, was verified through its self-developed organic and inorganic encapsulation technology. According to the team, the device allows the OLEDs to operate even at a bending radius of 2mm.

"Having wavy structures and empty spaces, fibre plays a significant role in lowering the mechanical stress on the OLEDs," said Prof Choi. "Screen displayed on our daily clothing is no longer a future technology. Light-emitting clothes will have considerable influence on not only the e-textile industry but also the automobile and healthcare industries."

The team believes these OLEDs have the world's best luminance and efficiency and are the most flexible fabric-based light-emitting device among those reported.

Related stories:

Qualcomm boosts its embedded AI development with Scyfer buy

By Nick Flaherty

Qualcomm Technologies has bought an AI spin out from the University of Amsterdam to boost its embedded AI capabilities, including new hardware designs.

Scyfer has built AI systems for companies around the world and in a number of different industries, such as manufacturing, healthcare and finance. There is increasing interest in embedded AI capabilities in devices.

“We started fundamental research a decade ago, and our current products now support many AI use cases from computer vision and natural language processing to malware detection on a variety of devices — such as smartphones and cars — and we are researching broader topics, such as AI for wireless connectivity, power management and photography,” said Matt Grob, executive vice president, technology, Qualcomm Incorporated.

Qualcomm Technologies is focused on the implementation of AI on end devices – smartphones, cars, robotics, and the like – to ensure that processing can be done with or without a network or Wi-Fi connection. This includes network optimisation for on-device applications, covering compression, inter-layer optimisations and optimisations for sparsity, as well as other techniques to make better use of memory and space/time complexity. It also covers specialised hardware architectures designed to accelerate machine learning workloads with greater performance and energy efficiency in embedded devices.
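One of the techniques mentioned, optimisation for sparsity, can be illustrated with textbook magnitude-based weight pruning. This generic sketch is not Qualcomm's implementation, just the standard idea of zeroing the smallest weights so that sparse storage and compute can skip them:

```python
# Generic magnitude-based weight pruning: zero out the
# smallest-magnitude fraction of weights so downstream sparse
# kernels can skip them. A textbook technique, not Qualcomm's
# specific method.

def prune_weights(weights, sparsity=0.5):
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    if sparsity <= 0:
        return list(weights)
    ranked = sorted(weights, key=abs)
    cutoff = abs(ranked[int(len(ranked) * sparsity) - 1])
    return [0.0 if abs(w) <= cutoff else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune_weights(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice pruning is applied per layer and followed by fine-tuning, but the memory and compute savings come from exactly this kind of zeroing.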

The acquisition of Scyfer brings with it Scyfer founder Dr. Max Welling, a renowned professor at the University of Amsterdam, whose expertise will help to further advance AI research and development at Qualcomm Technologies. Dr. Welling will continue his role as a professor at the University of Amsterdam, and the rest of the Scyfer team will continue to be based in Amsterdam.

In 2015, Qualcomm Technologies and the University of Amsterdam also established QUVA, a joint research lab focused on advancing cutting-edge machine learning techniques for mobile and computer vision, and the company says it will continue to work with the University of Amsterdam going forward.

Saturday, August 26, 2017

Supermicro puts 18 Million IOPS of Storage in 2U hot swap SSD server

By Nick Flaherty

Supermicro has designed 1U and 2U servers that support 20 hot-swap NVMe SSDs with non-blocking Gen 3 PCI-E x4 direct connections, delivering up to 18 million I/O operations per second (IOPS).

A 1U JBOF (Just a Bunch of Flash) server supports 32 hot-swap NVMe drives.

"To achieve the lowest possible latency, Supermicro's new all-flash 1U and 2U Ultra servers are designed to support 20 directly attached hot-swap NVMe SSDs," said Charles Liang, President and CEO of Supermicro. "These new X11 servers feature a non-blocking design, allocating 80 PCI-E lanes to the 20 NVMe SSDs for Gen 3 PCI-E x4 direct connections that achieve maximum storage performance."

The X11 Ultra servers fully support the high end Intel Xeon Scalable processors up to 205 W and 24 DIMMs making these servers an excellent choice for high-performance analytics and in-memory application acceleration. The system architecture is balanced to make optimal use of system resources with each processor supporting 10 NVMe drives and dual 25G ports or a 100G port.

Friday, August 25, 2017

AI is more important than big data for IoT says survey

By Nick Flaherty

A recent survey of 1,000 IoT professionals by GlobalData has shown a heavy reliance on traditional business intelligence (BI) software. 40% of those surveyed ranked business intelligence platforms well above all other means of analysing data. 

Using AI at the edge is a more effective way of addressing large data problems and ensuring business continuity, says the report. This is a key trend that has been followed by the Embedded blog.

The trend toward distributed data means these all-in-one BI software platforms have already given way to numerous smaller, more discrete ways of deriving value from enterprise data, be that a direct SQL query, a predictive data modeller, an auto-generated data discovery visualisation, or a live, interactive executive dashboard. 

This reluctance within IoT to follow the broader market away from BI platforms is concerning, says GlobalData, given how often IoT deployments fail during their lifecycle. In 2016, no failures were noted post-deployment; in 2017, however, that number shot up to 12%.

“With deployment and maintenance costs also topping our survey as the number one reason IoT deployments fail or are abandoned prior to deployment, it becomes clear that IoT practitioners should emphasize tactical benefits over strategic analytical insights, at least at the outset of a project, as a means of proving ROI and securing future investment from the business,” said Brad Shimmin, Service Director for Global IT Technology and Software at GlobalData.

Artificial Intelligence (AI), however, can do far more than inform. It can immediately prove the value of IoT as a means of optimising existing business processes. With even the simplest machine learning (ML) framework and model at the ready, for example, IoT practitioners can solve two pressing problems: detecting anomalies and predicting desired outcomes.
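A minimal example of the first of these, anomaly detection close to the device, is a rolling z-score over a sensor stream. This is a generic illustration of lightweight edge analytics, not a method from the GlobalData report:

```python
# A minimal edge-style anomaly detector: a rolling z-score over a
# sensor stream. Illustrates the kind of lightweight on-device
# analytics the report advocates; not from the GlobalData survey.

from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)   # recent readings
        self.threshold = threshold           # z-score cutoff

    def observe(self, x):
        """Return True if x is anomalous versus the recent window."""
        anomalous = False
        if len(self.window) >= 10:  # need some history first
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9     # avoid divide-by-zero
            anomalous = abs(x - mean) / std > self.threshold
        self.window.append(x)
        return anomalous

det = RollingAnomalyDetector()
readings = [20.0 + 0.1 * (i % 5) for i in range(100)] + [35.0]
flags = [det.observe(r) for r in readings]
print(flags[-1])  # the 35.0 spike is flagged: True
```

A detector this small runs comfortably on a constrained edge node and only needs to transmit the anomalies, rather than streaming every reading back to a central BI platform.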

GlobalData’s survey shows that enterprise buyers are eager to do just that with 43% indicating that the best role for AI is to centrally automate and optimise business processes.

The problem lies within the idea of centralisation. Centralisation is part and parcel of traditional BI analysis and reporting and of traditional ideas like predictive modelling. Where AI is most valuable, however, is at the edge. IoT deployments need to employ tools like ML not centrally but at the edge, close to the device itself. And like today’s enterprise software, those analytics endeavours should be brief, to the point, and focused on solving specific challenges.

This approach will not solve the full set of potential problems, but it is affordable and will have a direct and immediate impact on businesses, helping to prove the value of IoT one problem at a time.

VIA teams with Microsoft Azure for pre-certified IoT hardware

By Nick Flaherty

VIA Technologies has joined Microsoft Azure's Certified for Internet of Things (IoT) programme for hardware and software that has been pre-tested and verified to work with Microsoft Azure IoT services. 

VIA has developed a portfolio of embedded platform and systems for a variety of Enterprise IoT and Smart City applications, including energy management, industrial automation, in-vehicle transportation, healthcare, building automation, and digital signage. The verified hardware platforms facilitate the integration of sophisticated cloud-based applications.

“Microsoft Azure Certified for IoT validates our ability to jumpstart customers’ IoT projects with pre-tested device and operating system combinations,” said Richard Brown, Vice-President of International Marketing at VIA Technologies. “Decreasing the usual customization and work required for compatibility ensures VIA helps customers get started quickly on their IoT solution.”

“Microsoft Azure Certified for IoT extends our promise to bring IoT to business scale, starting with interoperable solutions from leading technology companies around the world,” said Jerry Lee, Director of Marketing for Azure Internet of Things at Microsoft. “With trusted offerings and verified partners, Microsoft Azure Certified for IoT accelerates the deployment of IoT even further.”

Details are at Azure Certified for IoT and the Azure IoT Suite.

Related stories:

Microsoft uses Stratix FPGAs for real time AI engine

By Nick Flaherty

Microsoft has developed a real time engine for artificial intelligence using the latest 14nm Stratix 10 field programmable devices from Intel. The technique removes software from the deep learning loop to reduce the latency.

Project Brainwave was disclosed at this week's Hot Chips conference as a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. It was designed for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.

Competitors Xilinx in FPGAs and NVIDIA in graphics processing units have both been accelerating these AI deep learning algorithms into embedded and IoT applications.

The Project Brainwave system is built with three main layers:
  • A high-performance, distributed system architecture;
  • A hardware deep neural network (DNN) engine synthesized onto FPGAs; and
  • A compiler and runtime for low-friction deployment of trained models.
First, Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years in the Azure platform. By attaching the high-performance FPGAs directly to the datacentre network, the DNNs can operate as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.

Project Brainwave also uses a powerful “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs, rather than a hardened, fixed DPU. Although some of these chips have high peak performance, they must choose their operators and data types at design time, which limits their flexibility. Project Brainwave takes a different approach, providing a design that scales across a range of data types, with the desired data type being a synthesis-time decision.

The design combines the hard ASIC digital signal processing blocks on the FPGAs with synthesizable logic to provide a greater and more optimised number of functional units. This exploits the FPGA’s flexibility with highly customised, narrow-precision data types that increase performance without real losses in model accuracy. It also allows research innovations to be incorporated quickly through the synthesizable elements, with updates in a matter of weeks.

The system incorporates a software stack designed to support a wide range of popular deep learning frameworks. Microsoft has developed a graph-based intermediate representation that converts models trained in these frameworks and allows them to be compiled down to the FPGA infrastructure.

Even on early Stratix 10 silicon, the ported Project Brainwave system ran a large GRU (gated recurrent unit) model—five times larger than Resnet-50 that Microsoft showed a few years ago—with no batching, and achieved record-setting performance. 

The demo used Microsoft’s custom 8-bit floating point format (“ms-fp8”), which does not suffer accuracy losses (on average) across a range of models, running at 39.5 Teraflops on this large GRU and completing each request in under one millisecond. At that level of performance, the Brainwave architecture sustains execution of over 130,000 compute operations per cycle, driven by one macro-instruction issued every 10 cycles.
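The two quoted figures are self-consistent: 39.5 Tflops sustained at roughly 130,000 operations per cycle implies a clock around 300 MHz. The article does not state the FPGA clock; this is simply the arithmetic implied by the two numbers:

```python
# Consistency check on the quoted Brainwave figures. The FPGA clock
# is not stated in the article; this derives the value the two
# published numbers imply.

TFLOPS = 39.5e12          # sustained throughput, ops/s
OPS_PER_CYCLE = 130_000   # quoted compute operations per cycle

implied_clock_hz = TFLOPS / OPS_PER_CYCLE
print(f"Implied clock: {implied_clock_hz/1e6:.0f} MHz")  # ~304 MHz
```

A clock in the low hundreds of MHz is entirely plausible for a large synthesized design on Stratix 10 silicon, which supports the claim that the throughput comes from width (ops per cycle) rather than raw clock speed.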

As the system is tuned over the next few quarters, the team expects significant further performance improvements.

Microsoft shortly plans to detail how Azure customers will be able to run their most complex deep learning models at this level of performance. 

Related stories:

Arkessa teams with telent for German LPWAN IoT network

By Nick Flaherty

German low power wireless network operator telent is working with IoT software provider Arkessa to roll out LoRaWAN nodes in the Energy, Transport, Industry 4.0 and Smart City sectors.

Netzikon, a public German LoRaWAN network operator and subsidiary of telent, will use Arkessa's connectivity management, network roaming and localization solutions such as eUICC to deliver digital services for the connection and networking of intelligent equipment units based on the LoRaWAN radio technology.

"The partnership between Netzikon, telent and Arkessa, all members of the LoRa Alliance, demonstrates a commitment to simplifying and future-proofing Enterprise IoT. Together we are helping global organisations to deploy IoT solutions faster and more efficiently," said Andrew Orrock, CEO of Arkessa.

telent is a member of the euromicron group and provides networks and systems for critical infrastructures as well as operational and safety-related communication. 

Cambridge-based Arkessa connects devices and services to the IoT, regardless of location, network operator or wireless technology. By aggregating multiple global mobile networks and technologies – cellular (2G, 3G, 4G), satellite and low-power wide-area (LPWAN) – into a single service and management platform, Arkessa enables IoT devices to connect out of the box and operate anywhere on the planet. This secure and future-proof service platform is easy to adopt, integrate and scale, enabling enterprises and OEMs to optimise design, manufacturing and logistics and focus on new revenue-generating services.

Related stories:

Thursday, August 24, 2017

Osram buys Digital Lumens for IoT income

Lighting equipment maker Osram is to buy US developer Digital Lumens for its subscription-based cloud software for the industrial Internet of Things (IoT).

The Digital Lumens software platform can be used to run applications covering everything from intelligent lighting control, energy use, and security systems to the measurement of environmental parameters such as air quality, and customers pay a monthly service charge to access data that is continually recorded and analysed by their lighting management system.

This is very different from Osram's retail business of selling LED light bulbs, even if they are smart devices that can be controlled by a Bluetooth link to a smartphone app.

“The acquisition of Digital Lumens puts Osram in a strong position when it comes to offering future-focused digital solutions for the facilities management sector and IoT applications,” said Stefan Kampmann, Chief Technology Officer at OSRAM Licht. “By integrating software and sensors in a single platform, we will be able to give businesses a deeper insight into the environment within their buildings and their utilization of space. As a company that understands space, Osram is taking the next step in developing new business models that go beyond lighting. What’s more, the platform is also compatible with light products made by other manufacturers.”

Osram is already planning to integrate its existing digital services into the platform. This includes the navigation and location solution Einstone, which uses Bluetooth to send targeted offers to users’ smartphones, for example, when they are in retail environments.

By Nick Flaherty

Related stories:

Chiplet developer zGlue emerges from stealth with two key partnerships

By Nick Flaherty

After two years of technology and product development and working with lead customers, Internet of Things (IoT) chiplet startup zGlue has launched with two key partnerships.
zGlue has developed a new category of semiconductor/software-as-a-service (SaaS) company by providing an easy way to combine IoT chips of all kinds. This provides 10X better integration than traditional systems-on-chip (SoCs), says the company, with lower cost, greater system flexibility, minimal risk and faster time to market.

zGlue has brought together software, design tools and an integration platform with a smart fabric interposer. Developers visit the ZiPlet Store to select and configure the features they need from chiplets – a range of proven chips provided by existing trusted vendors – and zGlue automatically generates validated options for their market-specific ZiP implementation on the zGlue Smart Fabric.

The ZiP design tool then automatically generates both hardware and software development environments, enabling product teams to immediately begin validating fit and function within the context of their designs. As a result, products can go from idea to design to functional prototypes in a matter of days, and from prototypes to high-volume production in a few weeks, without sacrificing differentiation, integration or cost effectiveness.

It is working with BOE Technology Group, an IoT company in China providing intelligent interface products and services for information interaction and human health, as a customer and packaging giant ASE in Taiwan as a manufacturing partner.

“We view zGlue’s unique technology as developed to help simplify the application design of IoT devices, which we think is of high market value," said Yao Xiangjun, CEO of the Smart System Business Group at BOE. "BOE also expects to cooperate closely with zGlue in products and technologies, and we hope to rapidly introduce highly-integrated IoT products.”

Growth in private data outpaces public internet to reach 5,000Tbps by 2020 - data

By Nick Flaherty

The capacity of private data exchange between businesses is projected to outpace the public Internet, growing at nearly twice the rate and comprising nearly six times the volume of global IP traffic by 2020, according to the first Global Interconnection Index.

The private internet of M2M data and file exchange will dwarf the public internet of the World Wide Web or the dark web. This Interconnection Bandwidth is expected to grow at a 45 percent CAGR to reach 5,000 Tbps by 2020, exceeding global IP traffic in both growth (24 percent) and volume (855 Tbps). It is also expected to grow faster than Multiprotocol Label Switching (MPLS), the legacy model of business connectivity, by a factor of 10 (45 percent versus four percent).
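Working the forecast backwards, a 45 percent CAGR reaching 5,000 Tbps by 2020 implies a baseline of roughly 1,100 Tbps four years earlier, assuming a 2016 base year (which the article does not state):

```python
# Working backwards from the Equinix forecast. The 2016 base year
# is an assumption; the article gives only the target and the CAGR.

TARGET_TBPS = 5000
CAGR = 0.45
YEARS = 4  # assumed 2016 -> 2020

baseline = TARGET_TBPS / (1 + CAGR) ** YEARS
print(f"Implied 2016 baseline: {baseline:.0f} Tbps")  # ~1131 Tbps
```

The exercise shows how aggressive the forecast is: the projected private interconnection capacity more than quadruples over the forecast window.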

The study by Equinix analysed the adoption profile of thousands of carrier-neutral co-location data centre providers. As business models become increasingly digital, distributed and dependent on the real-time engagement of many more users, partners and service providers, the Index highlights how companies are translating digital transformation into action and creating entirely new ways of connecting with their customers, partners and supply chain.

The Index will provide an annual baseline to track, measure and forecast the growth of Interconnection Bandwidth, defined as the total capacity provisioned to privately and directly exchange traffic with a diverse set of counterparties and providers at distributed IT exchange points.

"Some of the greatest technology trends of our lifetime, including mobile, social, cloud and the explosion of data, are creating disruption on the scale of the Industrial Revolution," said Sara Baack, Chief Marketing Officer for Equinix. "In this new reality, it's a 'scale-or-fail' proposition and companies are succeeding by adopting Interconnection, locating their IT infrastructure in immediate proximity to an ecosystem of companies that gather to physically connect their networks to those of their customers and partners. Interconnection helps fuel digital transformation by supporting multicloud consumption at scale, improving network latency and performance, enabling greater operational control, and reducing security risk."

[Chart: Interconnection Installed Bandwidth Capacity (Tbps), total by region]

The Index also forecasts Interconnection Bandwidth by use case for both enterprises and service providers. The largest use case is associated with traditional IT deployment models, in which businesses connect to network providers as an intermediary path to reach business partners and customers. 

However, the fastest growing use case is enterprises connecting directly to a range of cloud and IT service providers, confirming the shift of IT infrastructure from centralised, enterprise-owned data centres to decentralised, physically dispersed multi-cloud environments. 

Telecommunications is projected to be the second largest segment, with the need to provide coverage in many new locations and support the proliferation of connected devices and sensors. The third largest segment is projected to be cloud and IT services.

[Chart: Interconnection Installed Bandwidth Capacity (Tbps) by industry, including Banking & Insurance and Cloud & IT Services]