The scale of the Internet of Things (IoT) is giving quality assurance (QA) teams a real headache, not least for testing the security of the whole network.
US software developer Zephyr has been looking at the challenges of test strategies for the IoT, as the multitude of platforms and devices makes traditional approaches to testing unworkable.
Test automation has become a fixture of modern quality assurance, in large part because IT organizations now have to churn out applications as well as updates and patches quickly.
A 2015 online survey of 601 IT professionals, conducted by HP, found that 51 percent of respondents were "leaning toward agile," while 16 percent reported adhering to "pure agile" and another 24 percent were pursuing a hybrid (agile and waterfall) approach. Agile management, rather than waterfall, is emerging as the norm.
The most cited reasons for taking up agile included the desire to increase collaboration between siloed departments, improve software quality and shorten overall time to market. Automation, in turn, provides the simplification and scalability needed to keep up with the new features added in each agile sprint.
Test automation is a natural fit for the Internet of Things for a number of reasons:
- The IoT imposes a complex set of testing requirements, combining those of Web and IT services (e.g., cloud-based applications) with those of traditional embedded systems
- The size of the IoT is unprecedented. IT research firm Gartner has estimated that there could be more than 26 billion devices connected to the IoT by 2020. This immense scale makes automation a no-brainer for running tests on everything from home thermostats to the sensors that might be placed in a refrigerated truck
- Connectivity and security require close attention in the IoT. A wide range of protocols, including Wi-Fi, Bluetooth, 4G LTE and ZigBee, is in play for the billions of new devices coming online. Testers must account for a variety of connectivity scenarios (see the sketch after this list) while also focusing on the vast attack surfaces available to cyber criminals.
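To make the connectivity point concrete, here is a minimal sketch of how a single functional check might be parameterized across several link profiles using pytest. The `ConnectivityProfile` values and the `FakeDeviceClient` harness are illustrative assumptions, not part of any real product; in practice the fixture would drive an actual device or simulator.

```python
# Sketch: one telemetry test run across several simulated link profiles.
# All names and numbers here are assumptions for illustration only.
from dataclasses import dataclass
import pytest


@dataclass
class ConnectivityProfile:
    name: str
    bandwidth_kbps: int   # simulated uplink bandwidth
    latency_ms: int       # simulated round-trip latency
    loss_pct: float       # simulated packet loss


PROFILES = [
    ConnectivityProfile("wifi", 20_000, 20, 0.1),
    ConnectivityProfile("lte", 5_000, 60, 0.5),
    ConnectivityProfile("zigbee_mesh", 250, 120, 2.0),
]


class FakeDeviceClient:
    """Stand-in for a real device harness; degrades behaviour with loss."""

    def shape_link(self, profile: ConnectivityProfile) -> None:
        self.profile = profile

    def send_telemetry(self, payload: dict) -> dict:
        # Crude model: lossier links need more retries before an ack arrives.
        retries = int(self.profile.loss_pct)
        return {"acknowledged": True, "retries": retries}


@pytest.fixture
def device_client() -> FakeDeviceClient:
    return FakeDeviceClient()


@pytest.mark.parametrize("profile", PROFILES, ids=lambda p: p.name)
def test_telemetry_upload(profile, device_client):
    device_client.shape_link(profile)
    result = device_client.send_telemetry({"temp_c": 21.5})
    assert result["acknowledged"]
    assert result["retries"] <= 3   # tolerate a few retries on lossy links
```

The same pattern extends naturally: adding a new protocol or link condition becomes one more entry in the profile list rather than a new hand-written test.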
IoT testing requires proper oversight of each of these layers, with sets of tests that can automatically model and vet a huge range of possible scenarios. For example, a tester may wish to simulate an IoT system that features a network connection to a gateway node and a wireless mesh. Many virtual networks may operate in tandem with a physical one to rapidly simulate hundreds or even thousands of IoT nodes. From there, testers can pay particular attention to matters such as emulating the long idle times of IoT sensors and seeing how to optimize their accompanying applications and services for metrics such as energy efficiency, connection strength and speed.
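As a rough illustration of that kind of simulation, the sketch below uses Python's asyncio to stand up a fleet of virtual sensor nodes that idle for long stretches, report once, and record round-trip time. The node count, idle intervals and the stubbed `report()` transport are assumptions made so the example stays self-contained; a real setup would point the nodes at an actual gateway over HTTP, MQTT or CoAP.

```python
# Sketch: many virtual sensor nodes with long idle times reporting to a
# gateway. Timings and the stubbed transport are illustrative assumptions.
import asyncio
import random
import time

NUM_NODES = 1000     # number of simulated sensor nodes
RUN_SECONDS = 60     # how long the simulation runs


async def report(node_id: int) -> None:
    # Stand-in for the real transport to the gateway (HTTP, MQTT, CoAP, ...);
    # simulated network time keeps the sketch runnable on its own.
    await asyncio.sleep(random.uniform(0.01, 0.1))


async def virtual_node(node_id: int, results: list) -> None:
    """Emulate a sensor that idles for long stretches, then reports once."""
    deadline = time.monotonic() + RUN_SECONDS
    while time.monotonic() < deadline:
        await asyncio.sleep(random.uniform(5, 30))   # long idle period
        start = time.monotonic()
        await report(node_id)                        # send one reading
        results.append(time.monotonic() - start)     # record round-trip time


async def main() -> None:
    results: list = []
    await asyncio.gather(*(virtual_node(i, results) for i in range(NUM_NODES)))
    if results:
        print(f"{len(results)} reports, avg round trip "
              f"{sum(results) / len(results):.3f}s")


if __name__ == "__main__":
    asyncio.run(main())
```

Because each node is just a coroutine, scaling the test from hundreds to thousands of simulated devices is a matter of changing one constant rather than provisioning hardware.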
Why IoT must be both automated and holistic
The stakes for effective test automation are high with the IoT because organizations are not only performing software testing but also vetting an entire ecosystem of connections, devices and real-time scenarios. QA teams have already gotten a feel for this new normal in working with mobile devices, which by their nature encounter dynamic conditions such as fluctuating cellular connectivity. The IoT ups the ante by introducing myriad other sensors and services into the picture.
Accordingly, tests cannot stop at simply validating the functionality of software within tightly controlled settings. QA teams may have a good understanding of the hardware and protocols in play but still need additional time and experience to understand how all of these moving parts behave in the volatile environment of the IoT. Lessons learned from working with big data services and analytics could be useful here. That is, in addition to seeing whether a system basically works, testers must also check that it is scalable and reliable under common as well as unusual circumstances.
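One way to frame "reliable under common as well as unusual circumstances" is to run the same reliability checks across a set of named scenarios. The sketch below is a toy, framework-agnostic version of that idea; the scenario parameters, the `run_fleet()` simulation and the thresholds are invented purely for illustration.

```python
# Sketch: identical reliability criteria applied under common and unusual
# operating scenarios. The simulation and thresholds are assumptions.
import random

SCENARIOS = {
    "steady_state":    {"drop_pct": 0.0, "reconnects_per_min": 0},
    "flaky_network":   {"drop_pct": 2.0, "reconnects_per_min": 2},
    "reconnect_storm": {"drop_pct": 1.0, "reconnects_per_min": 50},
}


def run_fleet(drop_pct: float, reconnects_per_min: int, nodes: int = 500) -> dict:
    """Toy simulation: returns delivery rate and worst report delay."""
    delivered = sum(1 for _ in range(nodes) if random.random() * 100 >= drop_pct)
    worst_delay_s = 1.0 + 0.1 * reconnects_per_min   # crude delay model
    return {"delivery_rate": delivered / nodes, "worst_delay_s": worst_delay_s}


def check_reliability(metrics: dict) -> None:
    # The same criteria apply under every scenario, not just the easy one.
    assert metrics["delivery_rate"] >= 0.95, metrics
    assert metrics["worst_delay_s"] <= 10, metrics


if __name__ == "__main__":
    for name, params in SCENARIOS.items():
        metrics = run_fleet(**params)
        check_reliability(metrics)
        print(f"{name}: {metrics}")
```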