AMTSO publishes guidelines for testing of IoT security products
San Francisco, United States – AMTSO, the cybersecurity industry’s testing standard community, announced it has published guidelines for testing IoT security products. Compiled from input provided by testers and vendors, the guidelines cover principles for testing IoT security products, with recommendations for testers on test environment setup, sample selection, testing of specific security functionality, and performance benchmarking.
“There isn’t much information and guidance available yet for the testing of IoT security solutions, as it is a relatively new category. However, independent benchmarking and certification of offerings in this space are needed to give users a reliable point of reference”, says Vlad Iliushin, board member at AMTSO. “The testing of IoT security solutions is quite different from anti-malware testing, as these products need to protect a huge variety of smart devices in businesses and homes, so setting up the test environment can be challenging. Also, as smart devices primarily run on Linux, testers have to use threat samples that these devices are actually vulnerable to in order to make their evaluations relevant. With our guidelines, we addressed these particularities, hoping that they provide valuable guidance and set the direction for fair IoT security testing.”
The guidelines for testing of IoT security products include the following sections:
• General principles: All tests and benchmarks should focus on validating the end result and the performance of the protection delivered, rather than how the product functions on the back end. Thus, the guidelines suggest that no difference in rating should be made between products that use, for example, machine learning or manufacturer usage descriptions, as long as the outcome is the same.
• Sample selection: The guidelines address the challenges of choosing the right samples for IoT security solution benchmarking. For a relevant test, testers need to select samples that are still active and that actually target the operating systems smart devices run on. The guidelines also suggest that, ideally, samples be categorised as industrial or non-industrial, with further separation by operating system, CPU architecture, and severity score.
• Determination of “detection”: IoT security solutions behave very differently from traditional cybersecurity products when it comes to detections and the actions taken; for example, some solutions will simply detect and prevent a threat without notifying the user. The guidelines suggest using threats with admin consoles that the tester can control, or using devices on which a successful attack will be visible. Another alternative is observing the device “under attack” via network sniffing (see the sketch after this list).
• Test environment: Ideally, all tests and benchmarks would be executed in a controlled environment using real devices. However, such a setup can be complex, and if the tester decides against using real devices, it is advised that they validate their approach by running the desired scenario with the security product’s protection disabled and confirming that the attack executes and succeeds. The guidelines also give advice on using alternatives to real devices, such as a Raspberry Pi mimicking a real IoT device, and on creating bespoke, Mirai-like malware samples to test protection against malware never seen before.
• Testing of specific security functionality: The guidelines include advice on the different attack stages, including reconnaissance, initial access, and execution. They outline the option of testing each stage individually versus running through the whole attack chain at once; the choice should be documented in the testing methodology. The guidelines also suggest considering platform-agnostic testing, as many threats today target multiple architectures and affect IoT and non-IoT devices alike.
• Performance benchmarking: The guidelines also provide considerations for performance benchmarking, for example suggesting that testers differentiate between use cases such as consumers vs. businesses, and weigh how critical added latency or reduced throughput is for each protocol, depending on its purpose (see the latency sketch below).
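As an illustration of the network-sniffing approach to determining “detection” mentioned above, the following minimal Python sketch (using scapy) watches the network segment between the security product and the device under test and counts attack traffic that actually reaches the device. The interface name, IP addresses, and port are hypothetical placeholders, and the sketch assumes the tester can capture traffic on that segment; it is not part of the AMTSO guidelines themselves.

```python
# Minimal sketch: observe whether attack traffic actually reaches a target
# IoT device, to infer whether the security product blocked it upstream.
# All addresses, ports, and the interface name are hypothetical examples.
from scapy.all import sniff, IP, TCP

TARGET_DEVICE = "192.168.1.50"   # hypothetical IoT device under test
ATTACKER_HOST = "192.168.1.99"   # hypothetical machine launching the attack
ATTACK_PORT = 23                 # e.g. Telnet, a common IoT attack vector

seen_attack_packets = 0

def count_attack_traffic(pkt):
    """Count packets from the attacker that reach the device's attack port."""
    global seen_attack_packets
    if IP in pkt and TCP in pkt:
        if (pkt[IP].src == ATTACKER_HOST and pkt[IP].dst == TARGET_DEVICE
                and pkt[TCP].dport == ATTACK_PORT):
            seen_attack_packets += 1

# Sniff for 60 seconds while the attack scenario is executed
# (requires sufficient privileges to capture on the interface).
sniff(iface="eth0", filter=f"host {TARGET_DEVICE}",
      prn=count_attack_traffic, timeout=60)

if seen_attack_packets == 0:
    print("No attack traffic reached the device - likely blocked upstream.")
else:
    print(f"{seen_attack_packets} attack packets reached the device.")
```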
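For performance benchmarking, one simple way to gauge per-protocol latency overhead is to time connections to the device under test with the security product’s protection disabled (baseline) and then enabled, and compare the two results. The sketch below, again only an illustration and not taken from the guidelines, measures median TCP connect latency to a hypothetical device and port; a real benchmark would repeat this per protocol and per use case as the guidelines suggest.

```python
# Minimal sketch: measure median TCP connect latency to an IoT device.
# Run once with the security product's protection disabled (baseline) and
# once with it enabled, then compare the medians. Host/port are hypothetical.
import socket
import statistics
import time

def connect_latency_ms(host: str, port: int, samples: int = 20) -> float:
    """Median time (ms) to open and close a TCP connection to host:port."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

if __name__ == "__main__":
    median = connect_latency_ms("192.168.1.50", 80)  # hypothetical device/port
    print(f"Median TCP connect latency: {median:.1f} ms")
```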
The guidelines were created by AMTSO’s IoT work group, with the following contributors:
• Vladislav Iliushin, VI Labs, formerly Avast
• Ilya Naumov, Kaspersky
• Andrey Kiryukhin, Kaspersky
• Evgeny Vovk, Kaspersky
• Armin Wasicek, Avast
• Stefan Dumitrascu, SE Labs
• John Hawes, AMTSO
• Scott Jeffreys, AMTSO
The guidelines were approved by the AMTSO membership in June 2022.
The guidelines for testing of IoT security products, along with other guidelines and standard documents, are available for download from AMTSO.