Researchers from NIST and the University of Michigan have developed a new security framework that deploys digital twins. Thanks to the real-time digital copy, anomalies can be detected more quickly, and machine learning ensures that the data is correctly categorized. A cybersecurity expert still performs the final check when an attack is detected, but machine learning is expected to gradually shrink that role.
Researchers from the National Institute of Standards and Technology (NIST) and the University of Michigan investigated how digital twins can support cybersecurity experts. Manufacturers in the automotive, healthcare, aerospace and other sectors are increasingly making robots and production equipment accessible remotely.
Easier to monitor, but also more vulnerable
Making data from these machines remotely accessible simplifies maintenance and allows cyberattacks to be detected earlier. However, that same accessibility also makes the machines more vulnerable. According to the researchers, some cyberattacks are subtle and therefore difficult to distinguish from ordinary system anomalies.
“I see that manufacturing cybersecurity strategies rely on the security of network traffic, and they don’t always help us see what’s going on in the machine or the process,” said Michael Pease of NIST, a co-author of the study. “As a result, some OT cybersecurity strategies are like viewing the operation from the outside through a window: once attackers are on the factory floor, they can’t be seen.”
Digital twins connected in real time
Digital twins could offer an excellent solution here, according to the researchers. They are closely connected to their physical counterparts, which allows them to collect data and run side by side with them in near real time. So when a physical machine cannot be inspected while it is running, its digital twin is the next best alternative. The researchers used this idea as the basis of the new framework.
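The side-by-side idea can be illustrated with a minimal sketch: a twin model predicts what a sensor should read at each step of a repetitive process, and live readings from the physical machine are checked against it. The `twin_model` function, the heating-cycle formula and the tolerance value are illustrative assumptions, not part of the NIST framework.

```python
# Hypothetical sketch: a digital twin predicts the expected sensor value
# for each step of a repetitive process; readings from the physical
# machine are compared against it in near real time.

def twin_model(step: int) -> float:
    """Idealized twin: expected temperature at a given process step
    (an assumed repetitive 10-step heating cycle)."""
    return 200.0 + 5.0 * (step % 10)

def deviates(step: int, measured: float, tolerance: float = 2.0) -> bool:
    """Return True if the physical reading deviates from the twin."""
    return abs(measured - twin_model(step)) > tolerance

# A reading close to the twin's prediction passes; a drifted one is flagged.
print(deviates(3, 215.2))   # small deviation -> False
print(deviates(3, 230.0))   # large deviation -> True
```

The point of the comparison is that the twin gives a continuously available reference for a machine that cannot be stopped and inspected directly.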
“Because manufacturing processes produce such rich data sets — temperature, voltage, current — and they are so repetitive, there are opportunities to detect anomalies that stick out, including cyberattacks,” said Dawn Tilbury, a professor of mechanical engineering at the University of Michigan and study co-author. To test the framework, the researchers built a digital twin of a 3D printer and fired malfunctions at the device.
Machine learning is trained
Machine learning algorithms were trained on normal operational data, which was then compared with the signals coming from the physical 3D printer. The system then categorized each irregularity as either an expected anomaly, such as an issue with the printer’s cooling fan, or a potential cyberthreat. Potential threats were forwarded to a human cybersecurity expert for the final check.
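The two-stage pipeline described above can be sketched as follows: a detector fitted only on normal operating data flags deviations, and a triage step decides whether an anomaly matches a known, benign signature (such as a fan fault) or should escalate to a human expert. The z-score detector, the thresholds and the `fan_fault` signature are illustrative assumptions standing in for the study's actual models.

```python
import statistics

# Stage 1: "train" on normal operational temperature data only.
normal_temps = [200.1, 199.8, 200.3, 200.0, 199.9, 200.2]
mean = statistics.mean(normal_temps)
stdev = statistics.stdev(normal_temps)

def is_anomalous(reading: float, k: float = 3.0) -> bool:
    """Flag readings more than k standard deviations from normal."""
    return abs(reading - mean) > k * stdev

# Stage 2: triage — known, benign anomaly signatures (assumed values).
KNOWN_ANOMALIES = {"fan_fault": lambda r: 190.0 <= r <= 195.0}

def triage(reading: float) -> str:
    if not is_anomalous(reading):
        return "normal"
    for name, matches in KNOWN_ANOMALIES.items():
        if matches(reading):
            return name            # expected process anomaly
    return "escalate_to_expert"    # unknown: possible cyberthreat

print(triage(200.1))   # normal
print(triage(193.0))   # fan_fault
print(triage(250.0))   # escalate_to_expert
```

The design mirrors the article's division of labor: the model handles the routine classifications automatically, and only unexplained anomalies reach the human expert.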
“The framework provides tools to systematically formalize the subject matter expert’s knowledge on anomaly detection. If the framework hasn’t seen a certain anomaly before, a subject matter expert can analyze the collected data to provide further insights to be integrated into and improve the system,” said lead author Efe Balta, now a postdoctoral researcher at ETH Zurich.
Digital twins further explored
The models should keep improving as the cybersecurity expert feeds conclusions back into them, gradually shrinking the human expert’s role. The researchers say they still want to extend the framework: they want to explore, for example, how it responds to more varied and aggressive attacks, and to apply the strategy to a fleet of printers.
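The feedback loop can be sketched in a few lines: when an expert labels an escalated anomaly, its signature is added to the system's known-anomaly library, so future occurrences are handled automatically instead of escalating again. The label names, the `range`-based signatures and the function names are illustrative assumptions.

```python
# Hypothetical sketch of the expert feedback loop: escalated anomalies
# that the expert explains are folded back into the known library,
# shrinking the expert's role over time.

known_anomalies: dict[str, range] = {}

def classify(reading: int) -> str:
    """Match a flagged reading against known anomaly signatures."""
    for label, band in known_anomalies.items():
        if reading in band:
            return label
    return "escalate_to_expert"

def expert_feedback(label: str, band: range) -> None:
    """The expert's verdict is fed back, extending the known library."""
    known_anomalies[label] = band

print(classify(192))                              # escalate_to_expert
expert_feedback("cooling_fault", range(190, 196))
print(classify(192))                              # cooling_fault
```

After the feedback step, the same reading that previously required a human is resolved automatically, which is the mechanism by which the expert's role shrinks.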
“With further research, this framework could be a huge win-win for both maintenance and monitoring for indications of compromised OT systems,” said Pease.