Microsoft and MITRE have developed a tool that works like an automated adversarial attack library for those who lack a deep background in machine learning or artificial intelligence, providing insight into how these attacks work and an opportunity to build defenses.
WHY IT MATTERS
In healthcare, AI algorithms analyze vast amounts of medical data to aid clinical treatment decisions, develop personalized treatments, monitor patients remotely and improve the efficiency of clinical trials.
The new integration of MITRE and Microsoft attack knowledge can help healthcare cybersecurity specialists discover novel vulnerabilities within an end-to-end ML workflow, and develop countermeasures that prevent system exploitation.
The tool, Arsenal, uses the MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) framework – a knowledge base of adversary tactics, techniques and case studies for ML systems – and was built on Microsoft’s Counterfit automation tool for AI system security testing.
ATLAS is based on real-world observations, ML red team demonstrations and academic research.
Rather than researching specific vulnerabilities within an ML system, cybersecurity specialists can use Arsenal to uncover the security threats the system will encounter as part of an enterprise network, explained Charles Clancy, senior vice president and general manager at MITRE Labs, in the company’s announcement.
The Arsenal plugin enables CALDERA – a MITRE platform that can be used to create and automate specific adversary profiles – to access Microsoft’s Counterfit library and emulate adversarial attacks and behaviors.
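Conceptually, that means an adversarial-ML attack becomes just another scripted step in an automated adversary profile. The sketch below illustrates the general pattern in plain Python; the class names, the `run_profile` helper and the technique labels are hypothetical stand-ins for illustration, not the actual CALDERA or Counterfit APIs.

```python
# Illustrative sketch of automated adversary emulation against an ML target.
# All names here (AttackStep, run_profile) are hypothetical stand-ins for the
# general pattern, not the real CALDERA or Counterfit interfaces.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AttackStep:
    technique: str                 # an ATLAS-style technique label (illustrative)
    action: Callable[[], bool]     # returns True if the step "succeeded"

def run_profile(name: str, steps: list[AttackStep]) -> None:
    """Execute each step of an adversary profile in order, logging outcomes."""
    print(f"emulating adversary profile: {name}")
    for step in steps:
        outcome = "success" if step.action() else "blocked"
        print(f"  {step.technique}: {outcome}")

# Two toy steps: probe the model's inference API, then attempt an evasion attack.
profile = [
    AttackStep("ML model inference API access", lambda: True),
    AttackStep("craft adversarial data", lambda: False),
]
run_profile("demo-ml-adversary", profile)
```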
“Bringing these tools together is a major win for the cybersecurity community because it provides insights into how adversarial machine learning attacks play out,” said Clancy.
“Working together to address potential security flaws with machine learning systems will help improve user trust and better enable these systems to have a positive impact on society,” he added.
THE LARGER TREND
Creating a robust end-to-end ML workflow that can surface vulnerabilities in ML systems integrated into an enterprise network can be extraordinarily complex.
Many cybersecurity professionals across industries – including healthcare – do not truly understand how the different forms of AI work, said Ittai Dayan, CEO and cofounder of Rhino Health, which offers an AI platform.
Machine learning is a subfield of AI that focuses on the development of algorithms and statistical models that enable computers to improve their performance on a specific task, he told Healthcare IT News this week.
“For example, machine learning algorithms can be used to analyze vast amounts of medical data, such as electronic health records, to identify patterns and relationships that can inform the development of more effective treatments,” he said in the AI primer.
“Machine learning can also be used to develop predictive models that can help healthcare providers to anticipate patient outcomes and make more informed decisions.”
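To make that concrete, here is a minimal, purely illustrative sketch of such a predictive model in Python, trained on synthetic data rather than real patient records; the feature names and outcome rule are invented for the example.

```python
# A minimal sketch of a predictive model of the kind described above:
# logistic regression over synthetic "patient" features. The features and
# outcome are invented for illustration; this is not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic cohort: age, systolic BP, a lab value; outcome loosely tied to them.
X = rng.normal(loc=[60, 130, 1.0], scale=[10, 15, 0.3], size=(500, 3))
risk = 0.04 * (X[:, 0] - 60) + 0.03 * (X[:, 1] - 130) + 1.5 * (X[:, 2] - 1.0)
y = (risk + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Predicted probability of an adverse outcome for a held-out patient.
print("held-out accuracy:", model.score(X_test, y_test))
print("first patient's predicted risk:", model.predict_proba(X_test[:1])[0, 1])
```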
Because machine learning algorithms are designed to automatically improve their performance by learning from data, that reliance on data can be exploited by bad actors motivated by monetary gain, insurance fraud or even the appearance of favorable clinical trial outcomes.
In one study, diagnostic AI that used ML to analyze medical images was fooled by fake images in a simulated cyberattack.
“Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis,” said Shandong Wu, associate professor of radiology, biomedical informatics and bioengineering at the University of Pittsburgh.
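The underlying technique in such attacks is typically a small, carefully crafted perturbation of the input image. Below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to generate such perturbations; the toy model and random "scan" are illustrative stand-ins, not the study's actual setup, and a prediction flip is not guaranteed on a randomly initialized model.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a classic technique
# for crafting adversarial images like those described above. The tiny model
# and random "scan" are stand-ins, not a real diagnostic AI.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier: flattens a 1x28x28 "image" into 2 classes (benign/malignant).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in medical image
true_label = torch.tensor([0])                        # assume class 0 is correct

# Forward pass and loss for the correct label.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel a small step in the direction that increases the loss.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

In a real attack, epsilon would be tuned so the perturbation stays imperceptible to a radiologist while still changing the model's output.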
ON THE RECORD
“As the world looks to AI to positively change how organizations operate, it’s critical that steps are taken to help ensure the security of those AI and machine learning models that will empower the workforce to do more with less of a strain on time, budget and resources,” said Ram Shankar Siva Kumar, principal program manager for AI security at Microsoft, in a statement.
“We’re proud to have worked with MITRE and HuggingFace [AI community and ML platform] to give the security community the tools they need to help leverage AI in a more secure way.”
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.