
AI and machine learning: a gift, and a curse, for cybersecurity

The Universal Health Services attack this past month has brought renewed attention to the threat of ransomware faced by health systems – and what hospitals can do to protect themselves against a similar incident.  

Security experts say that the attack, beyond being one of the most significant ransomware incidents in healthcare history, may also be emblematic of the ways machine learning and artificial intelligence are being leveraged by bad actors.

With some kinds of “early worms,” said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, “we saw [cybercriminals] performing these automated actions, and taking information from their environment and using it to spread and pivot automatically; identifying information of value; and using that to exfiltrate.”

The ability to perform these actions autonomously in a new environment relies on “using AI and ML at its core,” said Foss.

Once access to a system is gained, he continued, much malware requires little further user interaction. But although AI and ML can be used to compromise systems’ security, Foss said, they can also be used to defend it.

“AI and ML are something that contributes to security in multiple different ways,” he said. “It’s not something that’s been explored, even until just recently.”

One effective strategy involves user and entity behavior analytics, said Foss: essentially, a system learns an individual’s typical behavior and flags deviations from it.

For example, a human resources representative abruptly running commands on their host is abnormal behavior and might indicate a breach, he said.
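
To make that idea concrete, here is a minimal sketch of the kind of behavioral baselining such a system might perform. The per-user activity features, the scikit-learn IsolationForest model, and the sample numbers are all illustrative assumptions, not details from Foss or VMware Carbon Black.

```python
# Minimal sketch of behavior-based anomaly detection (UEBA-style).
# The feature set and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical daily activity vectors for one HR user:
# [shell commands run, hosts accessed, MB transferred]
baseline = np.array([
    [0, 1, 12],
    [1, 1, 9],
    [0, 2, 15],
    [0, 1, 11],
    [1, 1, 10],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A burst of command execution from an HR host sits far from the baseline.
today = np.array([[40, 6, 300]])
if model.predict(today)[0] == -1:
    print("Flag: activity deviates from this user's baseline")
```

Production UEBA tools build far richer baselines across many signals, but the principle is the same: model normal, then flag the outliers.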

AI and ML can also be used to detect subtle patterns of behavior among attackers, he said. Given that phishing emails often play on a would-be victim’s emotions – stressing the urgency of a message to compel someone to click on a link – Foss noted that automated sentiment analysis can help flag a message that seems abnormally angry.
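
As a rough illustration of the idea, a screening step could score how heavily a message leans on urgent language. The lexicon and threshold below are invented for the example; real deployments would use a trained sentiment model rather than a keyword list.

```python
# Toy sketch of urgency/sentiment screening for inbound email.
# The lexicon and the 0.2 threshold are illustrative assumptions.
URGENCY_TERMS = {"immediately", "urgent", "now", "final", "suspended",
                 "overdue", "act", "verify", "penalty"}

def urgency_score(body: str) -> float:
    """Fraction of words in the message that match the urgency lexicon."""
    words = [w.strip(".,!?:;").lower() for w in body.split()]
    if not words:
        return 0.0
    hits = sum(w in URGENCY_TERMS for w in words)
    return hits / len(words)

email = "URGENT: your account will be suspended. Verify immediately!"
if urgency_score(email) > 0.2:
    print("Flag: abnormally urgent tone, review before clicking links")
```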

He also noted that email structures themselves can be a tell: Bad actors may rely on a go-to structure or template to try to provoke responses, even if the content itself changes.
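
One way to exploit that tell, sketched below under assumed inputs, is to fingerprint an email’s structure rather than its words, so a reused template still matches after names and amounts change. The masking rule and similarity threshold are illustrative, not a documented technique from the interview.

```python
# Sketch of structural fingerprinting: compare the "shape" of two
# emails after masking content, so a reused phishing template still
# matches even when names and details change.
import re
from difflib import SequenceMatcher

def skeleton(body: str) -> str:
    """Replace runs of letters with 'w' and runs of digits with 'd',
    keeping punctuation and line breaks, so only structure remains."""
    body = re.sub(r"[A-Za-z]+", "w", body)
    return re.sub(r"\d+", "d", body)

known = "Dear Alice,\nYour invoice #4411 is overdue. Pay here: http://x.test"
new = "Dear Bob,\nYour invoice #9023 is overdue. Pay here: http://y.test"

ratio = SequenceMatcher(None, skeleton(known), skeleton(new)).ratio()
if ratio > 0.8:
    print(f"Flag: matches a known phishing template (similarity {ratio:.2f})")
```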

Or, if someone is trying to siphon off earnings or medication – a particular concern in a healthcare setting – AI and ML can work in conjunction with supply chain data to point out aberrations.
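
A minimal version of that check might compare today’s dispensing volume against a recent baseline and flag large deviations. The counts and the three-sigma threshold here are invented for illustration.

```python
# Sketch of supply-chain anomaly detection: flag a day's medication
# dispensing count that sits far outside the recent baseline.
from statistics import mean, stdev

daily_dispensed = [102, 98, 110, 95, 104, 99, 101]  # recent baseline
today = 180

mu, sigma = mean(daily_dispensed), stdev(daily_dispensed)
if sigma and abs(today - mu) > 3 * sigma:
    print(f"Flag: {today} units dispensed vs. baseline ~{mu:.0f}")
```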

Of course, Foss cautioned, AI isn’t a foolproof bulwark against attacks. It’s subject to the same biases as its creators, and “those little subtleties of how these algorithms work allow them to be poisoned as well,” he said. In other words, it, like other technology, can be a double-edged sword.

Layered security controls, robust email filtering solutions, data control and network visibility also play a vital role in keeping health systems safe. 

At the end of the day, the human element remains one of the most important tools: training employees to recognize suspicious behavior and mount strong security responses.

Using AI and ML “is only starting to scratch the surface,” he said.


Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.
