GenAI Is Now Creating Malware Delivery Vehicles
September 25, 2024 Alex Woodie
As if the threat posed by human bad actors wasn’t enough to keep admins up at night, security professionals are now tasked with preparing defenses against AI actors, including malware created by generative AI. That’s the takeaway from a new report published this month by HP that documents GenAI-generated malware. IBM i security professionals should take note.
The September 2024 edition of the HP Wolf Security Threat Insights Report brought some concerning news about GenAI. While the HP Wolf Security team had seen convincing phishing lures created by GenAI, it had not previously seen actual malicious code written by GenAI.
But that changed during the second quarter, when the HP Threat Research team identified a malware campaign spreading AsyncRAT using VBScript and JavaScript. AsyncRAT is an open-source remote access Trojan (RAT) project used for controlling victims’ computers. Because AsyncRAT is easily obtainable, all the malicious actor needs to do is develop an “infection chain to deliver and install the malware,” the HP Wolf Security research team explains.
And it would appear that GenAI is now helping create that infection chain, according to the researchers, who began their investigation after discovering a malicious email attachment sent to a French user. Once they decrypted the AES-encrypted file using a brute-force password attack, the researchers were surprised to discover that the VBScript and JavaScript inside were not only un-obfuscated, but that the developer had left copious comments throughout the code. (And you thought it was hard to get legitimate developers to document their work.)
“Based on the scripts’ structure, consistent comments for each function and the choice of function names and variables, we think it’s highly likely that the attacker used GenAI to develop these scripts,” the researchers write. “The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.”
This appears to be the first documented case of GenAI being used to help spread malware in the wild (although researchers at Hyas had used GenAI to create malware called BlackMamba to bypass an EDR as an experiment back in 2023). Considering the widespread success that vendors like IBM are having with GenAI co-pilots to help with rote code-understanding and coding tasks, it was likely just a matter of time before someone used the tech for malicious ends rather than maintaining old COBOL code or whipping up some quick SQL.
GenAI seems destined to change the nature of the cybersecurity business permanently. For bad actors, the tech gives them a powerful tool to do things like craft realistic-looking phishing lures (emails, texts, etc.), create malicious bots that impersonate real humans, and build malicious co-pilots that develop tools for distributing malware.
The good news is that security pros also get to use GenAI, for things like combing through vast amounts of information to find anomalies that could indicate a security problem, or acting as a lower-level assistant for coding and administrative tasks.
The cat-and-mouse game in the cybersecurity business will never end, and thanks to GenAI, it’s likely to get more interesting. Security pros working on all platforms, including the IBM i, should keep an eye on the development of GenAI-generated malware.
The risk for IBM i shops, which have benefited from “security through obscurity” for years, may increase once GenAI makes it easier to pull off a ransomware attack: as the low-hanging fruit gets eaten, the bad guys will be forced to go after tougher, more obscure targets. Considering how poorly many IBM i servers are configured, and the number of newly discovered security vulnerabilities in IBM i, that’s a scary proposition.
RELATED STORIES
Ethical Hackers Discuss Penetration Work On IBM i
Summer of IBM i Vulnerabilities