Interview

AI agents allow cybercriminals and nation-state hackers to outsource the "janitorial-type work" needed to plan and carry out cyberattacks, according to Sherrod DeGrippo, Microsoft's GM of global threat intelligence. North Korea is taking advantage.
This includes tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure. That work may not sound as thrilling as plotting and carrying out digital intrusions, but these are real-world criminal use cases for agentic AI that should make threat hunters sit up and take notice.
"Agentic, automated reconnaissance against systems is something that is worth taking a look at," DeGrippo said during an interview with The Register. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity."
An attacker could do this manually, but it would take a lot more time than asking an agent to do it for them. It's "a great example of AI that can be used for regular, standard business purposes and can also be used by threat actors for malicious purposes," she said.
In a blog post published Friday, Microsoft said this is one of the ways miscreants are using AI to improve the efficiency and productivity of their criminal operations, resulting in attacks that are better, bigger, and faster.
Infrastructure management is another area where AI agents come in handy, DeGrippo said. "We have always seen threat actors stand up the infrastructure, whether that means compromising existing legitimate infrastructure and using it for malicious purposes, or purchasing accounts and setting up their own infrastructure to launch threat campaigns," she said.
Microsoft Threat Intelligence has observed North Korea's Coral Sleet - one of the crews behind the fake IT worker scam - using development platforms to quickly create and manage their attack infrastructure at scale, allowing more rapid campaign staging, testing, and command-and-control operations, according to the Friday blog.
"From an agentic AI use case, this is very interesting because you can talk to your malicious infrastructure with natural language and convey your ideas just by expressing them," DeGrippo said.
Both uses save attackers time and effort, and they also lower the barrier to entry for less technically savvy criminals, especially when it comes to building infrastructure that won't be detected by defenders.
"Threat actors will do what works, and they will do what gets them their objective easiest and fastest," DeGrippo said. "And so handing threat actors these really powerful tools is going to allow them to do more of that."
While Microsoft's threat intel team and other security researchers have documented attackers using agentic AI to generate malware, agents' code-writing skills can't yet rival those of humans, DeGrippo told us. But, she added, there are two parts to this use case.
"When we detect AI-generated or AI-enabled malware, traditionally, we have noticed that it's different from regular malware," she said. "It does have those hallmarks that when a human looks at the code, they can say, 'I think this was AI generated.'"
The second part, which involves malware that can call different AI functions and libraries, is the more interesting use, "and more sophisticated," according to DeGrippo.
"Anybody who has a software development background, regardless of if they're developing benign software or malicious software, is thinking about how to better enhance their workflows with AI," she said. "It doesn't matter if you're building the next SaaS CRM application, a phone app to manage your kids' soccer games, or malware that's intended to steal money or do espionage. Anyone developing any kind of code is thinking about how to use an AI assistant to do that." ®
Source: The Register