It was bound to happen sooner or later. For the first time ever, bug hunters used ChatGPT in a successful Pwn2Own exploit, helping the researchers to hack software used in industrial applications and win $20,000.
To be clear: the AI did not find the vulnerability or write and run the attack code. But its successful usage in the bug-hunting contest could be a harbinger of hacks to come.
"This is not interpreting the Rosetta Stone," Dustin Childs, head of threat awareness at Trend Micro's Zero Day Initiative (ZDI) told The Register.
"It is a first step towards something more. We don't think AI is the future of hacking, but it could certainly turn into a great assistant for when a researcher comes up against a piece of code they aren't familiar with or a defense they weren't expecting."
At last week's event in Miami, Claroty's Team82 asked ChatGPT to write a backend module for their remote code execution attack against the Softing edgeAggregator Siemens — software that provides connectivity at the interface between OT and IT in industrial applications.
The humans involved in the exploit, security researchers Noam Moshe and Uri Katz, identified the vulnerability in an OPC Unified Architecture (OPC UA) client. OPC UA is a machine-to-machine communication protocol used in industrial automation.
After finding the bug, and building an OPC UA server to test the exploit, the researchers asked ChatGPT to develop a short code snippet that they used as part of the exploit.
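For context on what building such a test server involves: OPC UA's binary transport starts with a simple client "Hello" handshake message defined in Part 6 of the OPC UA specification. The sketch below constructs that message by hand in Python; the endpoint URL and buffer-size values are illustrative defaults, not taken from the researchers' setup.

```python
import struct

def build_hello(endpoint_url: str) -> bytes:
    """Build an OPC UA TCP 'HEL' (Hello) message.

    Layout per OPC UA Part 6: an 8-byte header ('HEL' + 'F' chunk flag
    + uint32 message size) followed by five uint32 fields and a
    length-prefixed endpoint URL. Buffer sizes here are illustrative.
    """
    url = endpoint_url.encode("utf-8")
    body = struct.pack(
        "<IIIII",
        0,       # ProtocolVersion
        65536,   # ReceiveBufferSize
        65536,   # SendBufferSize
        0,       # MaxMessageSize (0 = no limit imposed)
        0,       # MaxChunkCount  (0 = no limit imposed)
    ) + struct.pack("<i", len(url)) + url
    header = b"HELF" + struct.pack("<I", 8 + len(body))
    return header + body

msg = build_hello("opc.tcp://127.0.0.1:4840")
assert msg[:3] == b"HEL"  # message type identifier
```

In practice researchers lean on an existing OPC UA SDK rather than hand-rolling frames like this, which is exactly the kind of boilerplate the team offloaded to ChatGPT.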
"Because we had to make a lot of modifications for our exploitation technique to work, we had to make many changes to existing open source OPC UA projects," Moshe and Katz told The Register. "Since we were not familiar with the specific server SDK implementation, we used ChatGPT to expedite the process by helping us use and modify the existing server."
The team provided the AI with instructions, though they admitted it took a few rounds of corrections to get usable code. They also made "minor adjustments" to the payload to exploit the vulnerability.
But overall, the chatbot proved a useful tool that saved them time, especially by filling in knowledge gaps, such as learning how to write the backend module, and freeing the researchers to focus on implementing the exploit.
"ChatGPT has the capacity to be a great tool for accelerating the coding process," the duo said, adding that it boosted their efficiency.
"It's like doing many rounds of Google searches for a specific code template, then adding multiple rounds of modifications to the code based on our specific needs, solely by instructing it what we wanted to achieve," Moshe and Katz said.
According to Childs, this is probably how we'll see cybercriminals use ChatGPT in real-life attacks against industrial systems.
"Exploiting complex systems is challenging, and often, threat actors aren't familiar with every aspect of a particular target," he said. Childs added that he doesn't expect to see AI-generated tools writing exploits, "but providing that last piece of the puzzle needed for success."
And he's not concerned about AI taking over Pwn2Own. At least not yet.
"That's still quite a way off," Childs said. "However, the use of ChatGPT here shows how AI can help to turn a vulnerability into an exploit – provided the researcher knows how to ask the right questions and ignore the wrong answers. It's an interesting development in the competition's history, and we look forward to seeing where it may lead." ®
Source: The Register