Will autonomous hacking bots change cybersecurity forever?

Research into the possibility of LLM-based bots autonomously hacking websites has made headlines in recent weeks. We went through the paper to see what the fuss is about and asked Invicti’s Principal Security Researcher, Bogdan Calin, for his views on the implications for the future of cybersecurity.

The security industry is abuzz after researchers published the paper LLM Agents Can Autonomously Hack Websites, describing how they successfully got LLM-backed bots to develop and perform attacks against websites in a test environment. As with any attention-grabbing “Skynet will take over soon” AI story, it’s a good idea to take a closer look at what the research actually shows and where it could realistically lead next. We asked Invicti’s Principal Security Researcher, Bogdan Calin, for his thoughts on the potential for weaponizing AI in this way.

Experiments with LLM-based hacking agents

To quickly summarize the paper, academic researchers from the University of Illinois Urbana-Champaign (UIUC) set up a sandboxed test environment with a realistic vulnerable website containing 15 vulnerabilities of varying complexity. They also prepared ten different LLM-backed agents (bots), two of them built on commercial LLMs (GPT-3.5 and GPT-4) and the rest on open-source models. Each agent was given access to a headless browser to interact with the vulnerable site, function calling to perform various operations on the site, and a set of publicly sourced documents about web hacking and vulnerabilities.

The documents provided to the bots described several vulnerabilities, specifically SQL injection, cross-site scripting (XSS), and server-side request forgery (SSRF), along with general attack methods and approaches—but they deliberately did not include any instructions on how to attack the test website. Through carefully constructed prompts, each of the bots was then instructed to act like a creative hacker to plan and execute a successful attack against the test site.
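To picture the setup, here is a minimal sketch of that kind of agent plumbing: an LLM with function-calling access to a single headless-browser tool and some reference material in its prompt. It assumes the OpenAI Python SDK and Playwright; the names, prompts, and target URL are illustrative rather than taken from the paper, and anything like this should only ever be pointed at a sandboxed site you own.

```python
# Minimal sketch of the agent plumbing described in the paper: an LLM with
# function-calling access to a headless browser plus reference documents.
# All names, prompts, and URLs below are illustrative, not from the paper.
import json

from openai import OpenAI                      # assumes the openai>=1.0 SDK
from playwright.sync_api import sync_playwright

TARGET = "http://localhost:8080"               # hypothetical sandboxed test site you own
DOCS = "<public reference notes on SQLi, XSS, and SSRF would go here>"

# One tool: load a URL in a headless browser and return the rendered HTML
tools = [{
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Load a URL in a headless browser and return the rendered HTML",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

def fetch_page(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        html = page.content()
        browser.close()
    return html[:8000]                         # truncate to stay within the context window

client = OpenAI()
messages = [
    {"role": "system", "content": f"You are a security tester assessing {TARGET}.\n{DOCS}"},
    {"role": "user", "content": f"Plan and carry out an assessment of {TARGET}."},
]

for _ in range(10):                            # cap the agent's tool-calling turns
    response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    msg = response.choices[0].message
    messages.append(msg)                       # keep the full conversation as agent state
    if not msg.tool_calls:
        print(msg.content)                     # the model has finished reasoning
        break
    for call in msg.tool_calls:                # execute each browser action it requested
        args = json.loads(call.function.arguments)
        result = fetch_page(args["url"])
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

The loop is the important part: the model decides which tool calls to make, the agent executes them and feeds the results back, and the accumulated conversation is what lets a model like GPT-4 backtrack and refine its approach across turns.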

Without going into the detailed results: while most of the bots failed in their attempts, the one backed by GPT-4 surprised the researchers by successfully finding 11 of the 15 vulnerabilities, for a headline success rate of 73.3%. Because of the unpredictability of LLMs, each bot was given five tries at each attack; to quote the researchers, “a cybersecurity attack only needs to succeed once for the attack to achieve its goals.”

So, when correctly prompted and provided with access to documentation and external functionality, an LLM-backed bot was able to autonomously plan and perform a realistic attack on a website. This was the big takeaway that got people talking about the beginning of the end of manual penetration testing.

It’s a long way from proof-of-concept to armageddon

While definitely impressive, the research mostly serves to showcase the greatly improved reasoning and function-calling capabilities of GPT-4. Trying to recreate similar hacking bots outside a sandboxed test environment is currently not possible, if only due to OpenAI’s guardrails and terms of use (the researchers obtained an exemption for their work). The paper indicates that GPT-4 succeeded in the autonomous hacking role due to its ability to work with larger prompts and to backtrack across its chain of reasoning to improve with each attempt.

None of the open-source models tested got anywhere close to the far bigger and more advanced GPT-4, suggesting that widespread autonomous hacking based on other LLMs is still a long way away. And even though the past few years have seen rapid advances in AI technologies, the main LLM breakthroughs were only possible due to massive investments by many of the world’s largest tech companies, with Microsoft and Google leading the way.

“One problem with current LLMs is that because they are so big, they are very expensive to train, so you cannot simply expand what you have or build your own model in-house because it’s not cost-effective,” explains Bogdan Calin. “For example, getting to GPT-5 or GPT-6 will cost much more than GPT-4 did, but the capabilities won’t grow in a linear fashion. So even if you pay four times as much for the next-generation model, it won’t be four times more powerful.”

The present and future of penetration testing

Until a genuine breakthrough in LLM technology comes, fully autonomous hacking bots still seem to reside more in the realm of science fiction. Even so, the security industry needs to be ready if (or when) the day comes. “I don’t think LLM agents are a danger right now because you need very powerful and carefully controlled models like those from OpenAI,” says Calin. “But if someone develops a local model with the same capabilities, it’s unbelievable how dangerous this could be. With a local LLM, you don’t have to pay anybody, and nobody can block you. Then, you can run any number of automated agents, give them hacking tasks, and they will operate all by themselves.”

While it’s a big assumption to make, if LLMs are developed that can match at least GPT-4 in autonomous hacking tasks and if these models are sufficiently small, fast, and cost-effective, the entire cybersecurity landscape and industry could change almost overnight. “I think these types of agents could replace some of the pentesters,” says Calin. “For a start, they will be much cheaper. They can work all the time and quickly adapt to changes and new methods. If a new technique or exploit is discovered, you can just update the documentation and all your bots will use the new method. Such LLM agents could also be very dangerous.”

Unlike hacking bots, automated vulnerability testing already exists

Before we get all sci-fi, let’s keep in mind that while autonomous LLM agents may or may not arrive, advances in automating both offensive and defensive application security are being made all the time. Smarter, more effective, and more intense automated attacks are inevitable in the near future, whether or not LLMs are involved. Preparing for them on the defensive side requires not only better reactive measures but also finding ways to identify and close security gaps before the attackers find them. 

Malicious attackers might not care if some of their payloads don’t work, generate noise, or cause harm, perhaps deleting data or crashing the application. They will be happy to use LLM agents if and when they arrive. But for the good guys, automated security testing needs to be safe and accurate. Non-AI tools for automating vulnerability testing already exist and have been around for years. Compared to inherently unpredictable LLMs, advanced web vulnerability scanners are far safer and more reliable.

Instead of relying on a black-box AI model, mature vulnerability scanners incorporate the accumulated expertise of security researchers and engineers into a vast array of checks that probe a running application in a deterministic way. Products like Invicti and Acunetix can even safely exploit many vulnerabilities and extract proof to show that an issue is real. By running such scans on a regular schedule and quickly fixing identified vulnerabilities, you can, in effect, have a continuous process of automated penetration testing to eliminate security flaws before someone exploits them.
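As a rough illustration of what such a deterministic check can look like (this is not Invicti’s or Acunetix’s actual engine, just a simplified sketch with a hypothetical URL and parameters), consider a reflected XSS probe that injects a unique marker into each parameter and flags any parameter that echoes it back unescaped:

```python
# Illustrative sketch of a deterministic scanner check (not any vendor's real engine):
# inject a unique HTML marker into each parameter and flag parameters that reflect
# it back unescaped, which indicates a potential reflected XSS injection point.
# URL and parameter names are hypothetical; only scan applications you own.
import uuid

import requests

def check_reflected_xss(url: str, params: dict) -> list[str]:
    findings = []
    for name in params:
        marker = f"inv{uuid.uuid4().hex[:8]}"           # unique, easy-to-spot probe value
        probe = {**params, name: f'"><i>{marker}</i>'}  # harmless HTML probe payload
        response = requests.get(url, params=probe, timeout=10)
        if f"<i>{marker}</i>" in response.text:         # reflected without encoding
            findings.append(name)                       # the response itself is the proof
    return findings

if __name__ == "__main__":
    hits = check_reflected_xss("http://localhost:8080/search", {"q": "test"})
    print("Parameters reflecting unescaped HTML:", hits)
```

Because the payload and the success condition are fixed, a check like this either produces concrete evidence of reflection or it doesn’t; there is no probabilistic model in the loop that might hallucinate a finding or mangle a request.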

Outhacking the bots

It may well turn out that if malicious hacking bots become a reality in some shape or form, the only way to beat them will be to turn their own weapon against them: smart, automated, and continuous vulnerability testing combined with remediation. And the stakes will be high. Bogdan Calin has no doubt that if such bots arrive, cyberattacks will reach a whole new level:

“Large-scale attacks, like from big criminal organizations or nation states, currently need a lot of manpower and resources. What if they suddenly got lots of these workers that are practically free, perform attacks 24 hours a day, communicate, and immediately react to new targets and weaknesses? If some company makes one mistake in its application, it could be found and exploited almost instantly. That would be unbelievably dangerous.”

Zbigniew Banach

About the Author

Zbigniew Banach - Technical Content Lead & Managing Editor

Cybersecurity writer and blog managing editor at Invicti Security. Drawing on years of experience with security, software development, content creation, journalism, and technical translation, he does his best to bring web application security and cybersecurity in general to a wider audience.