Large language models are rapidly reshaping both web applications and the threat landscape. This article examines how web LLM attacks exploit prompts, tools, and APIs to compromise data and systems, and why Invicti’s DAST with dedicated LLM checks enables organizations to detect and manage these new AI-driven risks.
Enterprises are racing to embed large language models (LLMs) into customer-facing and internal applications. Chatbots manage customer interactions, AI assistants analyze content, and intelligent interfaces and agents automate routine tasks. As adoption accelerates, the attack surface expands beyond traditional web vulnerabilities to include model behaviors, tools, and integrations.
Similar to the way that server-side request forgery can indirectly expose unseen backend systems, web LLM attacks exploit the model’s extended access and trust. A malicious prompt or manipulated data source can now cause an AI-backed application to execute commands, query internal APIs, or expose sensitive data.
For security leaders, this is more than a technical issue – it’s a major business risk. A single compromised chatbot or misused LLM integration can damage customer trust, trigger compliance issues, and lead to real financial and reputational impact.
A large language model is an AI system trained on extensive datasets to generate human-like text. LLMs like GPT, Claude, and Gemini are the engines behind modern AI assistants and applications. Organizations now rely on these models for:
By itself, a large language model can only process language, so all practical LLM-backed applications include additional modules, tools, and integrations to perform non-language operations and interface with external systems. And each integration increases the potential attack surface – especially when models interact with business APIs, databases, or user-generated content.
Prompt injection remains the most common and dangerous LLM exploit. Attackers insert malicious instructions into prompts to override the model’s intended behavior. Prompts can be injected either directly or indirectly:
Such attacks allow adversaries to manipulate an LLM-backed application into leaking confidential data, executing code, or accessing connected systems. Research has demonstrated that even indirect prompts hidden within PDFs or web content can trigger dangerous actions when parsed by an LLM – read the Invicti e-book Prompt Injection Attacks on Applications That Use LLMs to learn more.
Invicti DAST includes dedicated security checks for LLM prompt injection and LLM prompt leakage to detect cases where model inputs or responses expose sensitive data or demonstrate unsafe context handling.
Modern LLM applications can perform actions beyond generating text. Through function calling and tool use, they can access APIs, databases, or connected systems. Without strict controls, this creates conditions for excessive agency, where the LLM acts on unintended commands or executes high-privilege operations.
Attackers can chain prompt manipulation with API misuse to escalate privileges, invoke hidden functions, or alter stored data. In a way, these risks mirror supply chain vulnerabilities, where a compromised integration can compromise entire workflows.
Invicti’s LLM command injection and LLM tool usage checks identify such issues by simulating malicious prompts that attempt to call restricted functions or APIs through model interfaces.
Just as with more traditional web applications, unsafe handling of model output can introduce client-side risks like cross-site scripting (XSS) and cross-site request forgery (CSRF). When LLM responses are dynamically rendered in browsers or downstream systems without sanitization, embedded scripts or HTML may execute.
Invicti’s check for LLM insecure output handling detects these vulnerabilities, ensuring AI-generated responses are properly encoded and sanitized before being used in web interfaces or API responses.
Data used to train or fine-tune LLMs can also become an attack vector or open up attack avenues. Poisoned or manipulated data may introduce hidden instructions, bias, or exploitable behavior. Likewise, when confidential data is fed into an external LLM without strict control, it can resurface in responses or leak through inference, resulting in compliance as well as security risks.
Tampering with training data can be extremely hard to detect because it affects the responses and logic of a large language model but often doesn’t result in anything you would call a vulnerability.Â
LLM vulnerabilities are no longer theoretical. Security researchers have shown that seemingly harmless interactions can trigger data exfiltration through prompt injection or unauthorized function execution through API misuse. Among many other things, attackers can:
The business impact can be significant, as with any other data breach. The risk is compounded by the rapid adoption of LLM-backed features into all sorts of systems and contexts without a full understanding of the potential security implications.
Due to their unprecedented complexity and often proprietary nature, LLMs are essentially black boxes. Testing an LLM-enabled web application thus requires the same black-box, outside-in perspective that dynamic application security testing (DAST) provides. Invicti’s DAST engine takes full advantage of this by applying proven testing methods in the context of LLMs to automatically scan for dangerous patterns and behaviors.
Security teams should:
By integrating LLM-specific scanning checks, Invicti enables teams to detect and assess AI applications with the same level of precision and automation used for its web frontend and API discovery and security testing.
The LLM model detected and LLM response pattern detected checks in Invicti DAST help teams inventory and monitor where LLMs are in use across applications, ensuring visibility into systems that may process or expose sensitive data.
Invicti’s LLM scanning capabilities extend traditional AppSec into AI domains by identifying unmanaged apps, prompt-based vulnerabilities, insecure tool usage, and improper output handling within one unified DAST workflow.
LLM security is far more than an AppSec challenge – it’s a cross-disciplinary effort that spans data governance, AI development, and security operations. Enterprises must continuously monitor AI interactions for anomalies, integrate security checks into CI/CD pipelines, and extend AppSec as well as InfoSec posture management to cover AI systems.
The AI boom has seen LLM-backed software appearing in all sorts of systems and contexts, making it imperative to incorporate LLM risks into a wider cybersecurity program. Invicti’s application security platform provides automated scanning, correlation, and reporting across web applications, APIs, and LLM-powered services to help security and DevSecOps teams manage AI-related risks alongside their existing AppSec workflows.
Web LLM attacks represent the next phase of web security evolution, with the added twist that malicious natural language can now be as much of a threat as malicious code. As AI becomes embedded in the digital enterprise, protecting LLM systems is as critical as defending any other application layer, if not more so.
Proactive testing using LLM-specific DAST checks on the Invicti Platform allows organizations to expand their AppSec efforts to also cover finding exposed LLMs and testing them for vulnerabilities to fix any issues before they are exploited. Because these checks are performed as part of a comprehensive application and API scanning process built around Invicti’s proof-based scanning, the result is improved all-around security and visibility – including LLMs.
Learn more about the Invicti DAST scan engine, including LLM-specific checks.
They are exploits targeting applications integrated with large language models, typically through prompt injection, tool usage abuse, or insecure APIs. Poisoned training data may pose an additional risk.
Attackers can craft natural-language commands that manipulate an LLM into revealing data or performing unintended actions, such as calling sensitive APIs or executing malicious code.
In many business contexts, LLMs do more than provide information and often perform API calls and other actions on behalf of users. If given too much agency and too little oversight, vulnerable LLMs can allow attackers to misuse APIs to escalate privileges, leak data, or chain vulnerabilities.
By enforcing API authentication, sanitizing outputs, minimizing sensitive data exposure, and red-teaming AI integrations with adversarial prompts. These actions and policies should be combined with a systematic program of automated discovery and testing that covers all applications and APIs, including LLM-backed systems.
No, input restrictions such as filtering should never be the only line of defense. Similar to XSS filtering, attackers may be able to bypass prompt restrictions through various jailbreaking techniques. Security must be embedded at the architectural and API level, and enforced through systematic scanning.