Behind the Scenes: How Invicti Built the Security Engine of the Future
2025 has been an exciting whirlwind of activity for Invicti Security’s research team. With the recent announcement of the Invicti Application Security Platform, we can now reflect on what we’ve been working on behind the scenes: combining two great engines into our best work yet, testing our new engine in a crucible of vulnerable apps, and addressing the transformative power of Large Language Models, both offensively and defensively.
Your Information will be kept private.
Begin your DAST-first AppSec journey today.
Request a demo
One Engine to Rule Them All
Our recent launch marked a significant achievement for Invicti, with the successful integration of Invicti Enterprise (formerly known as Netsparker Cloud) and Acunetix Premium into the unified Invicti Application Security Platform. We started the process with a detailed gap analysis, assessing each engine’s strengths to create the ultimate alloy: the speed and accuracy of Acunetix with the extensive checks and security proofs of Netsparker.
We’ve expanded on a familiar architecture that mirrors that of a web browser like Chromium. The engine comprises an ultra-fast native core that provides network interception, HTTP handling, and intelligent state tracking that allows us to maximize coverage of APIs. Security checks are built on top of this core, extending the capabilities much like the JavaScript used in web apps. We augment this with a new (and optional) scanner AI-service to provide additional intelligence, as well as a browser driver to aid detection in modern single-page applications.
Security Check Colosseum
To ensure that our new engine was competitive, we curated a set of intentionally vulnerable test apps and then set the engine loose in the arena. These opponents were carefully selected to highlight different challenges: headless apps only exposing a narrow API, apps tuned to showcase human rather than automated pentesting, apps bristling with arrays of exploits, and modern single-page apps designed to challenge our crawling technology. We watched month-over-month as the engine got stronger, like a gladiator wielding a bronze spear (stronger than tin and copper separately).
Example improvements were in DOM XSS detection, finding new vulnerabilities encoded in URL fragments, SSRF vulnerabilities capable of extracting AWS EC2 metadata in servers that blindly made requests on behalf of clients, JWT auth bypass, and GraphQL security assessment improvements.
Our new engine ultimately emerged victorious, finding roughly 60% more vulnerabilities in this competitive test environment compared to our previous-generation baseline, while running approximately 6.5% faster than our market-leading predecessor.
Honing the Edge
We have continued to improve core functionality, such as fast responses to emerging CVEs, and have expanded our proof-of-exploit capabilities dramatically. We have added over 25 critical/high detections since November 2024, including several that have featured prominently on CISA’s Known Exploited Vulnerabilities Catalog, such as the high-profile CVE-2025-53770 (SharePoint Authentication Bypass) and CVE-2025-47812 (Wing FTP Server RCE). As an example, the SharePoint attack is a three-phase detect/exploit/validate sequence that makes use of a base64-encoded, gzip-compressed serialized data payload that, when executed, performs a mathematical calculation. We reduce false positives by preflighting and ensuring the value does not appear before the check, including additional validation markers specific to our engine.
Our rapid response to security issues has been key over the last six months, with the team responding rapidly to the ever-changing security landscape, including responses to Kubernetes IngressNightmare, Next.js’s auth bypass, CrushFTP, CyberPanel, SimpleHelp, Vite, CraftCMS, Cleo Harmony/VLTrader, Palo Alto PAN-OS, Citrix, Struts, and Sitecore CMS to name a few.
We have also enhanced our active detection techniques that go beyond simply looking for patterns in responses. Our Multi-Vector Authentication Bypass checks have expanded from JWTs to non-Bearer authorization headers, improved detection of weak ViewState validation keys, and added context-aware attacks to OAuth authentication testing.
XSS detection has been enhanced with polyglot payloads that increase the efficiency of the engine. Rather than individually sending multiple requests with XSS designed for different contexts, we instead send a single “golden payload” that significantly enhances our operational efficiency. We’ve also strengthened our ability to detect tricky quote escaping, double URL encoding, and whitespace handling for non-HTTP schemes, all in the service of making sure our checks reach those hard-to-reach areas of an application.
LLMs & Security: The Double-Edged Revolution
Large Language Models have continued to impact the world of security, both by opening up new possibilities for detection, but also enabling new applications leveraging LLMs to be built and brought to production faster than ever before.
You Gotta Crawl Before You Can Exploit
Oftentimes, a false negative when detecting a security vulnerability is simply because the engine didn’t wander into the particular hallway of the web application that contained the unlocked door. We’ve enhanced our crawler technology to minimize the number of validation errors by making it context-aware when filling out HTML forms, rather than using hard-coded values or limited heuristics. For example, a context-aware form may be able to fill in a form in a language unknown to the engineering team, or correctly predict that a phone field will reject an entry that lacks an international country-code prefix. By enhancing the likelihood of a successful form submission, we are able to crawl more deeply into the application, resulting in more vulnerabilities.
Attacking LLM Applications
Invicti has also enhanced the Invicti Application Security Platform with new checks designed to find security vulnerabilities in apps built on top of LLMs. Our research team has identified several classes of vulnerabilities that our new engine can detect.
LLM Command Injection is a new twist on a classic vulnerability: trusting inputs and executing arbitrary commands on behalf of the attacker. We include a variety of payloads, testing against multiple LLMs and guardrail systems to maximize detection. We prefer the use of payloads that perform network lookups, as LLMs can actually “fake” the output of RCE in a convincing way, confusing scanners that do not have out-of-band detection sensors.
We now detect Server-side Request Forgery (SSRF) through new non-conventional methods. When LLMs are granted access to internal APIs or external services, malicious prompts can trigger unauthorized requests to internal systems, potentially exposing sensitive data or enabling lateral movement within networks.
Our LLM Insecure Output Handling checks for applications that fail to properly sanitize LLM-generated content before using it in other contexts. Our implementation includes both JavaScript execution detection and HTML attribute injection testing. Insecure output handling in LLMs can be used as a building block for an XSS attack that exfiltrates data accessed from the DOM, such as authentication cookies.
Tool Usage Exposure affects LLM systems with access to external tools and APIs. We identify tool enumeration through LLM responses and validate the possibility of tool parameter manipulation. Poorly designed integrations can allow attackers to manipulate the LLM into making unauthorized API calls or accessing restricted functionality. We expect agentic LLMs with access to powerful tools to be a growing risk through 2025 and beyond. We have even had some interesting surprises when using these techniques against software we use internally.
Prompt Injection attacks have evolved beyond the Do Anything Now (DAN) jailbreaks of yore. Our framework tests multiple prompt manipulation techniques, including role manipulation, direct override, context switching, and hypothetical framing.
System Prompt Leakage poses significant intellectual property and security risks. Attackers can often extract the system prompts that define an LLM’s behavior, revealing business logic, API endpoints, and security configurations that should remain confidential. We leverage multiple techniques, including checks that span multiple messages, extending the content window in which final requests are evaluated.
Finally, we built LLM Fingerprinting that detects the general presence of LLM APIs or chatbots, and identifies the specific LLM being used, which could be used by an attacker to launch future targeted attacks based on known model-specific vulnerabilities or behaviors. Our implementation includes pattern matching for OpenAI, Claude, Gemini, and other major model providers. Even knowing about “rogue” LLM applications is valuable to a CISO who is concerned about attackers causing resource-heavy operations on LLMs leading to service degradation or high costs.
Sharpest We’ve Ever Been
Invicti’s Security Research team, in partnership with Engineering, has positioned the company to take on the next generation of security challenges. In a security landscape with more code being produced than ever before, and more vulnerabilities following, we are proud to build great tools that help keep software safe. We look forward to the remainder of 2025 and the great work that is yet to come!
