AI-driven code security is advancing quickly, but it does not replace the need for real-world validation. This perspective explains why runtime testing remains the foundation of effective application security – and how AI should extend it, not replace it.

The application security community is experiencing one of its most important inflection points in decades.
With Anthropic’s Project Glasswing and the broader Mythos narrative, we’re seeing a wave of innovation that is pushing the boundaries of what AI can do in security, particularly in code understanding, reasoning, and autonomous workflows.
It’s natural that moments like this generate both excitement and apprehension. When technology moves this fast, the question isn’t whether things will change – but how.
At Invicti, our perspective is clear: this is a breakthrough moment but not a replacement moment.
Anthropic’s work, along with that of others in the AI ecosystem, demonstrates meaningful advances in code understanding, security reasoning, and autonomous workflows.
These are important developments. They will improve how developers write code, how security teams triage issues, and how organizations think about software risk earlier in the lifecycle.
At the same time, the security community is analyzing reported Mythos findings and exploring alternative ways to achieve similar outcomes. Respected security leaders are already recommending LLM-based security analysis as a routine new layer of application security.
We are strong believers in this direction. In fact, we’ve been building toward it.
At Invicti, we are not reacting to this shift – we are part of it.
Our platform is evolving with AI as the tip of the spear, not an afterthought.
But here’s the critical difference: We are not replacing what works. We are amplifying it.
Our approach combines AI with a mature, battle-tested runtime engine to ensure that results are not just intelligent, but verifiable and actionable.
As we often say internally: discovery can be probabilistic but validation must be deterministic.
This principle is foundational to how we build.
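To make the principle concrete, here is a minimal sketch of what “probabilistic discovery, deterministic validation” can look like in code. Every name in it (CandidateFinding, llm_discover, runtime_validate) is a hypothetical illustration of the pattern, not an Invicti API: an AI layer proposes likely issues with a confidence score, and only findings that pass a deterministic check are ever surfaced.

```python
# Sketch: probabilistic discovery feeding a deterministic validation gate.
# All names here are hypothetical illustrations, not real scanner APIs.
from dataclasses import dataclass

@dataclass
class CandidateFinding:
    url: str
    kind: str
    confidence: float  # probabilistic score from the AI discovery layer

def llm_discover(target: str) -> list:
    # Stand-in for an AI model proposing likely weak points.
    return [
        CandidateFinding(f"{target}/search?q=", "reflected-xss", 0.72),
        CandidateFinding(f"{target}/login", "sql-injection", 0.41),
    ]

def runtime_validate(finding: CandidateFinding) -> bool:
    # Stand-in for a deterministic runtime proof: send a concrete payload
    # and confirm observable behavior. A fixed threshold here for demo only.
    return finding.confidence >= 0.5

def confirmed_findings(target: str) -> list:
    # Only findings that pass deterministic validation are surfaced.
    return [f for f in llm_discover(target) if runtime_validate(f)]

print([f.kind for f in confirmed_findings("https://example.test")])
```

The design point is the one-way gate: the discovery layer may guess, but nothing reaches a customer without passing the deterministic check.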
Much of the current conversation, including Project Glasswing itself, is centered on code security and static analysis.
That’s still valuable, but it’s only part of the story.
Applications don’t run in source code. They run in production.
And that’s where risk becomes real.
Runtime security answers the question static analysis fundamentally cannot: is this issue actually exploitable in the running application?
This is why Invicti’s runtime validation remains the last line of truth in application security.
In a world of AI-generated code, this becomes even more critical.
AI can write code faster than ever. It can also introduce hidden security debt at scale.
Without runtime validation, organizations risk accumulating vulnerabilities they don’t even know exist.
There is also a growing narrative that fully autonomous, AI-driven pentesting is ready to replace traditional approaches.
The reality is more nuanced: Agentic pentesting is promising, but it is still early.
Our own development experience bears this out.
Some emerging vendors are pricing agentic pentests at $8,000+ per engagement while themselves operating on venture-backed economics.
That’s not a scalable model – at least not yet.
Over time, costs will come down as infrastructure improves. But market dynamics are complex. As long as buyers are subsidized by risk capital, pricing may not follow traditional curves immediately.
This is exactly why we at Invicti believe:
Agentic pentesting needs a DAST foundation, not a replacement strategy.
Our approach reflects that reality. Today’s DAST is extremely powerful, but it has traditionally relied on predefined checks. This makes it highly effective at finding known vulnerability patterns, while more limited when issues require deeper context or multi-step reasoning.
What’s changing is an evolution, not a replacement. We are layering AI-driven agents on top of a proven engine. These agents can generate payloads dynamically, adapt to application behavior in real time, and chain smaller issues into realistic attack scenarios.
The result is not a new system replacing DAST but a meaningful expansion that moves it beyond pattern matching into contextual, attacker-like exploration.
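The layering described above can be sketched as a simple loop: a pattern-matching pass using predefined payloads, followed by an adaptive pass in which an agent mutates payloads based on observed behavior. Every name here (observe, propose_payload, scan) is a hypothetical illustration of the pattern, not a real scanner API, and the agent logic is reduced to one hard-coded mutation for clarity.

```python
# Sketch: adaptive agent pass layered on top of predefined DAST checks.
# All names are hypothetical illustrations, not real scanner APIs.
from typing import List, Optional, Tuple

BASELINE_PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]  # predefined checks

def observe(payload: str) -> str:
    # Stand-in for sending the payload and classifying the app's response.
    return "reflected" if "alert" in payload else "filtered"

def propose_payload(history: List[Tuple[str, str]]) -> Optional[str]:
    # Stand-in for an AI agent mutating payloads based on observed behavior.
    for payload, outcome in history:
        if outcome == "filtered" and "'" in payload:
            return payload.replace("'", "%27")  # retry with URL-encoded quotes
    return None

def scan() -> List[Tuple[str, str]]:
    # Pass 1: pattern matching with the predefined payload set.
    history = [(p, observe(p)) for p in BASELINE_PAYLOADS]
    # Pass 2: context-driven follow-up proposed by the agent.
    follow_up = propose_payload(history)
    if follow_up is not None:
        history.append((follow_up, observe(follow_up)))
    return history
```

The baseline pass alone is what classic DAST does; the follow-up pass is where adaptive, attacker-like exploration extends it without discarding the proven engine underneath.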
By combining intelligent agents with a proven scan engine, we ensure that results remain verifiable, actionable, and free of noise.
This hybrid model is not just more practical – it’s what customers need today!
And importantly, it’s how we avoid one of the biggest risks in AI security: hallucinated vulnerabilities.
As outlined in our internal approach, we separate exploration from validation, ensuring only confirmed issues are surfaced to customers.
In times of rapid change, fundamentals matter.
Invicti brings a mature, battle-tested runtime engine, proof-based validation, and the accuracy and scale that enterprises rely on.
Independent validation reinforces this position. Recent Miercom testing highlights Invicti as a leader in runtime accuracy and performance, underscoring what we’ve always prioritized: precision over noise, and proof over possibility.
It’s also important to zoom out.
The application security market is expanding rapidly, with forecasts pointing to tens of billions in growth over the next decade.
At the same time, this is not a market being disrupted out of existence – it is a market being redefined and expanded.
And in that world, multiple approaches will coexist: static analysis, runtime testing, and agentic pentesting, each playing a distinct role.
The winners will be those who integrate these layers effectively.
At Invicti, our mission has not changed. We make web applications and APIs secure in a way that is accurate, scalable, and trusted.
AI accelerates that mission. It does not replace it.
We will continue to invest in AI-driven capabilities while keeping runtime validation at the core of everything we deliver.
Breakthroughs like Project Glasswing are exciting, as they should be – they move the industry forward.
But progress in security is rarely about replacing one layer with another. It’s about building stronger, more complete systems.
The future of application security will not be purely static. It will not be purely agentic.
It will be grounded in runtime truth, enhanced by AI.
That’s the future we’re building at Invicti. And we’re just getting started.
