Vibe talking: Dan Murphy on the promises, pitfalls, and insecurities of vibe coding

Vibe coding is one of the hottest trends in software right now, promising to radically change how we build apps by using natural language instead of traditional programming. But beyond the buzz, what does it actually mean and what are the risks?

We sat down with cybersecurity veteran and Invicti’s Chief Architect, Dan Murphy, to unpack what vibe coding is, where it’s headed, and why everyone involved with application security should be paying very close attention.

Dan Murphy, Chief Architect, Invicti Security

What is vibe coding, really?

Dan, for those only hearing the term now or simply confused by the current hype—what’s vibe coding to you?

Dan Murphy: To go back to the source, OpenAI co-founder Andrej Karpathy was the one who coined the term. We’ve had AI coding assistants for a while, but vibe coding is different: it’s about letting the AI take the wheel. Instead of typing code line by line and getting suggestions, you just say in English, “Let’s create a React app that does A, B, and C, and make it look like X, Y, and Z,” and it gives you the code. As Karpathy himself put it, the hottest new programming language is English—and that nails it, really.

The allure of vibe coding: Speed and democratization of development

As you said, AI code assistants have been pretty much accepted as routine development tools. What’s the great appeal of vibe coding compared to “regular” AI-assisted development? Is it really such a big deal or just another hype wagon?

Dan Murphy: I do think that it’s a big deal, and while the hype often exceeds the reality, I think we’re going to see a big impact from it. In a way, vibe coding has democratized software development. So we’re going to see a very viable path for more people, not only software engineers, to create an app that works and looks good and feels good and passes initial scrutiny—at least if you’re only interested in shipping something fast.

I believe it’s all going to accelerate and things will be coming to market much more quickly, which is not without caveats. To revisit a favorite metaphor of mine, we have supercharged the engine of the car without upgrading the brakes. Our traditional checks—code reviews, humans looking over things—aren’t scaling at the same pace. That imbalance is going to create some interesting problems.

But it’s still not like anyone can just push a button and get an app, right? It’s like that eternal promise of no-code tools where anyone can be a developer. The barrier to entry is now much lower, but you still need to know what you’re doing and what you’re asking for.

Dan Murphy: It’s not going to replace traditional coding anytime soon, but it’s definitely a big shift. And unlike no-code or low-code tools, it’s generating real code under the hood—you’re just solving problems at a higher level, which can be freeing but gets tricky in other ways, including security.

Ultimately, it’s still the difference between a skilled craftsperson using the tool and someone just tinkering. There’s value in that skill. It’s like that quote from Kent Beck: 90% of my skillset has dropped in value, but the remaining 10% is now astronomically more valuable.

Right now, vibe coding works great for senior people who already know how things work and know what to prompt for. If you don’t know the right questions to ask, you won’t get good results. 

Where do you see vibe coding making the biggest difference today?

Dan Murphy: It’s great for reducing the initial activation energy to get something moving. Say you’re not an expert in a particular tech stack—you can still get over that first hurdle and make some real headway quickly.

I’ve vibe-coded occasionally and it’s a cool way to work, but you hit limits fast and at some point, the returns start to diminish. In my opinion, the tech’s great right now for scaffolding and initial builds, but it’s less impressive when you need to enhance big, established codebases where you have to know all the interconnections. It thrives on new apps and smaller, less complex projects. That’s where it shines today.

The challenges: Fragility and hidden complexity

What kinds of limitations have you seen so far with vibe coding?

Dan Murphy: For a start, you can quickly get to a point where your context window fills up—literally and figuratively—and you get stuck. The assistant starts messing things up again and again. And in the process it imports 300+ weird dependencies before doing anything else.

A more general limitation goes back to that skill level because the result is only as good as your prompting. If you don’t provide enough detail, something you’d expect to be a simple operation can be done internally in some weird and insecure way—but then, if you’re only observing it externally, you may never know the difference.

What’s your software architect’s take on vibe coding? Designing the internal structure of applications is your job, yet here we’re getting complete apps that are really black boxes because the developer doesn’t know or care what’s inside or how it works.

Dan Murphy: My counter, as someone in security, is that security flaws ultimately come down to a single line of code: a weak brick in the wall. If your code is only 99% secure, that’s not good enough. Systems are a web of tiny details, and if even one thing is off, it compromises everything.

In terms of architecture, some of my best experiences with vibe coding have actually been when I’ve got detailed internal guidelines or architectural decision records and I feed them into the prompt. That can work out really well because you have all those things in the context window and they’re referenced. But I do feel that, ironically, vibe coding has heightened the importance of innovation versus rigid architecture, and has also made fast following pretty cheap.

Rapid innovation and prototyping are one thing, but what about the rest of the application lifecycle? What if this black box goes into production and after a while you realize you need to fix bugs, add new features, or connect to some new external system? How do you maintain something if nobody knows how it works?

Dan Murphy: I do believe there’s going to be a whole new class of vibe rescue gigs, where an engineer gets hired into a project and takes a look at the code base and realizes it’s the fever dream of an LLM from four or five years before. And a lot of that work will involve the use of a design pattern that I jokingly call the torch pattern: burn it to the ground and rebuild. We’ve also seen vibe coding advocates seriously suggest that once something isn’t working, you should just nuke it and reimplement instead of fixing.

The security dimension: Risks and blind spots

You mentioned the security risks of running an app that does unexpected things under the hood. I’ve seen someone brag that their tool was vibe-coded in a few days and not only works great but also passes all the SAST scans—clearly a snub to security naysayers.

Dan Murphy: I’m actually less worried about the issues that are detectable by SAST and more about the runtime and contextual ones. 

For a great example of this, it’s not uncommon for test apps to be built and deployed over plain, unencrypted HTTP rather than HTTPS, on the assumption that securing them in production is somebody else’s problem. But what if you don’t know that and you vibe up a little web app that runs locally over plain HTTP, works as expected, and looks beautiful? If that goes directly into production without something like an Nginx reverse proxy to handle the HTTPS part, you could have some serious security issues.
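To make the reverse proxy setup concrete, here is a minimal sketch of the kind of TLS-terminating Nginx configuration Dan alludes to. The hostname, port, and certificate paths are illustrative assumptions, not part of any real deployment:

```nginx
# Minimal TLS-terminating reverse proxy sketch (hostname, port, and
# certificate paths are hypothetical examples).
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/certs/app.example.com.pem;
    ssl_certificate_key /etc/ssl/private/app.example.com.key;

    location / {
        # The vibe-coded app still speaks plain HTTP, but only on localhost;
        # Nginx handles encryption for all external traffic.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Redirect any plain-HTTP traffic to HTTPS.
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}
```

The point is that the security property lives in this operational layer, not in the app’s own code, which is exactly why a SAST scan of the app alone would never flag its absence.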

When you just have the isolated app, it’s easy to say, “That won’t show up on a SAST scan.” Sure it won’t—if you just have an app, it’s fine by itself and out of context. But that bigger operational context once it’s in production is where your actual risk lives.

With all of the accelerated development, we’ll have many more apps coming to market and I do think there will be a security lag. Until we catch up with that contextual security oversight, whether it’s with DAST or other automated tools, I think there’s going to be a real gap where we’ll be seeing a lot more vulnerabilities.

You mentioned these tools can pull in lots of dependencies, so supply chain security is probably going to be a massive headache with vibe coding, right?

Dan Murphy: Absolutely, we have seen some pretty weird stuff happen over the last couple of years with supply chain attacks, even without the AI element. We have seen dubious entities target psychologically vulnerable maintainers of open-source projects and attempt to serve up code that had backdoors. We have seen PyPI packages change hands and turn from helpful to hostile. We’ve seen people typosquatting npm package names, so if you do npm install and you spell something wrong, your app still works, but now you’re potentially pulling in something nasty.

I could totally see this happening and even accelerating with vibe coding. AI hallucination of package names is absolutely a confirmed thing, so you could have people checking for the latest hallucinations and creating those packages on the fly. 

We’re talking about a whole class of attacks that take advantage of that implicit trust in the stuff you get back from an LLM. So the tool might say you should totally install this package that maybe didn’t even exist a few moments ago but does now. The developer doesn’t really know what that package is or even that it’s being pulled in, so they run it and it all works and still does the right thing—except now it maybe has a backdoor, is quietly running a web shell, or is serving malware to users.
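One simple defense against both typosquatted and hallucinated dependency names is to vet anything an assistant suggests against a human-curated allowlist before installing it. This is a minimal sketch of that idea, assuming a hypothetical `APPROVED_PACKAGES` list; the package names shown are examples only:

```python
import re

# Hypothetical allowlist of packages vetted by a human; names are examples.
APPROVED_PACKAGES = {"requests", "flask", "numpy"}

def vet_dependencies(requested: list[str]) -> list[str]:
    """Return the requested package names that are NOT on the approved list.

    Anything an AI assistant (or a typo) introduces that hasn't been vetted
    gets flagged for human review instead of being installed blindly.
    """
    flagged = []
    for name in requested:
        # Normalize the way PyPI does: case-insensitive; '-', '_', '.' equivalent.
        canonical = re.sub(r"[-_.]+", "-", name).lower()
        if canonical not in APPROVED_PACKAGES:
            flagged.append(name)
    return flagged
```

Real-world tooling (lockfiles, private registries, dependency firewalls) does this more thoroughly, but even a crude gate like this breaks the “install whatever the LLM said” reflex.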

What about data privacy—is that still an issue? After the initial uproar, companies seem to have moved on to business as usual when it comes to AI-assisted development.

Dan Murphy: I think every major company that’s producing code now has some sort of AI policy and the concept of sanctioned versus unsanctioned AI use. You want to make sure that you at least know your risk and have a good idea of where your secrets could potentially be ending up. In a lot of these tools, the paid tiers will typically have a policy control where you can opt out of sharing your data for training.

That said, control of your proprietary data always needs to be considered when building with cloud AI/ML engines. When you’re vibing away in your tool of choice, you’ve got to remember all of that code is going somewhere to be used inside an LLM context window, and it takes just one mistake to reveal something you shouldn’t. So if somebody checked an API key into a project even once, it probably ended up in some LLM training set at some point, especially if devs were using the tools without IT supervision and approval—and that secret could be leaked in somebody’s future code result.

Before all the AI, if you didn’t check your code in, it stayed local. But now it’s all going on the internet. It’s like accidentally pasting your bank password into the Google search bar: maybe not an immediate risk, but you never know what algorithm is ogling your password and where it will end up. Now imagine the same kind of thing happening at scale with company secrets worldwide. Millions of times per day.
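This is why many teams run a secret scan before code leaves the machine, for example as a pre-commit hook. The sketch below shows the core idea with two deliberately simplified patterns; production scanners such as gitleaks or truffleHog ship hundreds of far more precise rules:

```python
import re

# Illustrative, simplified patterns; real scanners use many more rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic API key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in the given source text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```

Catching a hardcoded key locally, before it reaches a repo or an AI assistant’s context window, is far cheaper than rotating it after it has leaked.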

The future of vibe coding and vibe AppSec

To wrap things up, how do you expect vibe coding to change application development and security in the long run?

Dan Murphy: For a start, the existing AI-powered trend of increasing developer productivity will only grow with vibe coding. If nothing else, there will be more code getting pumped out more quickly—and if you have twice the code, that usually means twice the security bugs to deal with just because of the greater volume. If security doesn’t find a way to keep up, that could mean a period of more vulnerabilities in production because if somebody has a killer app that they created in days rather than months, they’re not likely to hold back the release for security concerns.

I do believe that securing all those black-box vibe-coded applications will need more focus on automation and especially on the dynamic testing side to catch those contextual security issues that might only show up when the “pure” app is dropped into prod. Sure, running your SAST and getting the AI to fix any reported issues is great, but runtime tools like DAST are probably the best way to automatically check if that killer app of yours can actually get hacked once deployed.

Vibe coding itself is not the bad guy. It’s the erosion of skill and ability to understand how our software systems work that could be dangerous for security.

—Dan Murphy, Chief Architect, Invicti Security

In the longer term, there could be some skill erosion where engineers get so used to getting ready results that they won’t always know or understand all the layers that come below, including all the security layers. There is no limit to human ingenuity, so I have no doubt people will learn and adapt and eventually find ways to produce secure software within this new paradigm, but we risk learning those lessons the hard way: on the back of applications being exploited in production.

Zbigniew Banach

About the Author

Zbigniew Banach - Technical Content Lead & Managing Editor

Cybersecurity writer and blog managing editor at Invicti Security. Drawing on years of experience with security, software development, content creation, journalism, and technical translation, he does his best to bring web application security and cybersecurity in general to a wider audience.