The good, bad and ugly of vibe coding -- and where it wins
We’ve been here before.
Those red squiggly lines under misspelled words started as simple dictionary lookups. Then they evolved through algorithmic natural language processing into machine learning. And now we dress the same thing up in even more clothes fit for the emperor and call it AI — even though it’s still massive matrices and algorithms under the hood.
“Machine learning” sounds like dictionary lookups on steroids and “artificial intelligence” sounds like magic. It’s still the same concept, just at a different scale, with different processing requirements.
We didn’t panic when spellcheck arrived; we learned to use it as a tool.
Sure, there was some grumbling about students getting unearned help with their writing, and professors worried that grammar checkers were making people lazy writers. The student didn’t earn the passing grade for grammar; the computer did! It sounds silly now, but the resistance was real.
The current “AI revolution” is really just the latest evolution in a decades-long progression of increasingly sophisticated assistance. A simple dictionary check became a grammar checker, which ended up offering suggestions for style, and now we have machines that analyze the correctness of our writing and assertions and offer their own interpretations of whatever we came up with.
Code error detection follows the same path. Once upon a time we used actual compilation to catch syntax errors, waiting for compilers such as Jikes to build the code, find the problem and report it. The tooling was shaped by the need for a rapid response cycle, so we could see what we were doing more quickly.
But today, models are sophisticated enough to spot errors before they even hit the compiler. Tools suggest clearer names or types, and sometimes even invent features we didn’t realize we needed, as we’re recording what we think the code should be. This is a genuine strength, even if it feels uncomfortable at first.
Nevertheless, it’s the same fundamental progression: compilation errors begat better linting, then static analysis, and now we have LLMs that review our code, often before we’re even done writing it.
This is nothing new. We’ve just spray-painted it as if it were.
AI-assisted development: Not magic, just sophisticated
AI-assisted development feels informal and intuitive, but it actually requires deeper technical judgment than non-AI coding if the intent is to create something that lasts.
The newly generalized term “vibe coding” refers to the process of essentially talking to an LLM, telling it what to do and recording the results. That sounds dismissive, like you’re just winging it with whatever AI suggests.
But done properly, it’s the opposite. Vibe coding can leverage AI as a sophisticated thinking partner while maintaining programming rigor. The “vibe” isn’t about being casual; it’s about finding that flow state where human insight and AI capability complement each other seamlessly.
Most discussions about AI in development get trapped in catastrophic, binary thinking. Either AI is a miraculous productivity multiplier, or it’s going to replace all developers (or both). These conclusions ignore the nuanced reality: AI is already everywhere in our development environment, and the question isn’t whether to use it but how to use it thoughtfully and with skill.
The principles of good tool usage remain the same: assist human judgment and don’t replace it. Synergy is king.
Good vibe coding: AI as a willing collaborator
Good AI usage means developers can bounce ideas off a partner that won’t judge them or get tired. It’s perfect for covering the tedious but necessary work nobody wants to do.
Here’s an example: documenting public methods whose names nudge readers in the right direction but that lack thorough documentation. Nobody wants to write that documentation, especially its author, who already understands it. But now the author can ask an AI to analyze it and summarize it. At the very minimum this establishes a starting point for refinement, and the author may even catch things missed in the implementation when the AI’s summary gets the requirements wrong.
In this way, vibe coding becomes a collaborative process, and an example of good AI usage. The AI provides scaffolding and starting points; humans provide judgment, refinement, additional specifications and observations about inputs and results that the AI might not be able to infer. The AI responds with what the code actually implies.
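To make that concrete, here’s a hypothetical example (the class name, method and regex are invented for illustration, not taken from any real project): a tersely named utility method with the kind of first-draft Javadoc an assistant might produce from reading the implementation.

```java
public class SlugUtil {
    /**
     * Converts a title into a URL-friendly slug: lowercases the input,
     * collapses runs of non-alphanumeric characters into single hyphens,
     * and trims leading and trailing hyphens.
     *
     * @param title the human-readable title; must not be null
     * @return the slugified form, possibly empty if no alphanumerics remain
     */
    public static String slugify(String title) {
        return title.toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")
                    .replaceAll("^-+|-+$", "");
    }
}
```

The generated summary documents what the code actually does, including the possibility of an empty result, which the author may or may not have intended; noticing that gap is exactly the kind of refinement the collaboration invites.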
Bad vibe coding: The lazy handoff
Bad usage of AI in programming is obvious, but unfortunately common. Someone takes a poorly worded or incomplete specification and says: “Go implement this in my favorite language, even though I don’t know it well.”
The AI is left to its own devices. It guesses at functionality without human oversight, creating new endpoints and services to fulfill its own vision rather than that of the programmer. The AI builds something that looks like the solutions its models represent, rather than working with an invested developer. The result is often outdated technologies, potential CVEs and architectural disasters — if it even works at all.
I’ve thankfully avoided this trap myself (I think), but I’ve seen it ensnare programmers far too often for comfort. Since the coder doesn’t actively monitor or engage with the process, the output is frequently worse than what an inexperienced human would produce from the same poor specification.
The management velocity trap
There’s an air of desperation in how developers see AI — a tool that they themselves created and defined — and it’s because of the way AI is being leveraged.
Managers see AI output and think: If Jimmy had coded this manually, he’d still be working on it three days later, but AI generated it in an hour! And it’s so complete! Amazing! But it’s not complete, and even if it looks amazing on the surface it is not. Neither developers nor managers are really paying full attention.
The AI writes what it thinks is correct, and might even verify that it is “correct” according to its own definitions, but it lacks context for what “correctness” means in your specific domain. The result: managers treat speed as the only metric that matters, not correctness. The code has tests, and the AI reports that it is feature complete, so nobody goes back to make sure that it’s actually correct, that it’s truly valid in the problem domain.
But it was generated so quickly! And with so little effort!
Except nobody judges a house renovation by speed alone. The plumber was done in 15 minutes, but then you turn on the shower and discover the plumbing was never connected.
This lazy reliance creates hidden time bombs: fast delivery today but weeks of debugging later, if the generated code is rescuable at all. Managers get immediate gratification while developers inherit the mess, worse than if the AI hadn’t been involved at all, because now they have to rip out all the plumbing the managers declared was “complete” since the AI did it for them, and start over.
Ugly vibe coding: Political lock-in
Here’s where vibe coding gets really ugly. That manager who just saved the company “thousands” with AI productivity won’t, or even can’t, admit the code is garbage.
That manager stood in front of the board and promised 10x or more productivity gains for the cost of a monthly subscription. He or she sent company-wide emails about “embracing the future.” Maybe even got featured in a trade publication for such an innovative approach to development.
Lo and behold, that AI-generated codebase is held together with digital duct tape and wishful thinking. Does the cornered manager admit it was all wrong? Tell the CEO that the company must hire back those expensive developers it just laid off? Most managers will see acknowledging a failed AI transformation as career suicide.
Instead, they’ll double down:
- Hire consultants to “optimize” the AI workflow, so instead of 10x developers they get 2x “prompt engineers.”
- Blame the remaining developers for not “adapting to the new paradigm.”
- Keep pushing metrics that show velocity, and carefully sidestep any that measure actual business value or system reliability.
We’ve seen this movie before. Remember the offshoring rush back in the 2000s? Executives won awards for 80% cost savings only to discover they’d lost 95% of their productivity, and then realize they’d burned the bridges with their local talent. Turned out that “visionary cost transformation” was actually just shifting deck chairs on the Titanic and playing the chamber music just a touch livelier in the shadow of the iceberg.
Such organizations get locked into dysfunction because people can’t admit the problems need fixing, and they’ve gotten rid of the people most able to fix them. At least with offshoring there were still humans (just different ones), but coding with AI creates a dependency on systems that literally cannot understand when they’re wrong.
Managers who decide that AI can replace in-house skilled workers, just like offshore development teams were hoped to replace the developers of yesteryear, are simply repeating history, not making it.
The AI development spectrum: From handoff to partnership
Not all developers use AI the same way, and the difference shows up immediately in the quality of what they produce.
On one end of the spectrum are the “prompt and push” people. They hand a vague requirement to the AI, possibly copied directly and verbatim from a GitHub issue, accept whatever comes back and commit it without a second thought: “The AI said it works, ship it!”
These developers are essentially human deployment pipelines. They add no value — they just move bits from the AI to the repository, often with even more assistance from the AI for basic operation. They are, in effect, flying blind on autopilot.
But watch a skilled developer work with AI, and it’s a completely different dance.
Developers who approach AI-enabled coding as a partnership have actual conversations, and more importantly they push back. “Your suggestion looks good on paper, but Redis write performance tanks with this kind of time-series data at scale. What about using a write-through cache?”
They’re catching the gotchas that the AI learned from a thousand blog posts that never mentioned the edge cases. “That aggregation step isn’t strong enough. In practice, we’re going to get submissions to the aggregate over an hour’s time, not in a simple 15-unit batch like your release strategy assumes.”
They’re using AI like a really smart rubber duck that can write code, but one that needs correcting based on the experience of real-world errors and actual requirements.
The difference isn’t in the tool; it’s in the engagement.
Here’s what good developers are using AI to do:
- Conduct architecture reviews, asking it to play devil’s advocate on their designs.
- Generate test cases they wouldn’t have thought of.
- Document complex logic flows.
- Explore unfamiliar frameworks before committing to them.
- Perform deep research that a single set of eyes simply can’t cover well.
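The test-case bullet deserves a concrete sketch. Here’s a hypothetical trivial utility (the parser and its rules are invented for illustration) alongside the kinds of edge cases an assistant is quick to enumerate and a tired human tends to skip:

```java
public class DurationParser {
    /**
     * Parses a short duration string such as "90s" or "5m" into seconds.
     * Hypothetical utility, kept trivial on purpose.
     */
    static long toSeconds(String s) {
        if (s == null || s.length() < 2) {
            throw new IllegalArgumentException("expected e.g. \"90s\" or \"5m\", got: " + s);
        }
        long n = Long.parseLong(s.substring(0, s.length() - 1));
        switch (s.charAt(s.length() - 1)) {
            case 's': return n;
            case 'm': return n * 60;
            default:  throw new IllegalArgumentException("unknown unit in: " + s);
        }
    }

    public static void main(String[] args) {
        // The happy path the author would have tested anyway:
        System.out.println(toSeconds("90s"));  // 90
        System.out.println(toSeconds("5m"));   // 300
        // Edge cases an assistant is good at enumerating:
        System.out.println(toSeconds("0m"));   // 0: is a zero duration legal here?
        System.out.println(toSeconds("-5s"));  // -5: negatives parse; is that intended?
        // toSeconds("m") and toSeconds("5h") both throw IllegalArgumentException.
    }
}
```

None of those edge cases require deep insight; they require patience, which is exactly what the machine has in unlimited supply.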
These developers get more valuable with AI, not less. They cover more ground, consider more options and produce more robust solutions. They still do the hard work of thinking, just with a tireless assistant who’s read every programming blog post ever written.
The tragic irony? The “prompt and push” developers think they’re being efficient since they’re “shipping fast and first.” But what they’re really doing is creating technical debt that engaged developers will have to clean up later. Guess which group is more productive in the eyes of management, which thinks in terms of commit counts and shipping speed?
Before and after AI-enabled coding
We’ve all seen projects fail because no one properly defined requirements. With AI such failures are amplified, and accountability vanishes.
Let’s look at the coding cycle before and after the introduction of AI coding help:
Before AI: A manager gives vague requirements. The developer codes to their assumptions. The manager spits back that the developer got it wrong despite not being told what “it” really was. Eventually, following that cycle, something resembling the real product emerges.
With AI: The manager gives vague requirements. The developer echoes those requirements to an AI, which then confidently spits out a complete solution, filling in gaps for the architecture and requirements based on common solutions to similar problems. Everyone celebrates.
…until they realize the AI left out hooks for the rules engine that the manager knew would be mandatory but didn’t mention in the project request. And the AI slapped in a database with insecure storage defaults, just like all the tutorials did. And the AI-built system uses cloud resources nobody intended to allocate. And nobody can explain why the system works the way it does.
When a human developer builds the wrong thing, at least they can try to explain why they built it that way. They made assumptions, but they were their assumptions.
With AI-generated code, you get a system built on the assumptions of whomever wrote the Stack Overflow posts in 2019, plus that one crank on Quora who was convinced and convincing about mandating multiple indirect references.
Here’s what we’re seeing happen. Companies aren’t just “saving developer salaries.” They’re also skipping questions, pushback and clarification — you know, those annoying parts where developers ask stakeholders to actually think about what they want. The AI never asks, “…but what happens when two users try to do this simultaneously?” It just generates what it thinks will work and moves on, because it tends to emphasize “successful delivery” over everything else.
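That simultaneity question is worth illustrating. A minimal sketch (the names are hypothetical) of the guard a generated solution often omits: a plain `count++` is a read-modify-write, so two “users” running it at the same moment can both read the same old value and silently lose an update, whereas an atomic increment cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SimultaneousUsers {
    // A plain int with `count++` would be a read-modify-write race;
    // AtomicInteger turns the increment into a single atomic operation.
    static int runUsers(int users, int incrementsPerUser) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        Thread[] threads = new Thread[users];
        for (int u = 0; u < users; u++) {
            threads[u] = new Thread(() -> {
                for (int i = 0; i < incrementsPerUser; i++) {
                    count.incrementAndGet();
                }
            });
            threads[u].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With the atomic increment, no updates are lost:
        System.out.println(runUsers(2, 100_000)); // prints 200000
    }
}
```

The AI will happily generate either version; only a developer who asked the “two users at once” question knows to check which one it produced.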
A traditional approach would take three months to understand the problem and two months to build the solution and still be cheaper and faster. But that requires admitting that understanding the problem is actual work, not just overhead that an AI can eliminate.
Defenders of AI-assisted coding will say that requirements change anyway, and that’s true. It’s been true in pretty much every project I’ve seen in the real world. Requirements evolve as we learn more about the problem domain. But there’s a difference between evolving from a solid foundation and constantly patching a house of cards.
When you understand the problem space, changes are just iterations and revisions. When you don’t know what’s going on, when you simply echo whatever the AI says without consideration, every change is a crisis and a rewrite. Circumstances change; requirements evolve.
The challenge is to balance upfront clarity with the inevitable iteration. AI amplifies the speed of bad decisions, and can easily amplify the cost of changing direction later.
The ideal AI-era developer
We’ve talked a good bit about how developers and managers can use AI poorly. If it’s so hard to get right, why do people rush more and more to use it?
Truthfully, AI has the potential to be an incredible force multiplier. It can take developers from 1x to 5x, from 10x to something beyond — if they use the tools properly.
So what does proper usage of AI look like?
Successful developers in the AI-assisted age stay actively engaged with the tools, examining and refining AI output carefully rather than blindly accepting it. These developers maintain stakeholder engagement to ensure correct requirements regardless of implementation methodology.
Their development process is equally rigorous whether they’re coding “by hand” or with AI assistance. The only difference is that they’re no longer necessarily limited by what they themselves can type or remember offhand.
This should have always been the standard, but many managers and stakeholders avoided the work of clearly defining what they actually wanted. They assumed it would be easy to demand refinements as they learned, and developers let their egos determine what they were willing to accept.
AI makes these lazy approaches even more dangerous. Bad requirements are implemented faster. AI assistants often reinforce programmer egos, cheering devs on: whatever they come up with is “such a smart idea!” Good developers force the requirements conversation upfront, whether AI is involved or not, and demand accountability from themselves and from the tools. (I have many co-workers who can attest to how annoying I can be about this.)
AI and cognitive atrophy: Losing the ability to apply context
Some developers worry that there’s a much bigger risk with AI tools: they’ll make us intellectually dependent, creating a kind of cognitive atrophy. Bob Reselman argues that just as calculators supposedly diminished students’ ability to do mental math, AI might diminish our ability to think through programming problems.
That concern is valid, but misses a crucial distinction.
We’ve traded specific mechanical skills for broader capabilities throughout history. Many developers today couldn’t build their own compiler from scratch, wire their own servers or even optimize assembly code by hand. Most such skills end at creating a simple DSL or fluent API using language features that are already supported. Does that make us less intelligent than developers from the 1970s, when it was common to roll your own systems? Or have we simply reallocated our cognitive resources to higher-level problems, such as distributed systems, user experience and business logic, that earlier generations couldn’t tackle because they were too busy worrying about specific register allocations in their generated code?
The real risk with AI-enabled tasks isn’t that we’ll lose the ability to write for-loops manually. It’s that we’ll lose the ability to recognize when a for-loop is the wrong abstraction entirely. Not that we can’t perform the mechanics of coding, but we stop engaging with the fundamental questions: What problems are we actually solving? What are the edge cases? What happens when this scales?
A student who can perfectly execute long division but can’t spot when a calculator returns “2+2=5” doesn’t lack calculation ability — they lack the concept of verification. They miss the forest because of all those trees. Similarly, a developer who can’t spot architectural disasters in AI-generated code isn’t missing mechanical coding skills; they lack critical thinking and domain expertise.
Foundational knowledge that matters isn’t rote memorization. It’s understanding systems, recognizing patterns and knowing when something doesn’t feel right.
These are exactly the skills that make “vibe coding” effective, rather than dangerous.
The reality of AI ubiquity
In case you haven’t noticed, AI is already everywhere.
Whether you write in Word, Google Docs, or even text editors such as Sublime Text, AI is already assisting through tools including Grammarly and ProWritingAid, or even built-in suggestions or autocomplete.
Lately, IDEs have AI baked in, too. It’s almost more effort to prevent AI engagement with your code on some level than it is to get full engagement, where Claude, Copilot or Junie can see your entire repository in its context.
We often use AI without realizing it, because we’re so used to it and it’s simply so helpful. Coding without any assistance feels like coding with a hammer and chisel. You can code with simple editors like Notepad, but why would you? Even vim has coding assistance now.
The question isn’t, “Should we use the machine to help us code?” Rather, it’s: “How do we use it thoughtfully? How can we get it to help us best?” Good developers learn to work with their ecosystem, including AI, rather than pretend it doesn’t exist.
Where vibe coding actually wins
In my view and experience, vibe coding works best when you treat AI like a skilled junior developer. It can handle routine tasks and offer fresh perspectives, but it requires oversight and direction.
Here’s a list of what AI-enabled coding is especially good at:
- Offering conventional wisdom to serve as a metric against a design.
- Generating tons of repetitive code without pride or resentment, and applying changes on request.
- Responding quickly, even to obscure questions, because it can cite references far beyond most humans’ ability to recall. (Its feedback tends to be a little too positive, as humans respond best to praise over frank honesty.)
- Creating prototypes at an incredible clip.
With precise and firm direction, AI assistants can often write better code than you’d write yourself. They can anticipate conditions more completely than a programmer might — just think of how many times you’ve written code using a reference, assuming there’s no way that reference could be null.
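To stay with the null-reference example, here’s a minimal sketch (the class, the lookup map and the fallback address are all invented for illustration) of the guard an assistant will often insert unprompted, because it has seen the failure a thousand times:

```java
import java.util.Map;
import java.util.Optional;

public class UserLookup {
    // Hypothetical store; Map.get returns null for missing keys, the classic
    // reference a coder assumes "can't" be null until production proves otherwise.
    static final Map<String, String> EMAILS = Map.of("alice", "alice@example.com");

    // The unguarded version, EMAILS.get(user).toLowerCase(), throws a
    // NullPointerException for unknown users. Wrapping the lookup in Optional
    // forces the missing-user case to be handled explicitly.
    static String emailFor(String user) {
        return Optional.ofNullable(EMAILS.get(user))
                       .map(String::toLowerCase)
                       .orElse("unknown@example.com");
    }
}
```

Whether the fallback should be a placeholder address, an exception or a logged warning is a domain decision only the developer can make, which is the point: the AI anticipates the condition, the human decides what it means.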
Vibe coding wins when the engineering disciplines that should have always existed are maintained and enforced: clear requirements, active stakeholder engagement, rigorous testing and thoughtful architecture. Even if you don’t have those things yourself, demanding them from the AI to the best of your ability will yield positive results.
Developers who are thriving with AI aren’t using it to replace thinking. They’re using it as a force multiplier for thinking: covering gaps, handling tedium and expanding capabilities without replacing human judgment.
Vibe coding is about finding the natural rhythm where human insight and artificial intelligence create something better than either could produce alone. But only if developers stay in the driver’s seat.
Joseph B. Ottinger has held senior roles in software engineering and project management. He’s written countless articles and multiple books on various languages, architectures and implementations, including Hibernate and Spring.