The Next Level of Software Development: The Expert’s Guide to Vibe Coding

Coding is over... or is it? The rise of "vibe coding" has sparked both evangelism and panic in the developer community. With people on X making games and weather apps with no coding experience - and often no desire to acquire it - it's time to look at the unresolved merge conflict: is traditional software development dying, or simply evolving?


The Reality Behind the Hype

The term vibe coding was coined by the OpenAI co-founder Andrej Karpathy in a tweet in early 2025. The concept of vibe coding is that one can enter a sort of Matrix where code is created by talking to the AI without even trying to understand how the code works - if it doesn't work, just ask the AI to redo the feature with tweaked instructions. This is quite unlike the classic coding experience of typing in the code letter by letter, and what made the method feel even more like science fiction is that Karpathy used voice input to instruct the AI, entirely removing any trace of the traditional way of coding. What we will be focusing on in this article is the possibility of the AI creating code without the developer needing to write it or even fully understand it.

To explore what this new AI capability means in a broader context, let's start with history: software developers have always built on the shoulders of giants. In the late 1990s, I earned pocket money coding websites for local businesses using Notepad and plain HTML/CSS - tools considered cutting-edge at the time. Today, those same projects I coded with my own hands would likely be handled by platforms like Squarespace or Webflow, with zero lines of markup language typed on a clicky keyboard (trendy as they may be). Do I feel sad about this? A bit, to be honest, but equally the kinds of exciting problems I get to work on now with the modern tooling make me quickly forget about the nostalgia of the hot summers and my trusty Pentium 586.

The increase in the number of websites has been exponential (source: Statista, https://www.statista.com/chart/19058/number-of-websites-online/)

It's the Innovator's Dilemma at play. WordPress did not eliminate web developers; it exploded the market by making websites accessible to more businesses while simultaneously raising expectations for what professional sites should offer. Websites became more affordable but the demand went up in a hockey stick curve. Win-win in the end, at least if you adopted the new tools.

Similar to what happened to websites between 2000 and 2020, AI code generation won't eliminate all software tasks but will shift focus to higher-value problems while expanding the overall market for more sophisticated digital solutions. Look at the 240M+ apps built on Replit, just one of the self-serve AI platforms, and you can hardly claim the market is not expanding. (That's 240+ MILLION - it took websites over 10 years to reach that number.)


The Rise of AI-Powered Development

While the number of apps launched on Replit is truly staggering, the Replit platform is low-hanging fruit in terms of statistics, as we don't know the average quality or value of the apps created on it. After all, anyone can press a few buttons and build an app - that doesn't mean the app creates any value, monetizable or not.

What about serious companies that are in the business of making money? Recent numbers from Y Combinator indicate that approximately a quarter of startups in their latest batch have codebases where 95% of the code is AI-generated. It may sound high, but at Ada Create we've lately seen similar figures in our own development processes, so I personally find the 95% figure likely to be accurate. However, for us this doesn't mean AI is replacing our developers - far from it. Instead, AI is becoming a powerful amplifier of our existing human expertise. Yes, there is change, and yes, it still pays to know how to work with code.


Commoditization vs. Genuine Complexity

Software development is increasingly divided between what's "off the shelf" and what's genuinely new and hard. In the early years of my professional career I worked at the Finnish software agency Futurice, where we prided ourselves on constantly pushing the boundaries of what we could build. Our clients always wanted the cutting edge - that is, what state-of-the-art tools can be made to do when pushed to the extreme - and we delivered. Over the years the tools evolved, and so did we. The first mobile streaming platform we created in 2011, with a top-of-the-industry five-person team and a big budget, would have been a walk in the park just five years later. Today, with AI, I could create that platform by myself in a weekend. A streaming platform has become off the shelf.

What we're seeing is that standard UI components, API endpoints, and basic database designs are becoming easily generatable by AI, similar to how few people would start their website anymore by typing in the <html> tag. Straightforward bugs can be analysed and resolved with the AI, as long as there are no hidden rocks. Maintenance tasks and large batches of changes across a code base can be done in a fraction of the time, making code bases more robust and better maintained. Take Airbnb's recent, widely publicized test framework migration as an example: they moved thousands of test files from Enzyme to React Testing Library across a code base of millions of lines - a tedious task originally estimated at 18 months that took only 6 weeks with LLM assistance. You can take the original estimates with a pinch of salt, but the essence remains: AI does speed things up.

What's the next frontier for developers, then, after this wave of tooling commoditization? It's understanding aspects of software like architecture planning, security practices and testing plans, working with specialized domain knowledge, mapping real-world needs onto coding solutions, or setting up highly optimized production systems that don't cost an arm and a leg. All of these remain challenging even with today's best AI assistance, and the list goes on. Just because you can set up a streaming platform doesn't mean it has a unique selling point or that it can handle 100 million users (as a disclaimer, our platform from 2011 definitely couldn't handle 100 million users... one built today just might).

The key skill of the modern developer becomes identifying which parts of development can be delegated to the AI and which require your specialized expertise and oversight. If you’re a developer, get to know your AI tools to find out what they can do for you - you're still early. If you’re less experienced in software, there is actually no better time to start than now. What is possible to build even with limited learning has never been more exciting and more empowering.



The New Development Workflow

Having found a good balance between vibe coding and traditional software development, I'll go into more detail about what AI can do for today's experienced coders. I expect a lot of these ways of working with the AI are not unique to coding, though of course some specifics will be.

To me, by far the biggest shift has been a) the Claude 3.7 large language model (LLM), which is just precise enough not to create even more work cleaning up after it, and b) the integration of AI chat into code editors such as Cursor (Windsurf is another one, but I haven't tried it yet). The tl;dr is that these editors give the AI direct access to your local files and the command line, allowing for frictionless prompting while keeping the agent running until it thinks it has solved the problem at hand (or until it hits the 25-request limit and you need to click continue). In Cursor the hands-off agent mode is called YOLO Mode, and despite the name I highly recommend it. The agent usually behaves, but please sandbox the user account you're letting it use and make sure you comply with any applicable laws or policies.

Like the vanilla ChatGPT many of us have used, Cursor and Windsurf operate with a "conversation" paradigm where the AI maintains a context window of your interactions and the files you're working with. This context is limited, typically 32K-1M tokens (a token is very roughly a syllable) depending on the model, which means long sessions may require refreshing the AI's understanding of your project. Due to this limitation it's important to start and finish tasks in suitably sized slices when letting the AI code - much like it is a good idea to start and finish tasks in suitably sized slices when coding without the AI.

Currently at Ada Create we primarily use Claude 3.7 Thinking with these tools, though the recently released Gemini 2.5 shows significant promise in this space as well (albeit it seems a bit on the "creative" side), and GPT-4o is at least hot on its heels, if not already ahead. All the models have their own strengths and weaknesses, and it often takes a while to "get to know" an LLM.



The State of the Art AI Advantage: What Works and What Doesn't

Some coding insights we’ve gathered from building production features with AI using NextJS, React and TypeScript:

1. AI excels at:

  • Speeding up straightforward, routine development tasks such as creating basic React frontend components or pages (usually needs manual style tweaks - see the sketch after these lists)

  • Writing contained algorithms and filling in implementation gaps (it can follow suit if you give it an example, such as an integration test)

  • Reading and maintaining documentation (this is by far the biggest hidden gem for AI coding use)

2. AI struggles with:

  • Genuinely unclear or non-deterministic problems (e.g., there's a fundamental flaw in the setup that makes it impossible to put the square peg through the round hole... but the AI will try relentlessly)

  • Visual design and CSS implementation (it can't "see" the results very well for the time being)

  • Setting up complex projects without supervision (AI struggles to resolve library compatibility, though I can't blame it in NPM land, as it can be genuinely hard)

  • Maintaining consistent context across long development sessions and doing many things in one go (e.g., planning, analysis and code writing in one chat)
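To make the first point on the "excels at" list concrete, here is a minimal sketch of the kind of component the AI tends to produce reliably in a React/TypeScript stack like ours. The component name and props are hypothetical, purely for illustration; typically the structure comes out right and only the styling needs manual tweaks.

```tsx
// Hypothetical example: a small presentational component with typed props.
// The AI usually nails this kind of structure; className/styling tweaks are manual.
import React from "react";

type UserCardProps = {
  name: string;
  role: string;
  avatarUrl?: string;
};

export function UserCard({ name, role, avatarUrl }: UserCardProps) {
  return (
    <div className="user-card">
      {avatarUrl && <img src={avatarUrl} alt={`${name} avatar`} />}
      <h3>{name}</h3>
      <p>{role}</p>
    </div>
  );
}
```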


Differentiating Task Types: Straightforward vs. Exploratory

Not all tasks are created equal, and your approach should reflect this:

  1. Straightforward Tasks: For tasks with clear requirements and known patterns, direct AI to implement specific solutions with minimal oversight.

  2. Exploration Tasks: For uncertain problems where the path isn't clear, use AI for brainstorming and generating multiple approaches.

  3. Multi-step Complex Tasks: As ironic as it is, despite running on computers, current LLMs are not amazing at precisely following instructions. It's better to split the task into smaller chunks and execute incrementally.

  • Even for straightforward tasks, don't expect perfect "one-shot" solutions - iteration is normal and expected

  • Start exploration by asking "What are some ways we could approach this problem?" Use different agent modes or instruct the AI to specifically not write any code if it's too tempted (it likes to get into the weeds a little too early).

  • When approaching any non-trivial task, having the AI present a few options lets you harness its knowledge while maintaining your architectural control. You can often get better results by asking for options upfront, even if you just end up letting the AI choose for you.


Documentation as Context Memory

As hinted, one of the most powerful yet underutilized strategies is leveraging documentation throughout your AI development workflow. This one has been a strange eye-opener for me personally, as I've always felt the code should be structured in a way that "documents itself" to avoid the overhead of writing and maintaining documents - nothing is quite as frustrating as outdated documentation! The reality is that the current AI takes a while to analyse what the code does, and high-level concepts (domain knowledge) are often not apparent from the code itself, making documentation vital for keeping the AI well informed at all times. The massive flipside is that, with AI, the cost of creating and maintaining extensive documentation becomes negligible, making it an obvious choice. What we're seeing at Ada Create is that comprehensive documentation dramatically improves the productivity of the AI and serves as the "memory" for your AI assistant between sessions.

- Have the AI create lots of intermediate reports and summary files, which don't need to be fully organised or coherent documentation (you can literally just tell it “Please update or create documentation for what we have done so far”)

- AI can read project documentation very quickly, and generally you won't hit the limits of how much it can read within one chat, though you may at times need to point it at the right documentation when starting a chat

- The documents you have the AI create on the way will allow the AI to continue where it left off when starting a new chat session or even a new feature

- The AI can handle creation, consumption, and maintenance of the documents, creating a virtuous cycle

The more documentation you have, the more effective your AI assistant becomes, as each new session can quickly load relevant context from these files.

 
 

Incremental Development with Comprehensive Planning

In more practical terms, when building features with AI assistance, we have two main considerations: the technical build order and the coding cycle itself.

Let's look at an example of a build order for a full stack application when building with AI:

1. Outline the architecture of the solution with the AI before starting implementation

2. Implement foundational elements like data structures and models first

3. Add business logic in the middle layer

4. Build the UI layer last

AI often struggles when asked to generate too many interconnected pieces at the same time, though it can make small, obvious adjustments on the fly if needed. This layered approach improves success rates and code quality while making sure everything aligns properly with the overall architecture.
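As a rough illustration of this build order in a TypeScript stack like the one described above, the sketch below covers the first two implementation layers; the invoice domain and every name in it are made up for the example rather than taken from a real project.

```typescript
// Step 2: foundational data structures, agreed with the AI before any UI exists.
export type Invoice = {
  id: string;
  customerId: string;
  lines: { description: string; amountCents: number }[];
  status: "draft" | "sent" | "paid";
};

// Step 3: business logic in the middle layer, built and tested against the model.
export function invoiceTotalCents(invoice: Invoice): number {
  return invoice.lines.reduce((sum, line) => sum + line.amountCents, 0);
}

export function canSend(invoice: Invoice): boolean {
  return invoice.status === "draft" && invoice.lines.length > 0;
}

// Step 4: the UI layer (e.g. a React page listing invoices) comes last,
// once the layers below are stable and covered by tests.
```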


The AI-enhanced Coding Cycle

When working on any individual step of the larger plan, I can't emphasize enough how useful Git is. Cursor provides its own change management, but having proper version control in place makes a huge difference and gives you the confidence to make incremental improvements you won't lose should anything go wrong (and wrong it goes more often than not with the current AI tools).

A rough outline of how we can approach development tasks:

1. Draft the task and identify required knowledge

2. Ensure documentation is comprehensive (if not, add to documentation before starting)

3. Have the AI agent have a go by giving it the task overview in a prompt along with all the relevant docs ("Please add X feature for me, see the relevant docs and remember to run the tests after each addition")

4. Observe whether the agent starts doing useful-looking work and handle failures through either:

  • Complete reset to last commit with improved documentation to address major issues for next run

  • Stop and quickly correct in chat for minor detours ("Remember to check the database schema")

5. Once finished, or at least partially finished, review and clean up in the same chat:

  • Removing unused / commented code

  • Updating documentation

  • Adding tests

  • Creating clean commit checkpoints

It's a good idea to have all kinds of code checks and tests run periodically, either by manually interrupting the agent or by giving them special emphasis in prompting. In general you can nudge the agent onto the right track by reminding it of the aspects you think are important during the execution of the task.


The Reality Check: Common Pitfalls

While the AI can help a lot, our experience has taught us several important cautions. The AI can feel like a big unknown, and part of it is, but keep in mind that you are in control and you need to be aware of what the AI is doing - give it boundaries and help it when it gets stuck.

The developer giving the AI a hand

1. Reversion Issues

The AI is terrible at reverting changes - always revert manually when things go wrong rather than asking the AI to fix its mistakes. It's tempting to ask the AI to clean up its own mess, but in reality Git is your better friend!

Effective Checkpointing Strategies

To avoid wasted work, establish strong checkpointing discipline:

  • Use git commits frequently, especially before and after significant changes

  • Create commits at logical breakpoints even if the feature isn't complete

  • Run linters, type checking and unit tests before each commit to catch issues early (see the sketch after this list)

  • Review the AI's work periodically, especially for solutions with uncertain approaches (what looks like it works doesn't always work, or might have been built in questionable ways)
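One way to make that pre-commit routine effortless is a small check-runner script the agent can also be told to run. Below is a minimal sketch; it assumes a Node/TypeScript project using tsc, ESLint and Vitest, which may differ from your setup.

```typescript
// check.ts - run the same quality gate before every commit (and have the AI run it too).
// The specific tools (tsc, eslint, vitest) are assumptions about the project setup.
import { execSync } from "node:child_process";

const checks = [
  "npx tsc --noEmit", // type checking
  "npx eslint .",     // linting
  "npx vitest run",   // unit tests
];

for (const cmd of checks) {
  console.log(`Running: ${cmd}`);
  try {
    execSync(cmd, { stdio: "inherit" });
  } catch {
    console.error(`Check failed: ${cmd} - fix the issue before committing.`);
    process.exit(1);
  }
}
console.log("All checks passed - safe to commit.");
```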

2. Speed vs. Quality

While coding speed increases dramatically, quality often decreases without proper oversight, and you end up with monstrous pull requests. This trade-off needs careful management, and you should have the AI help check its own homework by writing tests and setting up static code checks. If you don't quite know what the AI did or what its impact on your application is, I would recommend taking a step back and implementing it in smaller, manageable pieces.

Verification Strategies for AI-Generated Code

  • Higher Abstraction Review: Review the overall architecture and approach rather than every line

  • Bounded Reviews: Create systems and boundaries that isolate changes and make reviewing impacts easier

  • Quick Sanity Checks: Develop patterns like "Only frontend files changed, TSC/lint/tests pass, no security considerations here" to quickly validate changes

  • Automated Validation: Have the AI explain its changes and run automated checks after each significant modification

3. Test Cheating

We've also seen that the AI will sometimes try to "cheat" its way out of failed tests by modifying the tests rather than fixing the underlying code. Controls for detecting test changes are critical, and you need to be in the driving seat for what is being tested.

Testing approaches and AI

  • Clear test outcomes: Follow good testing principles and make the test outcomes crystal clear and easy to understand for a human (see the sketch after this list).

  • Isolated test files: Ensure the AI can't change the test files while developing, or at least add strong messaging to not do that. It means well but is sometimes overconfident in its own solution being the correct one.

  • Check test changes in PRs: Always inspect Pull Requests for test changes and be particularly careful reviewing them.
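To illustrate what "crystal clear outcomes" can look like in practice, here is a minimal Vitest-style sketch. The calculateOrderTotal function and its rules are hypothetical, but the point stands: each test names one behaviour and asserts one exact result, so any AI edit to the expectations stands out immediately in a diff.

```typescript
import { describe, it, expect } from "vitest";
import { calculateOrderTotal } from "../src/orders"; // hypothetical module under test

describe("calculateOrderTotal", () => {
  it("adds 24% VAT to the net price", () => {
    expect(calculateOrderTotal({ netCents: 10000, vatRate: 0.24 })).toBe(12400);
  });

  it("rejects negative net prices", () => {
    expect(() => calculateOrderTotal({ netCents: -1, vatRate: 0.24 })).toThrow();
  });
});
```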


The Future of Developers

I don't have a towel to give you, but hopefully these tips and principles will help you on your journey into coding with vibes should you choose to embark on it - or give you new insights if you're already experienced. As always, use your own judgement and don't be afraid to ask questions along the way; no one has quite figured it out yet!

AI is not the first tool we have gotten to help us program, and that's what it is - a tool. While the AI may on occasion seem like a magic wand, it most definitely has limits, and they are eerily similar to what you would see in larger software projects and companies. The key is to view AI both as a multiplier of expert human capabilities and as a helper to do a lot of the straightforward shovelling.

I won't sugarcoat it - the craft of programming is dramatically changing. The barrier to entry will be lowered to the point where anyone can write a simple app. However, many genuinely hard problems will remain hard, and the new world will be dipping into a lot of the old expertise. Perhaps one day the AI can "do it all and coding is over", but the day is definitely not today.


This article was written by Timo Tuominen, CTO at Ada Create, with inspiration from the innovative team at Ada Create who are pushing the boundaries of AI-assisted development daily. Special thanks to Claude 3.7 for assistance in editing the text and proofreading. Illustrations inspired by the 1995 film "Ghost in the Shell", created with the assistance of GPT-4o. All characters are fictional but the experiences that led to writing this article are real.

