All the opinions expressed in this article and on this website are entirely my own and do not represent my employer in any way.
Ever heard of the “Bus Factor”? It is a concept that measures the risk of losing all knowledge about a particular thing – a software development project, for example – by estimating how many team members could get crushed by a bus before nobody knows how to work on the project anymore. For example, if 3 people on your team know how to restore a backup of your database, the Bus Factor for that particular function is 3.
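To make the definition concrete, here is a minimal sketch of the idea in Python (the team roster and task names below are invented for illustration):

```python
# Hypothetical knowledge map: which team members know how to perform
# each critical task. Names and tasks are made up for this example.
knowledge = {
    "restore database backup": {"alice", "bob", "carol"},
    "deploy to production": {"alice"},
    "rotate TLS certificates": set(),  # nobody knows -> Bus Factor of zero
}

def bus_factor(task: str) -> int:
    """The Bus Factor of a task is simply how many people know it."""
    return len(knowledge[task])

for task in knowledge:
    print(f"{task}: Bus Factor {bus_factor(task)}")
```

The lower the number, the more fragile that piece of knowledge is; anything at 1 (or 0) should worry you.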
From one to zero
Since the dawn of humanity, long before buses even existed, the Bus Factor has always had a “worst case” value of 1. If the sole keeper of a piece of knowledge passed away, the knowledge was lost, unless it had been transferred beforehand.
And humanity has worked hard to keep itself far from this Bus Factor of 1. Brown-bag sessions, documentation, video tutorials, knowledge handovers, demos and showcases, not to mention schools themselves, and many other mechanisms into which countless person-hours have been sunk.

But on the 30th of November 2022, all of this changed, and suddenly a large part of humanity became perfectly fine with a Bus Factor not just of 1, but of zero.
AI first, humans nowhere
That date corresponds to the release of ChatGPT to the public, and the start of the mass-market adoption of GenAI. It is also the birth of what would, three years later, become the concept of “AI first”.
One might think that “AI first” would put humans second, but, unsurprisingly, delegating the creation process to machines has instead left us nowhere to be found when it comes to knowledge keeping.
Focusing on programming, a growing part of the industry now seems happy to let LLMs generate functions, entire features, or even complete projects (security holes included). They have moved on from understanding their code-base and preserving that knowledge to actively avoiding any knowledge of their project in the first place, preferring instead to “vibe”.
Where the bus hits a wall
We can leave the flaws of vibe coding, and the issues with LLM-generated code in general, for another article. Indeed, the quality of the generated code does not really matter here. It is obviously easier to understand code you have never seen before when it is good code, but ultimately, reading code remains much harder than writing it, no matter what.
Before LLMs, provided your team had done some of its due diligence, you could always expect some help when tackling a new code-base: either a mentor, or at least some (perhaps partially outdated) documentation. With LLMs, this is gone. The only thing you can rely on is your ability to decipher what a highly imperfect system generated, and maybe ask that same imperfect system to explain its code to you (oh, and it has forgotten everything about the initial writing process by then).
Imagine having to solve bugs, add new features, patch security holes and upgrade dependencies in a piece of software that nobody on Earth has even the faintest idea about how it was built and why it was built that way.

Now, imagine being the user uploading personal documents, credit card information, private photos or thoughts to a piece of software that nobody on Earth has even the faintest idea about how it was built and why it was built that way.
Conclusion
Because it creates a situation with a Bus Factor of zero, vibe coding is fundamentally flawed. That will remain true until there is an AI that generates 100% accurate code 100% of the time, and it is fed 100% accurate prompts.
If vibe coding isn’t for you and you want to read more articles, check out my category dedicated to advice about learning programming.