
This post is a reflection based on building obsidize; I wrote more about it in:
- Scratching the Vibe – How I Learned to Stop Worrying and Love the Vibe
- Scratching the Vibe – The Itch
- Scratching the Vibe – The Scratch
- Scratching the Vibe – The Vibe
I’ve been fascinated by computers—and what they can be taught to do—since my childhood in the ’80s. It always seemed like with the right secret incantation you could open portals to new worlds, new possibilities; you could gain superpowers if only you knew how to conjure the magic.
I wanted this power, and I admired those who had it. It seemed a skill difficult to master, a skill arriving from the future, a skill the grown-ups around me struggled with—which made it all the more desirable.
My first programming experiences came in the early 1990s on ZX Spectrum (Z80) clones. I used BASIC to draw and animate simple geometric shapes, or later to write (well, copy) games published as text listings in the back of PC Report magazine.
My high school diploma project (Assistant Programming Analyst) was a FoxPro clone of Norton Commander. My engineering diploma project was a C/C++ disk defragmentation application.
FoxPro, disk defragmentation, BASIC (not to mention Pascal, MFC, CORBA, and others): I watched each of these fade into obsolescence during my career. Yet in a way, the essence of programming didn’t change. That first code I wrote 35 years ago—reading input, initializing state, controlling flow, calling library functions—that was programming. That was software. Until 2022.
By then, my understanding of what makes software successful had already shifted. While writing code remained the only way to “teach your computer to do something,” the viability of that “something”—its relevance, cost, and chance of survival—was less about coding and more about capturing real business needs. It was about growing solutions that could evolve with a changing domain, team, or organization (or better yet, growing the right team to build the right solution). I’ve seen many thousands of lines of clean, elegant code wasted by bad requirements, misaligned business goals, dysfunctional organizations, clueless management. In that world, being “code complete” was just one small step.
Software engineering is about solving real problems with software, in the most cost-effective way possible. And in the projects I worked on, the main problems were rarely the code itself. I found myself focusing more on process, specification, and communication.
And then “teaching your computer to do something” changed.
Andrej Karpathy coined the term “vibe coding” to describe letting large language models drive implementation. He called it “fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists.” In this style, the “assistant” is no longer just an assistant: it builds functionality, proposes architectures, suggests next features. The human supplies guidance, feedback, and high-level direction.
On obsidize, I set out to lean on coding assistants as far as they could go. And they can go surprisingly far. By the end, although I spent many hours testing and debugging, I can’t honestly say I’ve read every line of code in the repository.
Of course, I ran black-box end-to-end tests—the “acceptance testing.” I read and reworked the documentation the assistants generated. I zoomed in on specific namespaces when needed, wrote some code, requested features, made architectural decisions. I decided what to implement and when to stop. But I didn’t read every line. I didn’t write (or review) every test.
Does that make me just a responsible babysitter for the code assistants? Was I just vibing, or was I doing proper “AI-assisted engineering”?
In human teams, you’re not expected to read all the code either—not in large projects. You trust your teammates, and that trust is reinforced with conventions, reviews, tests, demos, documentation. Code assistants are new, and the level of implicit trust they deserve is still being defined. But similar checks are emerging in the LLM world: multiple personas to review code from different angles (something Claude Code already implements), security reviews, chained tasks, and structured implementation phases.
Using code assistants means you’re not only writing code—but you’re not a pure product owner either. Oversight is tighter. You manage direction, but you must also stay close to the code, watch the repo carefully (ready to revert an agent’s mess), validate changes. And unlike with humans, you must be willing to discard huge amounts of work—hundreds of lines, entire modules. Code is cheap now. Agents don’t take ownership of their mistakes. You can’t let sloppy code slip by hoping someone else will fix it later. It’s better to throw it away, redefine the task, and go again. The agent won’t be offended.
That raises another question: if code is cheap, if agents write most of it, does the choice of programming language matter? I think it does. You still need to read it, debug it, play with it when the AI stalls in its stochastic echo chamber. Languages that are well-designed, consistent, and concise may gain an edge. Clojure, for example, is known for brevity (good for context windows and tokens), functional purity, and explicit state handling. And with the Clojure-MCP server, Claude can interact directly with a live REPL—iterating like an experienced developer.
Are coding assistants silver bullets for software development?

It depends on the monster you’re trying to slay.
If you need to churn out code faster, or tackle technologies outside your core competence, then yes: they help. I had never finished a production-ready personal project before. I always got stuck on deployment and packaging, lost interest when forced to learn some necessary but boring technology. With AI teammates, I got further than ever.
For companies that see software as a manufacturing task—cranking out features and polishing wheels—assistants will boost productivity. More stuff will be built.
For companies in novel domains, where constraints come from the business environment, assistants can accelerate iteration, get you to working prototypes, let you test hypotheses quickly. Not a silver bullet, but a powerful tool.
In the late ’90s, Eric S. Raymond experimented with a then-new paradigm: the Linux model, open source, massively collaborative, free. That movement powered the cloud and SaaS revolutions. At the time, critics worried free software would kill developer jobs, handing value to corporations. And maybe it did: that $8.8 trillion worth of open-source software didn’t land in contributors’ pockets, but it did lower corporate costs. At the same time, free software spawned countless small companies that didn’t need to reinvent the wheel.
Did open source create opportunities for software engineers, or take them away? In the early 2000s, you were still expected to implement your own circular buffers, even your own network stack. Today, most software development is an exercise in integrating standard open-source components. Writing from scratch is a “code smell.” The building blocks changed.
Now the granularity shifts again. Coding assistants—trained on hundreds of millions of open-source lines—are the new layer of abstraction. They integrate the standard industry tools for you, ready to use.
Fed by free and open-source software, we are approaching an uncanny version of Richard Stallman’s vision: software nearly free as in beer, but not free as in freedom. Everyone can create software. Few get paid for it. The agribusiness of code, controlled by a handful of AI companies.
Some see this as the end of the software engineer. I don’t. Those critics have always underestimated what engineering really is. Software engineering has always been about more than code. And maybe, now that code itself is less central, the other parts of the craft—problem framing, communication, iteration, validation—will finally get their due.
