Scratching the Vibe – How I Learned to Stop Worrying and Love the Vibe

Thirty years ago, Eric S. Raymond was experimenting with a new way of building software.

The Linux way disrupted the software industry: massively collaborative, powered by the (mostly) unpaid contributions of coders all over the world. It multiplied possibilities and shocked everyone when it proved viable for one of the most complex software projects imaginable at the time: building an enterprise-grade operating system—free, both as in beer and as in freedom.

Raymond’s essay, The Cathedral and the Bazaar, distilled this experience into “the Linux method,” catalysing the open-source movement and a software development approach that is estimated to have created more than $8.8 trillion in value. That code powers much of today’s digital economy and paved the road for the spectacular rise of the software companies now dominating Wall Street.

Once again, we’re standing at the edge of a paradigm shift—this time powered by the transformer architecture and massive investment in large language models. And, as Raymond did three decades ago—though in my own smaller, humbler way—when I felt an unexpected technical itch earlier this year, I decided to experiment with these new tools and this new way of building software.

This is the story of obsidize and what I learned working on it. And it all started with a simple but annoying itch.

Scratching the Vibe – The Itch

The summer of 2025 was hot—your local weather stats will probably confirm it (thanks, global warming!)—but for me it was especially intense at work, with delivery after delivery, milestone after milestone.

So when I finally started my vacation, I felt the need to re-map where I was. I had to tidy up the things that had accumulated around me while I ran back and forth between work, kids, and more work.


It’s probably no surprise that today, when we “check our environment,” one of the first places we look is our digital space: notes, bookmarks, appointments, personal projects. For me, Obsidian is the system that lets me navigate all this in a way that gives me the illusion—sometimes even the reality—of being in control.

I’ve used Obsidian as my main knowledge management system for two years. Before that, I always used “something” (usually several things at once) to make sense of the endless stream of data from cyberspace: Pocket notes, Reader saves, Kindle highlights. Obsidian, with its vast plugin ecosystem, finally promised to consolidate all that into one usable, productive (?) system.

Around the same time, I started playing with large language models. At first, it was just testing: could ChatGPT give me a quick summary? But soon it became reflex: start with a dialog, get the lay of the land, then dig deeper into what looked interesting.

Over the last year, I’ve leaned more on Claude from Anthropic—mainly for Claude-Code, but also as a counterpoint to ChatGPT. I like having multiple sources before zooming in on a strategy. The result was an ever-growing pile of notes, conversations, and projects dangling outside my “second brain.” Time to clean that up.

ChatGPT was easy. Community plugins already handled syncing conversations with Obsidian. But I was surprised to find Claude unsupported. (To be clear: from the start I excluded browser extensions and other intrusive options that want wide access to my system.)

Digging further, I discovered that Claude had only recently introduced an export feature, driven by GDPR compliance. Anthropic’s help documentation suggests it was in development as of mid-2024. One small win for EU regulations.

At first, using ChatGPT, Claude, Grok—whatever—felt gimmicky. Like nothing of importance could really accumulate there. But reality disagreed. These chats became central to how I consume and create information. My decisions, my interests, my thinking all sedimented in the chat history of LLMs.

So while I began looking for a quick copy-paste solution to consolidate my notes, the deeper I dug, the more important it became to make this data fully mine, integrated into the system I already use and control.

The good news: the data are available. The bad news: they come as JSON—hardly usable as-is, especially in Obsidian. At first, it looked like I’d have to wait for someone to write a plugin to tidy this corner of my digital self.

But then I thought: wouldn’t it be fair to ask Claude to help me take full ownership of my discussions with Claude?

That itch wouldn’t let me go. And so my vacation turned into a trip into the world of vibe coding: The Scratch.

Scratching the Vibe – The Scratch

To build obsidize I started with a simple—or at least straightforward—problem: convert data stored in JSON files to Markdown.

On paper, conversion between standard formats should be one of the strong points of LLMs. In practice, a direct request to Claude or ChatGPT exposed a familiar disadvantage: their wordy answers and tight context limits. The exported files from Claude were simply too big for the standard web interfaces.

And it wasn’t going to be a one-off operation. I’d keep using Claude, creating new conversations, extending old ones. I didn’t want to manually manage exports each time.

So my scope grew. What I actually needed was a tool that could:

  1. Convert JSON data to Markdown files.
  2. Update existing conversations/projects in Markdown with new content, without overwriting edits made directly in Obsidian.
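The first requirement can be sketched in a few lines of shell. This is a minimal illustration, not obsidize's actual logic, and the field names (`name`, `chat_messages`, `sender`, `text`) are my assumptions about the shape of Claude's `conversations.json` export:

```shell
# Minimal sketch: turn one exported conversations.json into Markdown.
# Field names are assumptions about the export schema -- adjust to taste.
convo_to_md() {
  jq -r '.[]
    | "# \(.name)\n\n"
      + ([.chat_messages[] | "**\(.sender)**:\n\(.text)\n"] | join("\n"))' "$1"
}
```

Run it as `convo_to_md conversations.json > vault/claude.md`. The second requirement is exactly what this naive approach cannot handle: re-running it overwrites any edits made in Obsidian.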

Parsers are supposed to be another strength of LLMs—but the context window ruled out Claude and ChatGPT for this first stage.

That’s when I turned to Google’s Gemini 2.5 Pro. With its huge context window, Gemini had no problem accepting an entire conversation file and proposing a parser in Clojure/Babashka that ran successfully on the first go. Same for the projects file. This is Gemini’s real advantage: not something impossible for other GPTs, especially in agent mode with MCP, but far easier when you just need a quick, working answer.

Now I had two tools/scripts doing essentially the same thing—converting JSON to Markdown—but no update logic.

So I decided to simplify first: solve the conversion problem cleanly before tackling updates.

I spun up a standard Clojure project structure (with deps-new), ported Gemini’s Babashka scripts into namespaces, and asked Claude-Code to turn it into a CLI application.

We discussed the tech stack. For JSON parsing, Claude suggested (and I accepted) Cheshire, a well-established, high-performance library built on Jackson.

Then came the real challenge: implementation choices.


Copilots tend to overdo it. They generate too many options, and every option looks equally viable. The risk is saying “yes” too often and ending up in a swamp of half-working approaches. If you already know the right solution, you can use AI just to speed up coding—but that underuses the tools. If, like me, you often don’t know the right approach, you need to push back and test constantly. Otherwise you get caught in endless loops of “ah, I see the error / the issue is still there.”

With Gemini’s proof-of-concept already working, I just needed to refactor and add a few key features:

  1. Detect input as either an archive (zip/dms) or folder.
  2. Tag and link imported notes for Obsidian.
  3. Most importantly, detect already-imported notes and update them without overwriting edits in Obsidian.
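Feature 3 is the interesting one. Here is one way to sketch the idea in shell, assuming (my convention for illustration, not obsidize's actual format) that each note keeps user-owned text above a marker line and regenerated content below it:

```shell
# Sketch: update an imported note without clobbering manual edits.
# Convention (an assumption): everything above the marker is user-owned;
# everything below it is regenerated on each import.
MARKER='<!-- obsidize:auto -->'

update_note() {
  note="$1"; new_body="$2"
  if [ -f "$note" ] && grep -qF "$MARKER" "$note"; then
    # Keep the user-owned section up to and including the marker.
    sed "/^$MARKER$/q" "$note" > "$note.tmp"
  else
    printf '%s\n' "$MARKER" > "$note.tmp"
  fi
  printf '%s\n' "$new_body" >> "$note.tmp"
  mv "$note.tmp" "$note"
}
```

The real tool has to be smarter (matching notes to conversation IDs, merging per message), but the contract is the same: re-imports must be idempotent with respect to the user's own edits.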

At this point, what started as “tidying up my notes” turned into a small personal quest. I wanted to scratch my own itch, but also to try this new way of building software with new tools. Luckily, my vacation plans were simple: kids, grandparents, clean air, fresh food, plenty of sun.

After about two hours of effective work—spread over three mornings—I had a working Clojure project. Not packaged, but functional. It had documentation, unit tests, and just enough features to be useful.


One incidental advantage of working in VS Code with Gemini and Claude-Code: context switching was painless. Every morning, I could skim the conversation history, remember where we left off, and pick up seamlessly. When the kids woke up, I’d close the lid and shift gears. The next morning, my virtual coworkers were still waiting, ready to continue as if nothing had happened.

With the basics so easy to conjure with digital assistants—and mornings still left in my vacation—I felt confident enough to push toward a real app. The linter and automated tests were already there, but I wanted more:

  • Add antq to ensure up-to-date libraries.
  • Add Trivy for vulnerability and license scanning.
  • Build native images for macOS and Linux using GraalVM.
  • Make it installable via Homebrew.
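In command form, the first two gates look roughly like this, as a pipeline fragment rather than something runnable outside the project (the `:antq` alias name is my assumption about the deps.edn setup):

```shell
# Quality gates, roughly as wired into CI (alias name is an assumption).
clojure -M:antq                       # report outdated dependencies
trivy fs --scanners vuln,license .    # CVE + license scan of the repo
```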

That last step—going from project to product—turned out to be the hardest.

To be continued: The Vibe

Scratching the Vibe – The Vibe

Where the code meets the real world

Building, packaging, getting a real application running on different platforms, on different machines: that’s where your code meets the real world. As a programmer, back in the day, this is where most of my debugging hours went.

At first, using code assistants to build obsidize made it feel like this time would be different.

I discovered Clojure in 2017. The company I worked for was going through an “agile transformation” and, to motivate everyone, handed out copies of The Phoenix Project by Gene Kim. Clojure only gets a few lines in the book—but it’s the language that saves the company, and I took notice. I was already transitioning into coordination/leadership roles, and “getting Clojure” was my way to stay connected to the implementation side. I read the books, built toy apps, but I never shipped a complete, production-ready Clojure application.

In my software developer days, I deployed Java applications to production—almost exclusively back end, single platform, with an Ops team close by.

This time, the Ops team would be Claude, ChatGPT, and Gemini.

The Team

I’ve used Claude-Code since the beta phase. I “got in” at the end of January 2025 and was immediately blown away.

Claude—and most AI tools in VS Code—got a flurry of updates in the first half of August. Every two or three days the Claude-Code extension added something new: agentic mode with custom personas, security-review for uncommitted changes or CI/CD integration, etc. Cool, powerful features that often made me think whatever I did two days earlier could’ve been done easier if I’d just waited.

Occasionally, changes introduced regressions or at least odd behavior. After one update, Claude started committing and pushing changes without my approval. The security-review feature, out of the box, expects the default branch to be set (git remote set-head) and otherwise fails with a cryptic message.
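For the record, the fix that worked for me, wrapped as a function so it can point at any clone (the cryptic failure just means the clone has no `origin/HEAD` ref):

```shell
# Set the remote default-branch ref that security-review expects.
# --auto asks the remote which branch its HEAD points to.
fix_default_branch() {
  git -C "${1:-.}" remote set-head origin --auto
}
```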

There’s a lot to like about Claude-Code, but for me the killer feature is Custom MCPs—especially how clojure-mcp lets it navigate, modify, and test Clojure applications. So for most of the code development I used Claude-Code—in VS Code or the CLI—jumping out when usage limits kicked in or I wanted a second opinion, which I usually got from a specialized ChatGPT, Clojure Mentor.

For build, packaging, and deployment, I leaned on Gemini and another specialized model, DevOps ChatGPT—also because, at the beginning of August, Claude felt less capable in those areas.

Aiming high

My target was what I ask of any tool: fast, secure, reliable, easy to install, update, and use.

For a Clojure application, fast meant GraalVM—melting the Java fat and startup overhead into a native image on par with C/C++. Secure meant up-to-date dependencies and vulnerability scanning. Reliable meant better logging. And easy to install and update—on macOS—meant Homebrew.

My AI team supported those choices (they always do!) and gave me starter code and guidance for building the native image and a Homebrew distribution. For Windows they suggested Chocolatey—new to me, but it sounded right.

Getting the CI/deployment pipeline right for macOS (ARM) was relatively easy. Developing on my M2 Air, I could test and fix as needed. I didn’t focus on other platforms at first; I wanted a quick “walking skeleton” with the main gates and stages, then I’d backfill platform support.

The GraalVM build was trickier. I fumbled with build and initialization flags but eventually got it to compile.
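For what it's worth, this is the shape of the invocation I converged on, as a build-pipeline sketch rather than a drop-in recipe (the jar path and image name are my assumptions; the initialization flag is the part I kept fumbling, since Clojure's compiled classes generally want build-time initialization):

```shell
# Build sketch (assumes a GraalVM JDK on PATH and an uberjar in target/).
native-image \
  --no-fallback \
  --initialize-at-build-time \
  -H:+ReportExceptionStackTraces \
  -jar target/obsidize-standalone.jar \
  obsidize
```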

In a few steps I had a Homebrew tap ready to install. Hurray!

Reality check

And then reality kicked in. The native image wouldn’t run. The flags I used made the build pass but execution fail. Cue a round of whack-a-mole, primarily with Gemini.

One interesting Gemini-in-VS-Code detail: at the beginning of August, the extension’s interaction model was chat; after an update, it introduced agentic mode. Proof of how fast assistants evolve—and a chance to feel both the pros and cons of each mode.

Before the mid-August update, chats with Gemini 2.5 Pro were fruitful and generally more accurate than other models—just a lot slower. I didn’t measure it, but it felt really slow. Still, solid answers were worth the wait.

After the August 15 release, chat began to… silently fail: after a long reflection, no answer. So I switched to the agent.

Agentic Gemini looked promising: deep VS Code integration (you see diffs as they happen, which Claude-Code later matched), MCP servers, and a similar configuration to Claude—so wiring up clojure-mcp was easy. However, it just didn’t do Clojure or Babashka well. It got stuck in parentheses hell at almost every step. Sessions ended with tragic messages like, “I’m mortified by my repeated failures…” Eventually I felt bad for it. In their (troubling?) drive to make LLMs sound human, Google had captured the awkwardness of incompetence a little too well.

I started to panic. Everything had moved so fast, and I had so many things started—so many “walking skeletons”—with only a few vacation days left. I realized “my AI team” wasn’t going to get me over the finish line alone.

Vacations are too short

The leisurely vacation development phase had to end. I began waking up earlier for uninterrupted work before 8 a.m., skipping countryside trips and extended family visits to squeeze in a few more hours each day.

At that point, shipping was about testing, running, iterating—not generating more lines of code.

The failing native image led me to add a post-packaging end-to-end validation stage, to catch these issues earlier.
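The validation stage itself can be tiny. A sketch of the kind of smoke test I mean, parameterized on the packaged binary (the `--input`/`--output` flags are stand-ins, not obsidize's real CLI):

```shell
# Post-packaging smoke test: exercise the actual artifact, not the source tree.
smoke_test() {
  bin="$1"
  "$bin" --version >/dev/null 2>&1 || { echo "FAIL: binary does not start"; return 1; }
  work=$(mktemp -d)
  printf '[]' > "$work/conversations.json"   # minimal valid export
  "$bin" --input "$work" --output "$work/vault" \
    || { echo "FAIL: import run"; return 1; }
  echo "smoke test passed"
}
```

The point is to run the thing users will actually install; my unit and integration tests all passed against a JVM that the native image never saw.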

After a few tries, the culprit seemed to be the Cheshire JSON library. With no better theory, I switched to the standard clojure.data.json. Unit, integration, and E2E tests made the change a matter of minutes—but it didn’t fully resolve the native-image issues.

All the while I looped through ChatGPT, Claude, Gemini, and old-school Googling: find plausible theories, try fixes, check whether anything improved. This isn’t unique to LLMs—developers have done this for decades to ship in ecosystems they don’t fully control. If it works, it works.

Finally, I got my first successful Homebrew install and full import on macOS ARM. It seemed feasible again to finish before vacation ended.

Then I added Windows and Linux packaging—and everything failed again. I told myself it was a platform access issue, bought a Parallels Desktop license… and then remembered I’m on ARM trying to target x86. Not going to work.

LLMs gave me speed, but the bottleneck was the real world: deployment time, runtime, access to hardware. Without x86 macOS, Linux, or Windows boxes, I cut scope: a native-image Homebrew package for macOS ARM (my main platform), and an executable image via jlink for the other platforms. Even that was painful on Windows, where builds kept failing and GPTs kept getting confused by basic shell automation.

Final plan: native image for macOS ARM (done), jlink package (JRE + JAR) for macOS x64 and Linux arm64, and just the JAR for Windows. Time to stop and get back to reality.
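The jlink fallback, as a packaging-pipeline sketch (the jar path is my assumption; deriving the module list from the uberjar with `jdeps` avoids hand-maintaining it):

```shell
# Assemble a trimmed JRE to ship next to the JAR on non-ARM platforms.
jlink \
  --add-modules "$(jdeps --print-module-deps --ignore-missing-deps \
                    target/obsidize-standalone.jar)" \
  --strip-debug --no-header-files --no-man-pages \
  --output obsidize-runtime
```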

Release It!

Stopping is harder than it sounds in the world of code assistants. When tools get more powerful every day—and solve problems that look as hard as the ones left—it always feels like the happy ending is one prompt away. But our complexity isn’t LLM complexity, and vacations are short.

So I stopped. I polished the code, ran multiple reviews, improved the docs. The functionality was good enough for me. I’d done what I set out to do at the beginning of those two weeks. And somewhere along the way, I learned to stop worrying and love the vibe.

Learning to Stop Worrying and Love the Vibe


This post is a reflection on building obsidize; I wrote more about it in:

  1. Scratching the Vibe – How I Learned to Stop Worrying and Love the Vibe
  2. Scratching the Vibe – The Itch
  3. Scratching the Vibe – The Scratch
  4. Scratching the Vibe – The Vibe

I’ve been fascinated by computers—and what they can be taught to do—since my childhood in the ’80s. It always seemed like with the right secret incantation you could open portals to new worlds, new possibilities; you could gain superpowers if only you knew how to conjure the magic.

I wanted this power, and I admired those who had it. It seemed a skill difficult to master, a skill arriving from the future, a skill the grown-ups around me struggled with—which made it all the more desirable.


My first programming experiences came in the early 1990s on clones of the Z80/ZX Spectrum. I used BASIC to draw and animate simple geometric shapes, or later to write (well, copy) games published as text listings in the back of PC Report magazine.

My high school diploma project (Assistant Programming Analyst) was a FoxPro clone of Norton Commander. My engineering diploma project was a C/C++ disk defragmentation application.

FoxPro, disk defragmentation, BASIC (not to mention Pascal, MFC, CORBA, and others): I watched each of these fade into obsolescence during my career. Yet in a way, the essence of programming didn’t change. That first code I wrote 35 years ago—reading input, initializing state, controlling flow, calling library functions—that was programming. That was software. Until 2022.


By then, my understanding of what makes software successful had already shifted. While writing code remained the only way to “teach your computer to do something,” the viability of that “something”—its relevance, cost, and chance of survival—was less about coding and more about capturing real business needs. It was about growing solutions that could evolve with a changing domain, team, or organization (or better yet, growing the right team to build the right solution). I’ve seen too many thousands of lines of clean, elegant code wasted by bad requirements, misaligned business goals, dysfunctional organizations, clueless management. In that world, being “code complete” was just one small step.

Software engineering is about solving real problems with software, in the most cost-effective way possible. And in the projects I worked on, the main problems were rarely the code itself. I found myself focusing more on process, specification, and communication.

And then “teaching your computer to do something” changed.


Andrej Karpathy coined the term “vibe coding” to describe letting large language models drive implementation. He called it “fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists.” In this style, the “assistant” is no longer just an assistant: it builds functionality, proposes architectures, suggests next features. The human supplies guidance, feedback, and high-level direction.

On obsidize, I set out to lean on coding assistants as far as they could go. And they can go surprisingly far. By the end, although I spent many hours testing and debugging, I can’t honestly say I’ve read every line of code in the repository.

Of course, I ran black-box end-to-end tests—the “acceptance testing.” I read and reworked the documentation the assistants generated. I zoomed in on specific namespaces when needed, wrote some code, requested features, made architectural decisions. I decided what to implement and when to stop. But I didn’t read every line. I didn’t write (or review) every test.

Does that make me just a responsible babysitter for the code assistants? Was I just vibing, or was I doing proper “AI-assisted engineering”?

In human teams, you’re not expected to read all the code either—not in large projects. You trust your teammates, and that trust is reinforced with conventions, reviews, tests, demos, documentation. Code assistants are new, and the level of implicit trust they deserve is still being defined. But similar checks are emerging in the LLM world: multiple personas to review code from different angles (something Claude-Code already implements), security reviews, chained tasks, and structured implementation phases.

Using code assistants means you’re not only writing code—but you’re not a pure product owner either. Oversight is tighter. You manage direction, but you must also stay close to the code, watch the repo carefully (ready to revert an agent’s mess), validate changes. And unlike with humans, you must be willing to discard huge amounts of work—hundreds of lines, entire modules. Code is cheap now. Agents don’t take ownership of their mistakes. You can’t let sloppy code slip by hoping someone else will fix it later. It’s better to throw it away, redefine the task, and go again. The agent won’t be offended.

That raises another question: if code is cheap, if agents write most of it, does the choice of programming language matter? I think it does. You still need to read it, debug it, play with it when the AI stalls in its stochastic echo chamber. Languages that are well-designed, consistent, and concise may gain an edge. Clojure, for example, is known for brevity (good for context windows and tokens), functional purity, and explicit state handling. And with the Clojure-MCP server, Claude can interact directly with a live REPL—iterating like an experienced developer.


Are coding assistants silver bullets for software development?

It depends on the monster you’re trying to slay.

If you need to churn code faster, or tackle technologies outside your core competence, then yes: they help. I had never finished a production-ready personal project before. I always got stuck on deployment and packaging, lost interest when forced to learn some necessary but boring technology. With AI teammates, I got further than ever.

For companies that see software as a manufacturing task—cranking out features and polishing wheels—assistants will boost productivity. More stuff will be built.

For companies in novel domains, where constraints come from the business environment, assistants can accelerate iteration, get you to working prototypes, let you test hypotheses quickly. Not a silver bullet, but a powerful tool.


In the late ’90s, Eric S. Raymond experimented with a then-new paradigm: the Linux model, open source, massively collaborative, free. That movement powered the cloud and SaaS revolutions. At the time, critics worried free software would kill developer jobs, handing value to corporations. And maybe it did: those $8.8 trillion in open-source software didn’t land in contributors’ pockets, but they did lower corporate costs. At the same time, free software spawned countless small companies that didn’t need to reinvent the wheel.

Did open source create opportunities for software engineers, or take them away? In the early 2000s, you were still expected to implement your own circular buffers, even your own network stack. Today, most software development is an integration exercise of open-source standards. Writing from scratch is a “code smell.” The building blocks changed.

Now the granularity shifts again. Coding assistants—trained on hundreds of millions of open-source lines—are the new layer of abstraction. They integrate the standard industry tools for you, ready to use.

Fed by free and open-source software, we are approaching an uncanny version of Richard Stallman’s vision: software nearly free as in beer, but not free as in freedom. Everyone can create software. Few get paid for it. The agribusiness of code, controlled by a handful of AI companies.

Some see this as the end of the software engineer. I don’t. Those critics have always underestimated what engineering really is. Software engineering has always been about more than code. And maybe, now that code itself is less central, the other parts of the craft—problem framing, communication, iteration, validation—will finally get their due.