Scratching the Vibe – The Vibe

Where the code meets the real world

Building, packaging, getting a real application running on different platforms, on different machines: that’s where your code meets the real world. As a programmer, back in the day, that was where most of my debugging hours went.

At first, using code assistants to build obsidize made it feel like this time would be different.

I discovered Clojure in 2017. The company I worked for was going through an “agile transformation” and, to motivate everyone, handed out copies of The Phoenix Project by Gene Kim. Clojure only gets a few lines in the book—but it’s the language that saves the company, and I took notice. I was already transitioning into coordination/leadership roles, and “getting Clojure” was my way to stay connected to the implementation side. I read the books, built toy apps, but I never shipped a complete, production-ready Clojure application.

In my software developer days, I deployed Java applications to production—almost exclusively back end, single platform, with an Ops team close by.

This time, the Ops team would be Claude, ChatGPT, and Gemini.

The Team

I’ve used Claude-Code since the beta phase. I “got in” at the end of January 2025 and was immediately blown away.

Claude—and most AI tools in VS Code—got a flurry of updates in the first half of August. Every two or three days the Claude-Code extension added something new: agentic mode with custom personas, security-review for uncommitted changes or CI/CD integration, etc. Cool, powerful features that often made me think whatever I did two days earlier could have been done more easily if I’d just waited.

Occasionally, changes introduced regressions or at least odd behavior. After one update, Claude started committing and pushing changes without my approval. The security-review feature, out of the box, expects the default branch to be set (git remote set-head) and otherwise fails with a cryptic message.
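The fix for that cryptic failure is, for the record, a one-liner; this sketch assumes the remote is named `origin`, as in a typical clone:

```shell
# Record the remote's default branch locally, so tools that diff
# against it (like the security review) have a base to compare to.
git remote set-head origin --auto

# Or pin it explicitly if you already know the branch name:
git remote set-head origin main
```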

There’s a lot to like about Claude-Code, but for me the killer feature is Custom MCPs—especially how clojure-mcp lets it navigate, modify, and test Clojure applications. So for most of the code development I used Claude-Code—in VS Code or the CLI—jumping out when usage limits kicked in or I wanted a second opinion, which I usually got from a specialized ChatGPT, Clojure Mentor.

For build, packaging, and deployment, I leaned on Gemini and another specialized model, DevOps ChatGPT—also because, at the beginning of August, Claude felt less capable in those areas.

Aiming high

My target was what I ask of any tool: fast, secure, reliable, easy to install, update, and use.

For a Clojure application, fast meant GraalVM—melting away the Java fat and startup overhead into a native image with startup times on par with C/C++. Secure meant up-to-date dependencies and vulnerability scanning. Reliable meant better logging. And easy to install and update—on macOS—meant Homebrew.

My AI team supported those choices (they always do!) and gave me starter code and guidance for building the native image and a Homebrew distribution. For Windows they suggested Chocolatey—new to me, but it sounded right.

Getting the CI/deployment pipeline right for macOS (ARM) was relatively easy. Developing on my M2 Air, I could test and fix as needed. I didn’t focus on other platforms at first; I wanted a quick “walking skeleton” with the main gates and stages, then I’d backfill platform support.

The GraalVM build was trickier. I fumbled with build and initialization flags but eventually got it to compile.
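For reference, a minimal `native-image` invocation for an AOT-compiled Clojure uberjar looks roughly like this. The jar path and binary name are placeholders, and the exact flag set depends on your dependencies, which is exactly where the fumbling happens:

```shell
# Sketch of a GraalVM native-image build for a Clojure uberjar.
# --no-fallback: fail the build instead of silently producing an
#   image that still needs a JVM at runtime.
# --initialize-at-build-time: Clojure's compiled classes generally
#   need to be initialized during the image build, not at startup.
native-image -jar target/obsidize-standalone.jar -o obsidize \
  --no-fallback \
  --initialize-at-build-time
```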

In a few steps I had a Homebrew tap ready to install. Hurray!
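On the consumer side, a personal tap boils down to two commands; the account name here is a placeholder for whichever GitHub account hosts the `homebrew-obsidize` repository:

```shell
# Register the personal tap, then install the formula from it.
brew tap youruser/obsidize
brew install obsidize
```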

Reality check

And then reality kicked in. The native image wouldn’t run. The flags I used made the build pass but execution fail. Cue a round of whack-a-mole, primarily with Gemini.

One interesting Gemini-in-VS-Code detail: at the beginning of August, the extension’s interaction model was chat; after an update, it introduced agentic mode. Proof of how fast assistants evolve—and a chance to feel both the pros and cons of each mode.

Before the mid-August update, chats with Gemini 2.5 Pro were fruitful and generally more accurate than other models—just a lot slower. I didn’t measure it, but it felt really slow. Still, solid answers were worth the wait.

After the August 15 release, chat began to… silently fail: after a long reflection, no answer. So I switched to the agent.

Agentic Gemini looked promising: deep VS Code integration (you see diffs as they happen, which Claude-Code later matched), MCP servers, and a similar configuration to Claude—so wiring up clojure-mcp was easy. However, it just didn’t do Clojure or Babashka well. It got stuck in parentheses hell at almost every step. Sessions ended with tragic messages like, “I’m mortified by my repeated failures…” Eventually I felt bad for it. In their (troubling?) drive to make LLMs sound human, Google had captured the awkwardness of incompetence a little too well.

I started to panic. Everything had moved so fast, and I had so many things started—so many “walking skeletons”—with only a few vacation days left. I realized “my AI team” wasn’t going to get me over the finish line alone.

Vacations are too short

The leisurely vacation development phase had to end. I began waking up earlier for uninterrupted work before 8 a.m., skipping countryside trips and extended family visits to squeeze in a few more hours each day.

At that point, shipping was about testing, running, iterating—not generating more lines of code.

The failing native image led me to add a post-packaging end-to-end validation stage, to catch these issues earlier.
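The stage itself can be a plain smoke test that runs the freshly packaged artifact; a sketch, with the binary path passed in as an argument:

```shell
# Post-packaging smoke test: verify the packaged artifact actually starts.
# Usage: smoke_test path/to/binary
smoke_test() {
  bin="$1"
  # The artifact must exist and be executable.
  [ -x "$bin" ] || { echo "FAIL: $bin is not executable" >&2; return 1; }
  # It must launch and exit cleanly on a trivial invocation; a native
  # image that builds fine but fails at runtime dies right here.
  "$bin" --help >/dev/null 2>&1 || { echo "FAIL: $bin did not run" >&2; return 1; }
  echo "OK: $bin runs"
}
```

Wiring this in right after the packaging step means a broken image fails the pipeline instead of reaching an installer.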

After a few tries, the culprit seemed to be the Cheshire JSON library. With no better theory, I switched to the standard clojure.data.json. Unit, integration, and E2E tests made the change a matter of minutes—but it didn’t fully resolve the native-image issues.

All the while I looped through ChatGPT, Claude, Gemini, and old-school Googling: find plausible theories, try fixes, check whether anything improved. This isn’t unique to LLMs—developers have done this for decades to ship in ecosystems they don’t fully control. If it works, it works.

Finally, I got my first successful Homebrew install and full import on macOS ARM. It seemed feasible again to finish before vacation ended.

Then I added Windows and Linux packaging—and everything failed again. I told myself it was a platform access issue, bought a Parallels Desktop license… and then remembered I’m on ARM trying to target x86. Not going to work.

LLMs gave me speed, but the bottleneck was the real world: deployment time, runtime, access to hardware. Without x86 macOS, Linux, or Windows boxes, I cut scope: native image Homebrew package for macOS ARM (my main platform), and an executable image via jlink for other platforms. Even that was painful on Windows, where builds kept failing and GPTs kept getting confused by basic shell automation. Final plan: native image for macOS ARM (done), jlink package (JRE + JAR) for macOS x64 and Linux arm64, and just the JAR for Windows. Time to stop and get back to reality.
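For reference, the jlink route looks roughly like this; the module list and paths are illustrative, and `jdeps` computes the real list from the jar:

```shell
# 1. Ask jdeps which JDK modules the uberjar actually uses.
jdeps --print-module-deps --ignore-missing-deps target/obsidize.jar

# 2. Build a trimmed runtime image containing only those modules
#    (paste in the jdeps output instead of this illustrative list).
jlink --add-modules java.base,java.sql \
      --strip-debug --no-header-files --no-man-pages \
      --output build/runtime

# 3. Ship build/runtime next to the jar and launch with its bundled java:
#    build/runtime/bin/java -jar obsidize.jar
```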

Release It!

Stopping is harder than it sounds in the world of code assistants. When tools get more powerful every day—and solve problems that look as hard as the ones left—it always feels like the happy ending is one prompt away. But our complexity isn’t LLM complexity, and vacations are short.

So I stopped. I polished the code, ran multiple reviews, improved the docs. The functionality was good enough for me. I’d done what I set out to do at the beginning of those two weeks. And somewhere along the way, I learned to stop worrying and love to vibe.
