Scratching the Vibe – The Vibe

Where the code meets the real world

Building, packaging, getting a real application running on different platforms, on different machines: that’s where your code meets the real world. As a programmer, back in the day, this was where most of my debugging hours went.

At first, using code assistants to build obsidize made it feel like this time would be different.

I discovered Clojure in 2017. The company I worked for was going through an “agile transformation” and, to motivate everyone, handed out copies of The Phoenix Project by Gene Kim. Clojure only gets a few lines in the book—but it’s the language that saves the company, and I took notice. I was already transitioning into coordination/leadership roles, and “getting Clojure” was my way to stay connected to the implementation side. I read the books, built toy apps, but I never shipped a complete, production-ready Clojure application.

In my software developer days, I deployed Java applications to production—almost exclusively back end, single platform, with an Ops team close by.

This time, the Ops team would be Claude, ChatGPT, and Gemini.

The Team

I’ve used Claude-Code since the beta phase. I “got in” at the end of January 2025 and was immediately blown away.

Claude—and most AI tools in VS Code—got a flurry of updates in the first half of August. Every two or three days the Claude-Code extension added something new: agentic mode with custom personas, security-review for uncommitted changes or CI/CD integration, etc. Cool, powerful features that often made me think whatever I did two days earlier could’ve been done easier if I’d just waited.

Occasionally, changes introduced regressions or at least odd behavior. After one update, Claude started committing and pushing changes without my approval. The security-review feature, out of the box, expects the default branch to be set (git remote set-head) and otherwise fails with a cryptic message.

There’s a lot to like about Claude-Code, but for me the killer feature is Custom MCPs—especially how clojure-mcp lets it navigate, modify, and test Clojure applications. So for most of the code development I used Claude-Code—in VS Code or the CLI—jumping out when usage limits kicked in or I wanted a second opinion, which I usually got from a specialized ChatGPT, Clojure Mentor.

For build, packaging, and deployment, I leaned on Gemini and another specialized model, DevOps ChatGPT—also because, at the beginning of August, Claude felt less capable in those areas.

Aiming high

My target was what I ask of any tool: fast, secure, reliable, easy to install, update, and use.

For a Clojure application, fast means GraalVM—melting the Java fat and startup overhead into a native image on par with C/C++. Secure means up-to-date dependencies and vulnerability scanning. Reliable means better logging. And easy to install/update—on macOS—means Homebrew.

My AI team supported those choices (they always do!) and gave me starter code and guidance for building the native image and a Homebrew distribution. For Windows it suggested Chocolatey—new to me, but it sounded right.

Getting the CI/deployment pipeline right for macOS (ARM) was relatively easy. Developing on my M2 Air, I could test and fix as needed. I didn’t focus on other platforms at first; I wanted a quick “walking skeleton” with the main gates and stages, then I’d backfill platform support.

The GraalVM build was trickier. I fumbled with build and initialization flags but eventually got it to compile.
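For context, the shape of such a build – sketched here as a Babashka script, not obsidize’s actual setup (the jar path and binary name are illustrative) – is roughly:

```clojure
;; Sketch of a GraalVM native-image build, wrapped in Babashka.
;; The uberjar path and output name are illustrative; the flags are the
;; ones commonly needed for AOT-compiled Clojure apps.
(require '[babashka.process :refer [shell]])

(shell "native-image"
       "-jar" "target/obsidize-standalone.jar" ; illustrative jar name
       "--no-fallback"                  ; fail the build instead of emitting a JVM-backed fallback image
       "--initialize-at-build-time"     ; initialize (Clojure) classes at build time
       "-H:+ReportExceptionStackTraces" ; better diagnostics when the build blows up
       "-o" "obsidize")                 ; output binary name
```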

In a few steps I had a Homebrew tap ready to install. Hurray!

Reality check

And then reality kicked in. The native image wouldn’t run. The flags I used made the build pass but execution fail. Cue a round of whack-a-mole, primarily with Gemini.

One interesting Gemini-in-VS-Code detail: at the beginning of August, the extension’s interaction model was chat; after an update, it introduced agentic mode. Proof of how fast assistants evolve—and a chance to feel both the pros and cons of each mode.

Before the mid-August update, chats with Gemini 2.5 Pro were fruitful and generally more accurate than other models—just a lot slower. I didn’t measure it, but it felt really slow. Still, solid answers were worth the wait.

After the August 15 release, chat began to… silently fail: after a long reflection, no answer. So I switched to the agent.

Agentic Gemini looked promising: deep VS Code integration (you see diffs as they happen, which Claude-Code later matched), MCP servers, and a similar configuration to Claude—so wiring up clojure-mcp was easy. However, it just didn’t do Clojure or Babashka well. It got stuck in parentheses hell at almost every step. Sessions ended with tragic messages like, “I’m mortified by my repeated failures…” Eventually I felt bad for it. In their (troubling?) drive to make LLMs sound human, Google had captured the awkwardness of incompetence a little too well.

I started to panic. Everything had moved so fast, and I had so many things started—so many “walking skeletons”—with only a few vacation days left. I realized “my AI team” wasn’t going to get me over the finish line alone.

Vacations are too short

The leisurely vacation development phase had to end. I began waking up earlier for uninterrupted work before 8 a.m., skipping countryside trips and extended family visits to squeeze in a few more hours each day.

At that point, shipping was about testing, running, iterating—not generating more lines of code.

The failing native image led me to add a post-packaging end-to-end validation stage, to catch these issues earlier.
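The stage itself can be minimal – something along these lines (a Babashka sketch; the binary path and the --version flag are illustrative) would have caught the broken image right after packaging:

```clojure
;; Post-packaging smoke test: actually run the freshly built binary and
;; fail the pipeline if it can't even start. Path and flag are illustrative.
(require '[babashka.process :refer [shell]])

(let [{:keys [exit out err]} (shell {:out :string :err :string :continue true}
                                    "./target/obsidize" "--version")]
  (if (zero? exit)
    (println "native image OK:" out)
    (do (println "native image failed to start:" err)
        (System/exit 1))))
```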

After a few tries, the culprit seemed to be the Cheshire JSON library. With no better theory, I switched to the standard clojure.data.json. Unit, integration, and E2E tests made the change a matter of minutes—but it didn’t fully resolve the native-image issues.
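The theory was at least plausible: Cheshire wraps Jackson, a reflection-heavy Java library and a classic native-image pain point, while clojure.data.json is pure Clojure. The swap itself was nearly mechanical, since the two libraries’ entry points map almost one-to-one (the call sites below are illustrative, not obsidize’s actual code):

```clojure
(require '[cheshire.core :as cheshire]  ; before
         '[clojure.data.json :as json]) ; after

;; parsing (the second arg / :key-fn keywordize the keys)
(cheshire/parse-string "{\"a\":1}" true)    ; => {:a 1}
(json/read-str "{\"a\":1}" :key-fn keyword) ; => {:a 1}

;; generating
(cheshire/generate-string {:a 1}) ; => "{\"a\":1}"
(json/write-str {:a 1})           ; => "{\"a\":1}"
```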

All the while I looped through ChatGPT, Claude, Gemini, and old-school Googling: find plausible theories, try fixes, check whether anything improved. This isn’t unique to LLMs—developers have done this for decades to ship in ecosystems they don’t fully control. If it works, it works.

Finally, I got my first successful Homebrew install and full import on macOS ARM. It seemed feasible again to finish before vacation ended.

Then I added Windows and Linux packaging—and everything failed again. I told myself it was a platform access issue, bought a Parallels Desktop license… and then remembered I’m on ARM trying to target x86. Not going to work.

LLMs gave me speed, but the bottleneck was the real world: deployment time, runtime, access to hardware. Without x86 macOS, Linux, or Windows boxes, I cut scope: native image Homebrew package for macOS ARM (my main platform), and an executable image via jlink for other platforms. Even that was painful on Windows, where builds kept failing and GPTs kept getting confused by basic shell automation. Final plan: native image for macOS ARM (done), jlink package (JRE + JAR) for macOS x64 and Linux arm64, and just the JAR for Windows. Time to stop and get back to reality.
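For the jlink packages, the recipe is roughly the following (again a Babashka sketch; the module list is a placeholder – running jdeps --print-module-deps on the uberjar gives the real one):

```clojure
;; Sketch of the "JRE + JAR" packaging: build a trimmed runtime with jlink
;; and ship it next to the uberjar. Module list and names are placeholders.
(require '[babashka.process :refer [shell]])

(shell "jlink"
       "--add-modules" "java.base,java.sql,java.naming" ; placeholder; derive with `jdeps --print-module-deps`
       "--strip-debug" "--no-header-files" "--no-man-pages"
       "--output" "obsidize-runtime")

;; the package then runs on its own bundled runtime:
;;   obsidize-runtime/bin/java -jar obsidize.jar
```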

Release It!

Stopping is harder than it sounds in the world of code assistants. When tools get more powerful every day—and solve problems that look as hard as the ones left—it always feels like the happy ending is one prompt away. But our complexity isn’t LLM complexity, and vacations are short.

So I stopped. I polished the code, ran multiple reviews, improved the docs. The functionality was good enough for me. I’d done what I set out to do at the beginning of those two weeks. And somewhere along the way, I learned to stop worrying and love to vibe.

Obsidize your Claude conversations

This summer I built obsidize: a command-line tool that imports Claude conversations and projects as notes into Obsidian.

Obsidize is a small summer vacation project that allowed me to experiment with new ways of building software and has a few nice features:

  • 🔄 Incremental Updates: Detects new and updated content – only processes what’s changed
  • 🗂️ Structured Output: Creates an organized, “Obsidian-friendly” folder structure for your conversations and projects
  • 🏷️ Custom Tagging: Lets you add custom Obsidian tags and links to imported content via the command line
  • 🔄 Sync-Safe: Doesn’t use any local/external state files, so incremental updates can be run from different devices (see the sketch after this list)
  • 🔍 Dry Run Mode: Lets you preview changes before applying them
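The sync-safe behaviour boils down to deriving “what’s already imported” from the vault itself instead of from a state file. A minimal sketch of the idea (my mental model, not obsidize’s actual code – the field names are illustrative):

```clojure
;; Sync-safe incremental updates, sketched: index the notes already in the
;; vault by conversation id, then import only what is new or newer.
;; Field names (:id, :updated-at) are illustrative.
(defn needs-import?
  [vault-index {:keys [id updated-at]}]
  (let [existing (get vault-index id)]
    (or (nil? existing)                                       ; never imported
        (pos? (compare updated-at (:updated-at existing)))))) ; updated since last import

;; usage sketch:
;; (filter #(needs-import? vault-index %) exported-conversations)
```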

If you are looking for a way to back up your conversations from Claude to Obsidian, you should take a look.

It can be installed using Homebrew on macOS and Linux, or using its “universal jar” on any platform with a JRE installed (Java 21+). Check the release page.

I wrote more about how I got to do this project here.

A happy Parisian mime is getting obsidized

A day at Voxxed Days Luxembourg 2024

The big boys have their Devoxxes and KubeCons; Luxembourg has the Voxxed Days Luxembourg conference which, as we like to think about a lot of events here in Luxembourg, is for sure smaller but maybe also a bit more refined… maybe.

Thursday, the 20th of June 2024, was the first day of Voxxed Days Luxembourg 2024 – the main local software conference, organized by the Java User Group of Luxembourg – and what follows are my notes from this day.

1.

Showing the importance of the space industry for the Luxembourg software community (or just confirming that space tech is cool), the conference was kick-started by Pierre Henriquet’s keynote Les robots de l’espace.

… It’s true that – to bring the audience back down to earth – software is mentioned twice as the thing that killed a robot in space: first on 13 November 1982, when the Viking 1 mission was ended by a software update, and then in 1999, when a units mismatch between the implementations provided by different NASA suppliers killed the Mars Climate Orbiter.

Still, the talk ends on a positive note for the viability of software in space: last week the Voyager 1 team managed to update the probe’s software to recover the data from all science instruments: an update deployed 24 billion kilometers away on hardware that has been traveling through space for 47 years.

2.

I stay in the main room, where Henri Gomez presents “SRE, Mythes vs Réalités”, a talk that (naturally) refers back heavily to Google’s “SRE book” – the book that kickstarted the entire label.

For me the talk confirms again that, for a lot of companies, SRE is either the old Ops with some new language, or a label to put on everything they do.

My personal view on SRE is that (similar to the security “shift left” that brought us DevSecOps) it is about mindset, tools, and techniques, not about a job position… unfortunately (as with DevSecOps), organisations find it easier to adopt the label as a role than to actually understand the mindset shift and the new tools, processes, and techniques required.

First mention of the day of the Accelerate book, the DORA Metrics and the 4 Software Delivery Performance metrics.

3.

Next, I went to a smaller conference room to see “How We Gained Observability Into Our CI/CD Pipeline”, a talk given by Dotan Horovits.

This was the second mention of the DORA metrics.

One of the main ideas of the talk was that CI/CD pipelines are part of the production environment and should be monitored, traced, and analysed like production systems for issues and performance. This shouldn’t be a new idea: if what you produce is software, then your production pipeline is crucial. The reality, though, is that many companies still treat the CI/CD pipeline as something that is mainly the Dev team’s responsibility, with the freedoms and risks that this entails.

Dotan goes into some detail on how they instrumented their pipelines:

  1. Collect data on the CI/CD pipeline run into environment variables
    • create a summary step in the pipeline that gathers all the info from the run and stores it in Elasticsearch (see the sketch after this list)
  2. Visualise with Kibana/OpenSearch dashboards:
    • define your measuring needs – what do you want to find/track?
    • where did the failure happen (branch/machine)?
    • what is the duration of each step?
    • are there steps that take more time than the baseline?
  3. Monitor the actual infrastructure (computers, CPUs, etc.) used to run your CI/CD pipelines, using Telegraf and Prometheus. For more info check the guide.
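To make the “summary step” concrete, here is a rough sketch of the idea in Babashka (not the setup from the talk – the env vars are GitHub Actions’ standard ones, and the Elasticsearch endpoint is a placeholder):

```clojure
;; Summary-step sketch: gather pipeline metadata from the CI environment
;; and index it in Elasticsearch. The endpoint and index are placeholders;
;; cheshire and babashka.http-client ship with Babashka.
(require '[babashka.http-client :as http]
         '[cheshire.core :as json])

(let [run {:workflow (System/getenv "GITHUB_WORKFLOW")
           :run-id   (System/getenv "GITHUB_RUN_ID")
           :branch   (System/getenv "GITHUB_REF_NAME")
           :commit   (System/getenv "GITHUB_SHA")
           :at       (str (java.time.Instant/now))}]
  (http/post "http://elasticsearch:9200/ci-runs/_doc" ; placeholder endpoint
             {:headers {"Content-Type" "application/json"}
              :body    (json/generate-string run)}))
```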

The talk even includes a short but effective introduction to Distributed Tracing and OpenTelemetry, which are used to increase visibility at every layer (OpenTelemetry was another frequent theme in the talks at the conference – and Dotan is well positioned to talk about it, as he runs the OpenObservability Talks podcast).

4.

I then checked out the Démystifions le fonctionnement interne de Kubernetes talk. It was actually a demo, done by Denis Germain, of how you can deploy a Kubernetes cluster one control-plane brick at a time.

As somebody who was thrown into using and working with Kubernetes without having time to thoroughly understand its internals, I approached this session as a student and took the opportunity to revisit basic K8s concepts.

A note on the general structure of a K8s resource:

```yaml
apiVersion: <group>/v1
kind: Deployment / Service / Ingress / ...
metadata:
  name: <name>
  labels:
    <key>: <value>
spec:
  replicas: <x>
```

A cool new (for me) tool to manipulate digital certificates: CFSSL. Also, the fact that Traefik was the preferred ingress controller.

The Kubernetes Control Plane components deployed during this short talk:

  • Kube API Server: the entry point into the infrastructure
  • Kube Controller Manager: the binary that runs control loops over all the APIs that exist in K8s, managing the difference between current state and target state (see the sketch after this list)
  • Kube Scheduler: the planner/workflow manager: the one that actually creates the jobs required by the Kube Controller Manager to align the target state with the current state
  • Kubelet: the agent on the node: the one that actually executes the tasks planned by the Kube Scheduler. One of the first things it does is register itself (and the node) with the Kube API. containerd is the container engine that actually starts the containers
  • CNI (Container Network Interface): the element that manages the communication between containers
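The control-loop idea behind the Controller Manager fits in a few lines – a toy reconciliation sketch (purely illustrative Clojure, nothing like the real controllers):

```clojure
;; Toy reconcile step: diff desired state against current state and decide
;; the action that closes the gap; a controller runs this in a loop.
(defn reconcile [desired current]
  (let [delta (- (:replicas desired) (:replicas current))]
    (cond
      (pos? delta) {:action :start-pods, :count delta}
      (neg? delta) {:action :stop-pods,  :count (- delta)}
      :else        {:action :none})))

(reconcile {:replicas 3} {:replicas 1})
;; => {:action :start-pods, :count 2}
```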

5.

Next: Réagir à temps aux menaces dans vos clusters Kubernetes avec Falco et son écosystème with Rachid Zarouali and Thomas Labarussias.


What I found:

  • Falco is made by Sysdig – and Sysdig was founded by Loris Degioanni. My first main professional achievement was porting a network stack (re-implementing a subset of the ARP, IP, and UDP protocols) onto WinPcap: using WinPcap both to “sniff” Ethernet packets and to transmit them with high throughput and precise timing (NDIS drivers… good times!). WinPcap was the base of Ethereal, which then became Wireshark, and which now continues to live in Falco… I never expected to find echoes of my career beginnings here.
  • Falco is (now?) eBPF-based, supporting older kernel versions (even pre-Kubernetes), which makes sense given a lineage that can be traced back to libpcap
  • Falco is a newly (2024) CNCF Graduated Project
  • Falco plugins are signed with cosign
  • falcosidekick – a plugin that adds additional functionality – created by Thomas Labarussias.

Integrating the whole Falco ecosystem you get: detection -> notification -> falcosidekick (actions?) -> reaction (Falco Talon)

  • Falco Talon is a response engine (you could implement an alternative response engine with Argo CD, but Falco Talon has the potential for better integration): github.com/falco-talon/falco-talon

Side Note: there is a cool K8s UI tool that was used by Thomas Labarussias during the demo: https://k9scli.io

6.

Next: Becoming a cloud-native physician: Using metrics and traces to diagnose our cloud-native applications (Grace Jansen – IBM)

The general spirit of the talk was that in order to be able to correctly diagnose the behavior of your distributed application(s) you always need: context/correlation + measurements.

The measurements are provided using:

  • instrumentation – instrument systems and apps to collect relevant data:
    • metrics – the talk emphasised the use of microprofile.io: an open-source community specification for Enterprise Java – “Open cloud-native Java APIs”
    • traces,
    • logs
    • (+ end-user monitoring + profiling)

Storing the measurements is important: send the data to a separate external system that can store and analyse it – Jaeger, Zipkin

Visualize/analyze: provide visualisations and insights into systems as a whole (Grafana?)

Back to Distributed Tracing and OpenTelemetry:

A trace contains multiple spans. A span contains a span_id, a name, time-related data, log messages, and metadata describing what occurs during a transaction, plus the context: an immutable object carried in the span data that identifies the unique request each span is part of (trace_id, parent_id).
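As a data shape, a span looks roughly like this (illustrative values; the field names follow the OpenTelemetry concepts from the talk, not a specific SDK):

```clojure
;; Illustrative span shape; all values are fake.
{:context {:trace-id  "4bf92f3577b34da6"  ; shared by every span in the trace
           :span-id   "00f067aa0ba902b7"
           :parent-id "53995c3f42cd8ad8"} ; links the span into the trace tree
 :name    "GET /orders"
 :start   "2024-06-20T10:15:03Z"          ; time-related data
 :end     "2024-06-20T10:15:04Z"
 :events  [{:at "2024-06-20T10:15:03.2Z" :message "db query started"}] ; log messages
 :attrs   {:http.status 200}}             ; metadata
```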

The talk ended with a demo of how all these elements come together in OpenLiberty (OpenLiberty is the new, improved, stripped-down WebSphere).

Side Note: an interesting online demo/IDE tool – Theia IDE.

7.

Next: Au secours ! Mon manager me demande des KPIs ! by Geoffrey Graveaud

The third talk to mention the DORA metrics, the second to mention the Accelerate book, and the first (and last) to mention Team Topologies.


The talk adopted a “Star Wars-inspired” story format (which, I guess, resonates with all the Unicorn Project, Phoenix Project, etc. readers interested in agile/lean processes). I usually find this style annoying, but this time it was well done, and Geoffrey’s acting skills are really impressive.

The problem is classical: the organisation (the top) asks for KPIs to measure software development performance that feel artificial when projected onto the real life of the team (even if, in this case, the top is enlightened enough to be aware of the DORA metrics).

The solution is new: instead of the usual “the top/business doesn’t understand, so explain to them or ignore them”, what Geoffrey offers is: look into your real process, measure the elements that are available and part of your reality, and correlate them with the requested KPIs – even if they won’t fully answer the request (or, in any case, this is my interpretation).

Side Note: the sandwich management method

Conclusion

From the talks I attended, a general pattern emerges: a renewed focus on measurement, both for processes (DORA) and for software (OpenTelemetry).

I’d like to think this focus is driven by Agile’s principle of Inspect and Adapt becoming second nature in the industry (but it is hard to defend this theory in practice, knowing how few true Agile organisations are around…)

Of course, for me, Voxxed Days is more than talks and technology – it’s about the people, about meeting familiar faces – people you worked with, or just crossed at a meet-up in the city, who share the same passion for technology, the software community, and the society we are building through our work.

In summary: a very good first day – I learned useful things, and I’m happy to see that the DORA metrics, the Accelerate book, and even really new (2019!) ideas like Team Topologies are finally making inroads into the small corner of the global software industry that Luxembourg represents!

How Google <really> runs production systems

#articles #cloud

If there is a book that engineers working in operations have (started to) read in the past 8 years, chances are that the book’s title is Site Reliability Engineering, with the subtitle HOW GOOGLE RUNS PRODUCTION SYSTEMS… except that it doesn’t, as we will see below.

« During the initial deployment of a Google Cloud VMware Engine (GCVE) Private Cloud for the customer using an internal tool, there was an inadvertent misconfiguration of the GCVE service by Google operators due to leaving a parameter blank. This had the unintended and then unknown consequence of defaulting the customer’s GCVE Private Cloud to a fixed term, with automatic deletion at the end of that period. »

The customer in this case was UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members.

UniSuper had the nasty surprise, on the 2nd of May 2024, of seeing its multi-geography cloud infrastructure… disappear. It took them almost two weeks (until the 15th of May) to be fully back online – and that mostly thanks to offline backups (see this article from Ars Technica).

Read the official account from Google here.