Well on the Wee!

Introducing my new, early-morning-before-the-kids-wake-up, AI-assisted project: Wee.lu (Wee means “path” in Luxembourgish).

The Wee

In June 2025, I came across the story of the Zebra30 project. ZUG (Zentrum fir Urban Gerechtegkeet) had crowdsourced information about dangerous crossings in the city — and when the city refused to share its own analysis on crossing compliance, ZUG took them to court. After nearly four years of legal proceedings, they won, and turned that hard-won data access into a map letting residents see how they’re affected — giving them the power to pressure the administration into fixing things. The win mattered on multiple levels: it’s how many people first heard about ZUG (myself included), and it exposed just how far public data access still has to go. It may also have paved the way for broader access, establishing that: “A database that describes ‘a factual situation at a specific moment in time’ is public” (see here).

Around the same time, I “attended” the SciNoj #1 virtual conference and was struck by Heather Moore-Farley’s session on “The Impact of Lane Reductions”. She showed how detailed, accurate public data — specifically, California crash data — analyzed and visualized with Clojure could help a community drive real change and improve safety. After 23 years of building software professionally, and all the talk about how “software is eating the world,” the belief that software could genuinely change things for the better — the very idea that had drawn me into this field — had somehow quietly faded. This talk, and the energy of the conference in general, rekindled that original spark.

What I felt the Zebra30 map was missing was traffic context: a way to measure the real consequences of current conditions, to show how risky each area truly is. A dangerous crossing in a quiet, remote corner of the city is arguably less urgent than one right next to a known accident hotspot. I wanted to add that layer — so I started looking for data on the geographical distribution of accidents across Luxembourg.

Luxembourg has an open data policy (mandated by the Law of 14 September 2018), and several administrations do publish data for open access. Yet I couldn’t find detailed, historically accurate accident location data. There’s simply no public view of where accidents happen across the Grand Duchy.

And yet… this data does exist. Accidents are traffic events, and traffic events are reported in real time by multiple sources in Luxembourg: acl.lu, cita.lu, rtl.lu/mobiliteit/trafic. What’s missing is the memory of these events — specifically, when and where they happened. Someone needs to record that; someone just needs to remember. And memory is everything: LLMs, if nothing else, have made abundantly clear over the past few years just how essential full context is to correctly planning the next step.

But transforming live, real-time events into context — assembling that memory — used to take money, skill, and serious effort. At least, that used to be the case.

“The Cloud” has spent the last decade driving down the cost and technical barrier to running persistent services and storing large amounts of data. Between GitHub Actions (thousands of free minutes) and Google Cloud Storage (tens of GB for a few cents), cost is no longer a barrier. And complexity? You used to need a machine somewhere, Linux skills, the whole deal. Now most of it is a few clicks or a few lines of YAML.

The resources are there. The serverless platforms are ready. The storage is waiting. But until recently, you still had to write the code, test it, and know how to handle every layer of a working solution: back end, front end, operations, design, security. Even if you were comfortable with some of it, chances are you dreaded the rest, or simply weren’t good at it.

Enter Claude Code, Codex, Cursor, and friends. They don’t dread any of it — if anything, they’re a little too eager, sometimes reaching for things they shouldn’t touch. They’re not perfect at everything, but if you don’t rush, don’t try to do it all in one go, and force them to stay within guardrails — steering them with your real-world experience of what actually works (and what doesn’t), which is now the main missing ingredient — they’ll help you build all the pieces of a working solution.

The emphasis has shifted slightly: less about raw energy, skill, and curiosity; more about clear vision, experience with the full software development cycle, and a method adapted to these new tools. Not that the first set no longer matters — it’s just increasingly offset by what the tools can do. And there’s a useful side effect of carefully managing context and history for your code assistant: you can always pick up right where you left off, whenever you have a spare hour. That’s ideal when your project only gets early mornings.

Where things stand (early March 2026):

  • I’ve been collecting accident data since July 2025 — so I’m finally starting to have enough to spot some trends.
  • I have a first clean version of the visualization map, with the ZUG data layered in for context.

There are still bugs, some of which I know about:

  • Translation errors in the interface
  • Selection issues that vary depending on screen size and zoom level
  • … and I’m sure plenty I don’t …

What’s next:

  • Continue updating the data monthly
  • Experiment with different visualisations (maybe highlighting road segments?)
  • Improve data consolidation so that nearby markers on the same road don’t cluster awkwardly
  • Improve positioning at intersections, to more accurately indicate which road is affected

Further evolution (my current ideas – others are welcome!):

  • Could Wee.lu use historic Meteolux weather datasets to show how rainy days compare to dry ones? Or to see if certain roads are more accident-prone under specific conditions — snow, for instance?
  • Maybe add some ways to select specific time frames for the visualisations: periods, night/day selection.

Of course, all of this means more early mornings — so I can’t really say when any of these ideas will actually make it onto the site.

An early conclusion:

It seems the present can now become actionable memory: community memory, community tools. Eric S. Raymond’s dream of the citizen programmer is closer than ever. Will this dream transform our communities and make our lives better, fairer, happier? Or are we going to drown in AI slop? I cast my vote for the first, while admittedly risking contributing to the second…

What do you think? Take a look at Wee.lu and let me know in the comments.

Scratching the Vibe – The Vibe

Where the code meets the real world

Building, packaging, and getting a real application running on different platforms and different machines: that’s where your code meets the real world. Back in my programming days, that’s where most of my debugging hours went.

At first, using code assistants to build obsidize made it feel like this time would be different.

I discovered Clojure in 2017. The company I worked for was going through an “agile transformation” and, to motivate everyone, handed out copies of The Phoenix Project by Gene Kim. Clojure only gets a few lines in the book—but it’s the language that saves the company, and I took notice. I was already transitioning into coordination/leadership roles, and “getting Clojure” was my way to stay connected to the implementation side. I read the books, built toy apps, but I never shipped a complete, production-ready Clojure application.

In my software developer days, I deployed Java applications to production—almost exclusively back end, single platform, with an Ops team close by.

This time, the Ops team would be Claude, ChatGPT, and Gemini.

The Team

I’ve used Claude-Code since the beta phase. I “got in” at the end of January 2025 and was immediately blown away.

Claude—and most AI tools in VS Code—got a flurry of updates in the first half of August. Every two or three days the Claude-Code extension added something new: agentic mode with custom personas, security-review for uncommitted changes or CI/CD integration, etc. Cool, powerful features that often made me think whatever I did two days earlier could’ve been done easier if I’d just waited.

Occasionally, changes introduced regressions or at least odd behavior. After one update, Claude started committing and pushing changes without my approval. The security-review feature, out of the box, expects the default branch to be set (git remote set-head) and otherwise fails with a cryptic message.
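For what it’s worth, the fix for that cryptic failure is a one-liner. A sketch, run from inside your clone, assuming the remote is named origin:

```shell
# Check whether the remote's default branch (HEAD) is recorded locally:
git remote show origin | grep 'HEAD branch'

# If it shows "(unknown)", query the remote and set it automatically:
git remote set-head origin --auto
```

After that, tools that resolve the default branch via origin/HEAD stop failing.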

There’s a lot to like about Claude-Code, but for me the killer feature is custom MCP servers—especially how clojure-mcp lets it navigate, modify, and test Clojure applications. So for most of the code development I used Claude-Code—in VS Code or the CLI—jumping out when usage limits kicked in or I wanted a second opinion, which I usually got from a specialized ChatGPT, Clojure Mentor.

For build, packaging, and deployment, I leaned on Gemini and another specialized model, DevOps ChatGPT—also because, at the beginning of August, Claude felt less capable in those areas.

Aiming high

My target was what I ask of any tool: fast, secure, reliable, easy to install, update, and use.

For a Clojure application, fast means GraalVM: melting the Java fat and startup overhead into a native image with startup on par with C/C++. Secure means up-to-date dependencies and vulnerability scanning. Reliable means better logging. And easy to install and update, on macOS, means Homebrew.

My AI team supported those choices (they always do!) and gave me starter code and guidance for building the native image and a Homebrew distribution. For Windows, they suggested Chocolatey—new to me, but it sounded right.

Getting the CI/deployment pipeline right for macOS (ARM) was relatively easy. Developing on my M2 Air, I could test and fix as needed. I didn’t focus on other platforms at first; I wanted a quick “walking skeleton” with the main gates and stages, then I’d backfill platform support.

The GraalVM build was trickier. I fumbled with build and initialization flags but eventually got it to compile.
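For reference, the kind of invocation involved. This is a sketch, not the exact flags I ended up with; the jar and image names are hypothetical, and the flags shown are the ones Clojure native-image builds commonly need:

```shell
# Build a standalone binary from an AOT-compiled uberjar (app.jar is a placeholder).
# --no-fallback forces a real native image instead of a JVM-backed fallback;
# --initialize-at-build-time is the flag Clojure builds typically require.
# Finding the right combination of these is where the fumbling happened.
native-image -jar app.jar \
  --no-fallback \
  --initialize-at-build-time \
  app
```

As the next section shows, a combination that compiles is not necessarily one that runs.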

In a few steps I had a Homebrew tap ready to install. Hurray!

Reality check

And then reality kicked in. The native image wouldn’t run. The flags I used made the build pass but execution fail. Cue a round of whack-a-mole, primarily with Gemini.

One interesting Gemini-in-VS-Code detail: at the beginning of August, the extension’s interaction model was chat; after an update, it introduced agentic mode. Proof of how fast assistants evolve—and a chance to feel both the pros and cons of each mode.

Before the mid-August update, chats with Gemini 2.5 Pro were fruitful and generally more accurate than other models—just a lot slower. I didn’t measure it, but it felt really slow. Still, solid answers were worth the wait.

After the August 15 release, chat began to… silently fail: after a long reflection, no answer. So I switched to the agent.

Agentic Gemini looked promising: deep VS Code integration (you see diffs as they happen, which Claude-Code later matched), MCP servers, and a similar configuration to Claude—so wiring up clojure-mcp was easy. However, it just didn’t do Clojure or Babashka well. It got stuck in parentheses hell at almost every step. Sessions ended with tragic messages like, “I’m mortified by my repeated failures…” Eventually I felt bad for it. In their (troubling?) drive to make LLMs sound human, Google had captured the awkwardness of incompetence a little too well.

I started to panic. Everything had moved so fast, and I had so many things started—so many “walking skeletons”—with only a few vacation days left. I realized “my AI team” wasn’t going to get me over the finish line alone.

Vacations are too short

The leisurely vacation development phase had to end. I began waking up earlier for uninterrupted work before 8 a.m., skipping countryside trips and extended family visits to squeeze in a few more hours each day.

At that point, shipping was about testing, running, iterating—not generating more lines of code.

The failing native image led me to add a post-packaging end-to-end validation stage, to catch these issues earlier.
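Conceptually, that validation stage boils down to something like this (artifact path, subcommands, and sample file are all hypothetical; the point is to exercise the packaged artifact, not the source tree):

```shell
# Post-packaging end-to-end check: run the actual built artifact,
# so packaging/runtime failures surface in CI rather than at install time.
set -euo pipefail
./dist/app --version                      # does the binary start at all?
./dist/app import --dry-run sample.zip    # does a real code path run end to end?
```

A build that compiles but fails here is exactly the failure mode the native image had.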

After a few tries, the culprit seemed to be the Cheshire JSON library. With no better theory, I switched to the standard clojure.data.json. Unit, integration, and E2E tests made the change a matter of minutes—but it didn’t fully resolve the native-image issues.

All the while I looped through ChatGPT, Claude, Gemini, and old-school Googling: find plausible theories, try fixes, check whether anything improved. This isn’t unique to LLMs—developers have done this for decades to ship in ecosystems they don’t fully control. If it works, it works.

Finally, I got my first successful Homebrew install and full import on macOS ARM. It seemed feasible again to finish before vacation ended.

Then I added Windows and Linux packaging—and everything failed again. I told myself it was a platform access issue, bought a Parallels Desktop license… and then remembered I’m on ARM trying to target x86. Not going to work.

LLMs gave me speed, but the bottleneck was the real world: deployment time, runtime, access to hardware. Without x86 macOS, Linux, or Windows boxes, I cut scope: a native-image Homebrew package for macOS ARM (my main platform), and a self-contained runtime image via jlink for the other platforms. Even that was painful on Windows, where builds kept failing and GPTs kept getting confused by basic shell automation. Final plan: native image for macOS ARM (done), jlink package (JRE + JAR) for macOS x64 and Linux arm64, and just the JAR for Windows. Time to stop and get back to reality.
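For the curious, a jlink package is roughly this. A sketch, assuming an uberjar named app.jar (a placeholder) and a module list that happens to cover the app’s dependencies:

```shell
# Assemble a trimmed JRE containing only the modules the app actually uses.
jlink \
  --add-modules java.base,java.sql,java.logging \
  --strip-debug --no-header-files --no-man-pages \
  --output runtime

# Ship the runtime directory plus the jar; users run it without installing Java:
./runtime/bin/java -jar app.jar
```

The trade-off versus a native image: no GraalVM build headaches, but you pay the JVM startup cost and ship a bigger artifact.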

Release It!

Stopping is harder than it sounds in the world of code assistants. When tools get more powerful every day—and solve problems that look as hard as the ones left—it always feels like the happy ending is one prompt away. But our complexity isn’t LLM complexity, and vacations are short.

So I stopped. I polished the code, ran multiple reviews, improved the docs. The functionality was good enough for me. I’d done what I set out to do at the beginning of those two weeks. And somewhere along the way, I learned to stop worrying and love to vibe.

Obsidize your Claude conversations

This summer I built obsidize: a command-line tool that imports Claude conversations and projects as notes into Obsidian.

Obsidize is a small summer vacation project that allowed me to experiment with new ways of building software and has a few nice features:

  • 🔄 Incremental Updates: Detection of new and updated content – only processes what’s changed
  • 🗂️ Structured Output: Creates an organized, “Obsidian friendly” folder structure for your conversations and projects
  • 🏷️ Custom Tagging: Allows adding custom Obsidian tags and links to imported content via the command line
  • 🔄 Sync-Safe: doesn’t use any local/external state files so the incremental updates can be run from different devices
  • 🔍 Dry Run Mode: Allows previewing changes before applying them

If you are looking for a way to back up your conversations from Claude to Obsidian, you should take a look.

It can be installed using Homebrew on macOS and Linux, or using its “universal jar” on any platform with a JRE installed (Java 21+). Check the release page.
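In practice, installation looks something like this. The tap name is a placeholder and the exact invocation may differ; check the release page for the real commands:

```shell
# macOS / Linux via Homebrew (<your-tap> is a hypothetical tap name):
brew install <your-tap>/obsidize

# Any platform with Java 21+, using the universal jar from the releases:
java -jar obsidize.jar
```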

I wrote more about how this project came about here.

A happy Parisian mime is getting obsidized