Well on the Wee!

Introducing my new, early-morning-before-the-kids-wake-up, AI-assisted project: Wee.lu (Wee means “path” in Luxembourgish).

The Wee

In June 2025, I came across the story of the Zebra30 project. ZUG (Zentrum fir Urban Gerechtegkeet) had crowdsourced information about dangerous crossings in the city — and when the city refused to share its own analysis on crossing compliance, ZUG took them to court. After nearly four years of legal proceedings, they won, and turned that hard-won data access into a map letting residents see how they’re affected — giving them the power to pressure the administration into fixing things. The win mattered on multiple levels: it’s how many people first heard about ZUG (myself included), and it exposed just how far public data access still has to go. It may also have paved the way for broader access, establishing that: “A database that describes ‘a factual situation at a specific moment in time’ is public” (see here).

Around the same time, I “attended” the SciNoj #1 virtual conference and was struck by Heather Moore-Farley’s session on “The Impact of Lane Reductions”. She showed how detailed, accurate public data — specifically, California crash data — analyzed and visualized with Clojure could help a community drive real change and improve safety. After 23 years of building software professionally, and all the talk about how “software is eating the world,” the belief that software could genuinely change things for the better — the very idea that had drawn me into this field — had somehow quietly faded. This talk, and the energy of the conference in general, rekindled that original spark.

What I felt the Zebra30 map was missing was traffic context: a way to measure the real consequences of current conditions, to show how risky each area truly is. A dangerous crossing in a quiet, remote corner of the city is arguably less urgent than one right next to a known accident hotspot. I wanted to add that layer — so I started looking for data on the geographical distribution of accidents across Luxembourg.

Luxembourg has an open data policy (mandated by the Law of 14 September 2018), and several administrations do publish data for open access. Yet I couldn’t find detailed, historically accurate accident location data. There’s simply no public view of where accidents happen across the Grand Duchy.

And yet… this data does exist. Accidents are traffic events, and traffic events are reported in real time by multiple sources in Luxembourg: acl.lu, cita.lu, rtl.lu/mobiliteit/trafic. What’s missing is the memory of these events — specifically, when and where they happened. Someone needs to record that; someone just needs to remember. And memory is everything: LLMs, if nothing else, have made abundantly clear over the past few years just how essential full context is to correctly planning the next step.
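The “remembering” step is tiny once someone decides to do it. Here is a minimal sketch of the idea (the field names and shapes are hypothetical, not what Wee.lu actually does): take whatever a live feed returns and record the first time each event was seen, together with its location.

```python
import time

def remember(events, archive):
    """Record each live traffic event with the moment we first saw it.

    `events` is a list of dicts as a feed might return them (an `id` and a
    `location` field are assumed here); `archive` maps event id to its
    first-seen record and plays the role of the persistent store.
    """
    now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    for ev in events:
        # setdefault keeps the original record: re-seeing the same event on
        # the next poll must not overwrite its first-seen timestamp.
        archive.setdefault(ev["id"], {"where": ev["location"], "when": now})
    return archive
```

Run something like this from a scheduled job, dump the archive to cloud storage, and the memory exists.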

But transforming live, real-time events into context — assembling that memory — used to take money, skill, and serious effort. At least, that used to be the case.

“The Cloud” has spent the last decade driving down the cost and technical barrier to running persistent services and storing large amounts of data. Between GitHub Actions (thousands of free minutes) and Google Cloud Storage (tens of GB for a few cents), cost is no longer a barrier. And complexity? You used to need a machine somewhere, Linux skills, the whole deal. Now most of it is a few clicks or a few lines of YAML.

The resources are there. The serverless platforms are ready. The storage is waiting. But you still needed to write the code, test it, and know how to handle every layer of a working solution: back-end, front-end, operations, design, security. Even if you were comfortable with some of it, chances are you dreaded the rest — or simply weren’t good at it.

Enter Claude Code, Codex, Cursor, and friends. They don’t dread any of it — if anything, they’re a little too eager, sometimes reaching for things they shouldn’t touch. They’re not perfect at everything, but if you don’t rush, don’t try to do it all in one go, and force them to stay within guardrails — steering them with your real-world experience of what actually works (and what doesn’t), which is now the main missing ingredient — they’ll help you build all the pieces of a working solution.

The emphasis has shifted slightly: less about raw energy, skill, and curiosity; more about clear vision, experience with the full software development cycle, and a method adapted to these new tools. Not that the first set no longer matters — it’s just increasingly offset by what the tools can do. And there’s a useful side effect of carefully managing context and history for your code assistant: you can always pick up right where you left off, whenever you have a spare hour. That’s ideal when your project only gets early mornings.

Where things stand (early March 2026):

  • I’ve been collecting accident data since July 2025 — so I’m finally starting to have enough to spot some trends.
  • I have a first clean version of the visualization map, with the ZUG data layered in for context.

There are still bugs, some of which I know about:

  • Translation errors in the interface
  • Selection issues that vary depending on screen size and zoom level
  • … and I’m sure plenty I don’t …

What’s next:

  • Continue updating the data monthly
  • Experiment with different visualisations (maybe highlighting road segments?)
  • Improve data consolidation so that nearby markers on the same road don’t cluster awkwardly
  • Improve positioning at intersections, to more accurately indicate which road is affected
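The naive version of the consolidation idea can be sketched in a few lines — a grid-snapping toy, not the road-aware logic the site would really need, and the cell size is an arbitrary assumption:

```python
def consolidate(points, cell=0.0005):
    """Group markers whose lat/lon fall into the same grid cell.

    `points` is a list of (lat, lon) tuples; `cell` is the cell size in
    degrees (roughly 50 m — an arbitrary choice). Returns one group of
    points per occupied cell.
    """
    groups = {}
    for lat, lon in points:
        key = (round(lat / cell), round(lon / cell))
        groups.setdefault(key, []).append((lat, lon))
    return list(groups.values())
```

This merges near-duplicates, but it also shows why the bullet above says “awkwardly”: two markers on opposite sides of a cell boundary still land in different groups, which is where road-segment data would help.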

Further evolution (my current ideas – others are welcome!):

  • Could Wee.lu use historic Meteolux weather datasets to show how rainy days compare to dry ones? Or to see if certain roads are more accident-prone under specific conditions — snow, for instance?
  • Maybe add some ways to select specific time frames for the visualisations: periods, night/day selection.
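The time-frame idea reduces to a filter over timestamps. A minimal sketch, assuming each accident record carries an ISO-8601 timestamp (the 20:00–06:00 night cut-off is an arbitrary assumption):

```python
from datetime import datetime

def in_window(ts, start, end, night_only=False):
    """True if the ISO timestamp `ts` falls within [start, end],
    optionally restricted to night hours (20:00-06:00)."""
    t = datetime.fromisoformat(ts)
    if not (datetime.fromisoformat(start) <= t <= datetime.fromisoformat(end)):
        return False
    if night_only and 6 <= t.hour < 20:
        # daytime record rejected when only night accidents are wanted
        return False
    return True
```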

Of course, all of this means more early mornings — so I can’t really say when any of these ideas will actually make it onto the site.

An early conclusion:

It seems that now the present can become actionable memory – community memory, community tools: Eric S. Raymond’s dream of the citizen programmer is closer than ever. Will this dream transform our communities and make our lives better, fairer, happier? Or are we going to drown in AI slop? I cast my vote for the first – while admittedly risking contributing to the second…

What do you think? Take a look at Wee.lu and let me know in the comments.

A day at Voxxed Days Luxembourg 2024

The big boys have their Devoxxes and KubeCons; Luxembourg has the Voxxed Days Luxembourg conference which, as we like to think about a lot of things here in Luxembourg, is for sure smaller but maybe also a bit more refined… maybe.

Thursday, 20 June 2024, was the first day of Voxxed Days Luxembourg 2024 – the main local software conference, organized by the Java User Group of Luxembourg – and what follows are my notes from that day.

1.

Showing the importance of the space industry for the Luxembourg software community (or just confirming that space tech is cool), the conference was kick-started by Pierre Henriquet’s keynote, Les robots de l’espace.

… It’s true that – to bring the audience back down to earth – software is mentioned twice as the thing that killed a robot in space: first on 13 November 1982, when the Viking 1 mission was ended by a software update, and then in 1999, when a units mismatch between the implementations provided by different NASA suppliers killed the Mars Climate Orbiter.

Still, the talk ends on a positive note for the viability of software in space: last week the Voyager 1 team managed to update the probe’s software to recover the data from all science instruments: an update deployed 24 billion kilometers away on hardware that has been traveling through space for 47 years.

2.

I stay in the main room, where Henri Gomez presents “SRE, Mythes vs Réalités” – a talk that (naturally) refers back heavily to Google’s “SRE book”, the book that kickstarted the entire label.

For me, the talk confirms again that, for a lot of companies, SRE is either the old Ops with some new vocabulary, or a label to put on everything they do.

My personal view on SRE is that (similar to the security “shift left” that brought us DevSecOps) it is about mindset, tools, and techniques – not about a job position… Unfortunately (as with DevSecOps), organisations find it easier to adopt the label as a role than to actually make the mindset shift and take on the new tools, processes, and techniques required.

First mention of the day of the Accelerate book, the DORA Metrics and the 4 Software Delivery Performance metrics.

3.

Next, I went to a smaller conference room for the “How We Gained Observability Into Our CI/CD Pipeline” talk by Dotan Horovits.

This was the second mention of the DORA metrics.

One of the main ideas of the talk was that CI/CD pipelines are part of the production environment and should be monitored, traced, and analysed for issues and performance just like production systems. This should not be a new idea: if what you produce is software, then your production pipeline is crucial. The reality, though, is that many companies still treat the CI/CD pipeline as something that is mainly the Dev team’s responsibility, with the freedoms and risks that this entails.

Dotan goes into some detail on how they instrumented their pipelines:

  1. Collect data on each CI/CD pipeline run into environment variables:
    • create a summary step in the pipeline that collects all the info from the run and stores it in Elasticsearch
  2. Visualise with Kibana/OpenSearch Dashboards:
    • define your measuring needs – what do you want to find/track?
    • where did the failure happen (branch/machine)?
    • what is the duration of each step?
    • are there steps that take more time than the baseline?
  3. Monitor the actual infrastructure (computers, CPUs, etc.) used to run your CI/CD pipelines, using Telegraf and Prometheus. For more info, check the guide.
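The first step above – collecting run data from environment variables – is simple in any language. Here is a hedged Python sketch using the variables GitHub Actions sets on every runner; the exact document shape is my own guess, not the one from the talk, and shipping it to Elasticsearch would be a separate request, omitted here:

```python
import json
import os

def collect_run_info():
    """Gather CI/CD run metadata from the environment variables that
    GitHub Actions sets on every runner (other CI systems expose
    equivalents)."""
    return {
        "pipeline": os.environ.get("GITHUB_WORKFLOW", "unknown"),
        "run_id": os.environ.get("GITHUB_RUN_ID", "unknown"),
        "branch": os.environ.get("GITHUB_REF_NAME", "unknown"),
        "commit": os.environ.get("GITHUB_SHA", "unknown"),
        "runner": os.environ.get("RUNNER_NAME", "unknown"),
    }

if __name__ == "__main__":
    # In a real summary step this JSON document would be indexed into
    # Elasticsearch; printing it is enough to see what gets collected.
    print(json.dumps(collect_run_info(), indent=2))
```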

The talk even included a short but effective introduction to Distributed Tracing and OpenTelemetry, which are used to increase visibility at every layer. (OpenTelemetry was another frequent theme in the talks at the conference – and Dotan is well positioned to talk about it, as he runs the OpenObservability Talks podcast.)

4.

I then checked out the Démystifions le fonctionnement interne de Kubernetes talk. It was actually a demo, by Denis Germain, of how you can deploy a Kubernetes cluster one control-plane brick at a time.

As somebody who was thrown into using and working with Kubernetes without the time to thoroughly understand its internals, I approached this session as a student and took the opportunity to revisit basic K8s concepts.

A note on the general structure of a K8s resource:

apiVersion: <>/v1
kind: Deployment/Service/Ingress/...
metadata:
  name: <>
  labels:
    key: value
spec:
  replicas: x

A cool new (for me) tool for manipulating digital certificates: CFSSL – plus the fact that Traefik was the preferred ingress controller.

The Kubernetes Control Plane components deployed during this short talk:

  • Kube API Server: the entry point into the infrastructure
  • Kube Controller Manager: the binary that runs control loops for all the APIs that exist in K8s, reconciling the difference between current state and target state
  • Kube Scheduler: the planner/workflow manager: the one that actually creates the jobs required by the Kube Controller Manager to align the current state with the target state
  • Kubelet: the agent on the node: the one that actually executes the tasks planned by the Kube Scheduler. One of the first things it does is register itself (and the node) with the Kube API. containerd is the container engine that actually starts the containers
  • CNI (Container Network Interface): the element that manages the communication between containers
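The control-loop idea at the heart of the Kube Controller Manager can be caricatured in a few lines – a toy sketch of the reconciliation pattern, not real Kubernetes API calls:

```python
def reconcile(target_replicas, current_replicas):
    """One pass of a control loop: compare the target state with the
    current state and return the actions needed to close the gap."""
    diff = target_replicas - current_replicas
    if diff > 0:
        return ["create-pod"] * diff      # scale up
    if diff < 0:
        return ["delete-pod"] * (-diff)   # scale down
    return []                             # converged: nothing to do
```

The real loops run continuously, one per resource type; the scheduler and the kubelet then turn these intents into placed, running containers.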

5.

Next: Réagir à temps aux menaces dans vos clusters Kubernetes avec Falco et son écosystème with Rachid Zarouali and Thomas Labarussias.


What I found:

  • Falco is made by Sysdig – and Sysdig was founded by Loris Degioanni. My first major professional achievement was moving a network stack (re-implementing a subset of the ARP, IP, and UDP protocols) onto WinPcap: using WinPcap both to “sniff” Ethernet packets and to transmit them with high throughput and precise timing (NDIS drivers… good times!). WinPcap was the base of Ethereal, which then became Wireshark, and that lineage now continues to live in Falco… I never expected to find echoes of my career beginnings here.
  • Falco is (now?) eBPF-based while still supporting older kernel versions (even pre-Kubernetes ones), which makes sense given a lineage that can be traced back to libpcap
  • Falco is a newly (2024) graduated CNCF project
  • Falco plugins are signed with cosign
  • falcosidekick – a plugin that adds additional functionality – created by Thomas Labarussias

Integrating the whole Falco ecosystem, you get: detection -> notification -> falcosidekick (actions?) -> reaction (falco-talon)

  • Falco Talon is a response engine (you could implement an alternative response engine with Argo CD, but Falco Talon has the potential for better integration): github.com/falco-talon/falco-talon

Side Note: there is a cool K8s UI tool that was used by Thomas Labarussias during the demo: https://k9scli.io

6.

Next: Becoming a cloud-native physician: Using metrics and traces to diagnose our cloud-native applications (Grace Jansen – IBM)

The general spirit of the talk was that in order to be able to correctly diagnose the behavior of your distributed application(s) you always need: context/correlation + measurements.

The measurements are provided using:

  • instrumentation – instrument systems and apps to collect relevant data:
    • metrics – the talk emphasised the use of microprofile.io: an open-source community specification for Enterprise Java – “Open cloud-native Java APIs”
    • traces
    • logs
    • (+ end-user monitoring + profiling)

Storing the measurements is important: send the data to a separate external system that can store and analyse it – Jaeger, Zipkin

Visualize/analyze: provide visualisations and insights into systems as a whole (Grafana?)

Back to Distributed Tracing and OpenTelemetry:

A trace contains multiple spans. A span contains: a span_id, a name, time-related data, log messages, and metadata giving information about what occurs during a transaction – plus the context, an immutable object carried in the span data that identifies the unique request each span is part of: trace_id, parent_id.
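That data model can be sketched minimally like this – illustrative dataclasses, not the actual OpenTelemetry API:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SpanContext:
    """Immutable: identifies the unique request a span belongs to."""
    trace_id: str
    parent_id: str  # empty string for the root span

@dataclass
class Span:
    name: str
    context: SpanContext
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    start_time: float = field(default_factory=time.time)
    attributes: dict = field(default_factory=dict)

# One trace = every span sharing a trace_id, linked via parent_id:
trace_id = uuid.uuid4().hex
root = Span("GET /checkout", SpanContext(trace_id, ""))
child = Span("SELECT orders", SpanContext(trace_id, root.span_id))
```

The frozen SpanContext mirrors the “immutable object” part: spans come and go, but the identifiers tying them to one request never change.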

The talk ended with a demo of how all these elements come together in OpenLiberty (OpenLiberty is the new, improved, stripped-down WebSphere).

Side Note: an interesting online demo/IDE tool – Theia IDE.

7.

Next: Au secours ! Mon manager me demande des KPIs ! by Geoffrey Graveaud

The third talk mentioning the DORA metrics, the second mentioning the Accelerate book, and the first (and last) mentioning Team Topologies.


The talk adopted a Star Wars-inspired story format (which resonates, I guess, with all The Phoenix Project/The Unicorn Project readers interested in agile/lean processes). I usually find this style annoying, but this time it was well done – Geoffrey’s acting skills are really impressive.

The problem is classical: the organisation (the top) asks for KPIs to measure software development performance – KPIs that are artificial when projected onto the real life of the team (even if, in this case, the top is enlightened enough to be aware of the DORA metrics).

The solution is new: instead of going down the usual “the top/business doesn’t understand, so explain it to them or ignore them” route, what Geoffrey proposes is this: look into your real process, measure the elements that are available and part of your reality, and correlate them with the requested KPIs – even if they will not fully answer the request (or, in any case, that is my interpretation).

Side Note: the sandwich management method

Conclusion

From the talks I attended, a general pattern emerges: a renewed focus on measurement – for processes (DORA) and for software (OpenTelemetry).

I’d like to think this focus is driven by Agile’s principle of Inspect and Adapt becoming second nature in the industry (but it is hard to defend this theory in practice, knowing how few true Agile organisations are around…)

Of course, for me, Voxxed Days is about more than talks and technology – it’s about the people, about meeting familiar faces: people you’ve worked with, or just crossed at a meet-up in the city, who share the same passion for technology, the software community, and the society we are building through our work.

In summary: a very good first day – I learned useful things, and I’m happy to see that the DORA metrics, the Accelerate book, and even really new (2019!) ideas like Team Topologies are finally making inroads into the small corner of the global software industry that Luxembourg represents!