Well on the Wee!

Introducing my new, early-morning-before-the-kids-wake-up, AI-assisted project: Wee.lu (Wee means “path” in Luxembourgish).

The Wee

In June 2025, I came across the story of the Zebra30 project. ZUG (Zentrum fir Urban Gerechtegkeet) had crowdsourced information about dangerous crossings in the city — and when the city refused to share its own analysis on crossing compliance, ZUG took them to court. After nearly four years of legal proceedings, they won, and turned that hard-won data access into a map letting residents see how they’re affected — giving them the power to pressure the administration into fixing things. The win mattered on multiple levels: it’s how many people first heard about ZUG (myself included), and it exposed just how far public data access still has to go. It may also have paved the way for broader access, establishing that: “A database that describes ‘a factual situation at a specific moment in time’ is public” (see here).

Around the same time, I “attended” the SciNoj #1 virtual conference and was struck by Heather Moore-Farley’s session on “The Impact of Lane Reductions”. She showed how detailed, accurate public data — specifically, California crash data — analyzed and visualized with Clojure could help a community drive real change and improve safety. After 23 years of building software professionally, and all the talk about how “software is eating the world,” the belief that software could genuinely change things for the better — the very idea that had drawn me into this field — had somehow quietly faded. This talk, and the energy of the conference in general, rekindled that original spark.

What I felt the Zebra30 map was missing was traffic context: a way to measure the real consequences of current conditions, to show how risky each area truly is. A dangerous crossing in a quiet, remote corner of the city is arguably less urgent than one right next to a known accident hotspot. I wanted to add that layer — so I started looking for data on the geographical distribution of accidents across Luxembourg.

Luxembourg has an open data policy (mandated by the Law of 14 September 2018), and several administrations do publish data for open access. Yet I couldn’t find detailed, historically accurate accident location data. There’s simply no public view of where accidents happen across the Grand Duchy.

And yet… this data does exist. Accidents are traffic events, and traffic events are reported in real time by multiple sources in Luxembourg: acl.lu, cita.lu, rtl.lu/mobiliteit/trafic. What’s missing is the memory of these events — specifically, when and where they happened. Someone needs to record that; someone just needs to remember. And memory is everything: LLMs, if nothing else, have made abundantly clear over the past few years just how essential full context is to correctly planning the next step.

But transforming live, real-time events into context — assembling that memory — used to take money, skill, and serious effort. At least, that used to be the case.

“The Cloud” has spent the last decade driving down the cost and technical barrier to running persistent services and storing large amounts of data. Between GitHub Actions (thousands of free minutes) and Google Cloud Storage (tens of GB for a few cents), cost is no longer a barrier. And complexity? You used to need a machine somewhere, Linux skills, the whole deal. Now most of it is a few clicks or a few lines of YAML.
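To make the “recording” idea concrete, here is a minimal sketch of the kind of collector a scheduled job could run. Everything here is illustrative: the event fields and the JSONL archive format are my own assumptions, not the actual schema of any Luxembourg traffic feed.

```python
import json
from datetime import datetime, timezone

def to_record(event, observed_at=None):
    """Turn a live traffic event into a permanent, timestamped record.

    The live feeds tell you what is happening *now*; keeping the
    observation time alongside the event is what builds the "memory".
    The event fields are whatever the feed provides (hypothetical here).
    """
    ts = observed_at or datetime.now(timezone.utc)
    return {**event, "observed_at": ts.isoformat()}

def append_snapshot(events, archive_path):
    """Append one JSON line per event; JSONL is cheap and append-friendly."""
    with open(archive_path, "a", encoding="utf-8") as f:
        for event in events:
            f.write(json.dumps(to_record(event)) + "\n")

# Hypothetical event, loosely shaped like a live traffic report:
example = {"type": "accident", "road": "A1", "direction": "Luxembourg"}
```

A cron-style workflow calling something like this a few times a day is all it takes to turn ephemeral reports into a history.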

The resources are there. The serverless platforms are ready. The storage is waiting. But you still needed to write the code, test it, and know how to handle every layer of a working solution: back-end, front-end, operations, design, security. Even if you were comfortable with some of it, chances are you dreaded the rest — or simply weren’t good at it.

Enter Claude Code, Codex, Cursor, and friends. They don’t dread any of it — if anything, they’re a little too eager, sometimes reaching for things they shouldn’t touch. They’re not perfect at everything, but if you don’t rush, don’t try to do it all in one go, and force them to stay within guardrails — steering them with your real-world experience of what actually works (and what doesn’t), which is now the main missing ingredient — they’ll help you build all the pieces of a working solution.

The emphasis has shifted slightly: less about raw energy, skill, and curiosity; more about clear vision, experience with the full software development cycle, and a method adapted to these new tools. Not that the first set no longer matters — it’s just increasingly offset by what the tools can do. And there’s a useful side effect of carefully managing context and history for your code assistant: you can always pick up right where you left off, whenever you have a spare hour. That’s ideal when your project only gets early mornings.

Where things stand (early March 2026):

  • I’ve been collecting accident data since July 2025 — so I’m finally starting to have enough to spot some trends.
  • I have a first clean version of the visualization map, with the ZUG data layered in for context.

There are still bugs, some of which I know about:

  • Translation errors in the interface
  • Selection issues that vary depending on screen size and zoom level
  • … and I’m sure plenty I don’t …

What’s next:

  • Continue updating the data monthly
  • Experiment with different visualisations (maybe highlighting road segments?)
  • Improve data consolidation so that nearby markers on the same road don’t cluster awkwardly
  • Improve positioning at intersections, to more accurately indicate which road is affected

Further evolution (my current ideas – others are welcome!):

  • Could Wee.lu use historic Meteolux weather datasets to show how rainy days compare to dry ones? Or to see if certain roads are more accident-prone under specific conditions — snow, for instance?
  • Maybe add some ways to select specific time frames for the visualisations: periods, night/day selection.

Of course, all of this means more early mornings — so I can’t really say when any of these ideas will actually make it onto the site.

An early conclusion:

It seems that the present can now become actionable memory: community memory, community tools. Eric S. Raymond’s dream of the citizen programmer is closer than ever. Will this dream transform our communities and make our lives better, fairer, happier? Or are we going to drown in AI slop? I cast my vote for the first – while admittedly risking contributing to the second…

What do you think? Take a look at Wee.lu and let me know in the comments.

A Quick Look at EBIOS Risk Manager

What is EBIOS RM?

EBIOS Risk Manager (EBIOS RM) is the method published by the French National Cybersecurity Agency (ANSSI – “Agence nationale de la sécurité des systèmes d’information”) for assessing and managing digital risks. EBIOS stands for “Expression des Besoins et Identification des Objectifs de Sécurité”, which can be translated as “Description of Requirements and Identification of Security Objectives”. The method is developed and promoted with the support of Club EBIOS, a French non-profit organization focused on risk management that drives the evolution of the method and offers on its website a number of helpful resources for implementing it – some of them in English.

EBIOS RM defines a set of tools that can be adapted, selected and used depending on the objectives of the project, and it is compatible with the reference standards in effect for risk management (ISO 31000:2018) as well as for cybersecurity (ISO/IEC 27000).

Why is it important?

Why use a formal method for your (cyber)security risk analysis instead of just slapping the usual technical cybersecurity solutions (LB + WAF + …) onto your service?

On a (semi)philosophical note – because the first step to improvement is to start from a known best practice and then define and evolve your own specific process.

Beyond the (semi)philosophical reasons, there are the very concrete regulations and certifications you may need to implement right now, and the knowledge that in the future the CRA regulation will require cybersecurity risk analysis (and proof thereof) for all digital products and services offered on the EU market.

OK, so it is important: let’s move on to the next step.

How is it used?

First, a few concepts

In general, the target of any risk management/cybersecurity framework is to guide the organization’s decisions and actions in order to best defend and prepare itself.

While risk/failure analysis is something we all do natively, any formal practice needs to start by defining the base concepts: risk, severity, likelihood, etc.

Risk and its sources:

ISC2 – CISSP provides these definitions:

  • Risk is the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset, and the severity of the damage that could result.
  • A threat is a potential occurrence that may cause an undesirable or unwanted outcome for an organization or for an asset.
  • An asset is anything used in a business process or task.

One of the first formal methods to deal with risk was FMEA (Failure Mode and Effects Analysis; extended with criticality analysis, FMECA), whose use dates back to the 1940s–1950s in the US (see Wikipedia). This is one of the first places where broad severity categories (not relevant / very minor / minor / critical / catastrophic) and likelihood categories (extremely unlikely / remote / occasional / reasonably possible / frequent) were defined.

ANSSI defines 4 levels of severity in EBIOS RM:

G4 – CRITICAL – Incapacity for the company to ensure all or a portion of its activity, with possible serious impacts on the safety of persons and assets. The company will most likely not overcome the situation (its survival is threatened).

G3 – SERIOUS – High degradation in the performance of the activity, with possible significant impacts on the safety of persons and assets. The company will overcome the situation with serious difficulties (operation in a highly degraded mode).

G2 – SIGNIFICANT – Degradation in the performance of the activity with no impact on the safety of persons and assets. The company will overcome the situation despite a few difficulties (operation in degraded mode).

G1 – MINOR – No impact on operations, on the performance of the activity, or on the safety of persons and assets. The company will overcome the situation without too many difficulties (margins will be consumed).

ANSSI defines 4 levels of likelihood:

V4 – Nearly certain – The risk origin will certainly reach its target objective by one of the considered methods of attack. The likelihood of the scenario is very high.

V3 – Very likely – The risk origin will probably reach its target objective by one of the considered methods of attack. The likelihood of the scenario is high.

V2 – Likely – The risk origin could reach its target objective by one of the considered methods of attack. The likelihood of the scenario is significant.

V1 – Rather unlikely – The risk origin has little chance of reaching its objective by one of the considered methods of attack. The likelihood of the scenario is low.
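The two scales above combine naturally into a risk matrix. The sketch below is only an illustration: EBIOS RM does not prescribe a scoring formula or acceptance thresholds – the score cut-offs here are my placeholder choices.

```python
# Numeric weights for the EBIOS RM severity and likelihood scales.
SEVERITY = {"G1": 1, "G2": 2, "G3": 3, "G4": 4}
LIKELIHOOD = {"V1": 1, "V2": 2, "V3": 3, "V4": 4}

def risk_level(severity, likelihood):
    """Combine the two scales into a coarse decision bucket.

    The >= 12 and >= 6 cut-offs are placeholders: in practice each
    organisation sets its own acceptance thresholds.
    """
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "treat immediately"
    if score >= 6:
        return "plan treatment"
    return "monitor / accept"
```

For example, a serious (G3) but only likely (V2) scenario scores 6 and lands in “plan treatment”.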

ANSSI defines some additional concepts:

Risk Origins (RO – similar to Threat Agent/Actor in ISC2 terminology): something that could potentially exploit one or more vulnerabilities.
Feared Events (FE – equivalent to Threats in ISC2 terminology).
Target Objectives (TO): the end results sought by a Threat Agent/Actor.

A side note: quantitative analysis

ISC2 – CISSP recommends using quantitative analysis for risk qualification:


Getting there requires qualifying your asset value – or at least how much a risk realisation would cost you (Single Loss Expectancy) – and then computing an annual loss (Annualized Loss Expectancy), so that you can compare rare but costly events with smaller but more frequent ones.

I think the two methods are compatible, as nothing stops you from defining afterwards some thresholds that map the loss values to a severity class (possibly depending not only on the ALE but also on your budget/risk appetite/risk aversion).
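A minimal sketch of that idea – the ALE formula is the standard CISSP one, but the EUR thresholds mapping it back to a G1–G4 class are invented placeholders:

```python
def annual_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE x ARO: the expected yearly cost of one risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

def ale_to_severity(ale, thresholds=(10_000, 100_000, 1_000_000)):
    """Map an ALE (in EUR) onto a G1..G4 class via org-specific thresholds."""
    for level, limit in zip(("G1", "G2", "G3"), thresholds):
        if ale < limit:
            return level
    return "G4"

# A rare but costly event vs. a frequent but cheap one become comparable:
rare = annual_loss_expectancy(500_000, 0.5)   # once every two years
frequent = annual_loss_expectancy(2_000, 40)  # several times a month
```

Here the rare event still dominates (EUR 250 000/year vs EUR 80 000/year), and both can then be bucketed into a severity class for a combined qualitative view.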

A process of discovery

The risk management methods are, at their core, similar: they all contain a number of steps that help establish what it is you need to protect, what could happen to it, and what could be done to make sure the effects of whatever happens are managed (or at least accepted).

So the steps are, in general (with some variance in order and emphasis):

  • identify assets (data, processes, physical)
  • identify the vulnerabilities associated with your assets
  • identify the threats that exist in your operating environment (taking into account your security baseline)
  • identify the risks and prioritise the actions related to them based on their likelihood and severity
    … rinse and repeat.
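The loop above can be sketched as data. Every asset, threat and score below is a made-up illustration, not an EBIOS RM artefact:

```python
# Each tuple: (asset, threat, likelihood 1-4, severity 1-4) - all invented.
risks = [
    ("customer DB", "SQL injection by external attacker", 3, 4),
    ("payment API", "credential theft via phishing", 2, 3),
    ("customer DB", "accidental deletion by an admin", 1, 2),
]

def prioritise(risks):
    """Order risks by likelihood x severity, highest first."""
    return sorted(risks, key=lambda r: r[2] * r[3], reverse=True)
```

The output of one iteration – the prioritised list – is exactly what feeds the next round of treatment decisions before the cycle repeats.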

To help with this process, EBIOS RM defines five workshops, each with expected inputs, outputs and participants:

Workshop 1:

Workshop 2:

Workshop 3:


Strategic scenario:

  • a potential attack with the system as a black box: how the attack would happen “from the exterior of the system”

Workshop 4:


Operational/Operative Scenarios – identify and describe potential attack scenarios corresponding to the strategic ones, possibly using tools such as STRIDE, OWASP, MITRE ATT&CK, OCTAVE, Trike, etc.

Workshop 5:

Risk Treatment Strategy Options (ISO27005/27001):

  • Avoid (results in a residual risk = 0) – change the context that gives rise to the risk
  • Modify (results in a residual risk > 0): add/remove or change security measures in order to decrease/modify the risk (likelihood and/or severity)
  • Share or Transfer (results in a residual risk that can be zero or greater): involve an external partner/entity (e.g. insurance)
  • Accept (the residual risk stays the same as the original risk)


In Conclusion:

EBIOS RM is a useful tool in cybersecurity management, aligned with the main cybersecurity standards and frameworks.

There are also enough supporting open-access materials (see ANSSI and Club EBIOS) to help conduct the process and produce the required artefacts at each step – templates, guides, etc. – which makes EBIOS RM a prime candidate for adoption in organisations without an already established cybersecurity risk management practice.

The DevOps adventure – my review of “The Unicorn Project”

#devops #agile #lean

Subtitle: A Novel about Developers, Digital Disruption, and Thriving in the Age of Data
Author: Gene Kim
Bought: 30 July 2020


Another book from Gene Kim, The Unicorn Project continues to promote DevOps and agile methods as the solutions to the world’s IT (and corporate) ills. Besides The Phoenix Project, Gene Kim is known (at least) for The DevOps Handbook and Accelerate (written together with Nicole Forsgren and Jez Humble).

The book re-uses the novel format introduced by The Phoenix Project to trace the archetypal trajectory of a DevOps transformation—the enterprise being the fictional “Parts Unlimited” company (another reference to The Goal?)—a company that displays all the evils of a big “legacy company” in which words like ITIL and TOGAF have replaced the beating entrepreneurial heart and poisoned the spirit of the good IT people (/insert sad violin/).

The book is split into three parts:

  • Part 1 (Chapter 1 – Chapter 7) presents the state of crisis in which “Parts Unlimited” finds itself: losing market share, an IT culture dominated by bureaucracy in which the actual IT specialists (Dev, Ops, QAs) have become mindless zombies accepting the absurdity of corporate decay: infinite request chains, roadblocks, pervasive blame games, etc. In these dark times, there is a light of hope: a few engineers that have refused to sell out their dreams and that are organizing “the resistance” to defend IT ideals.
  • Part 2 (Chapter 8 – Chapter 13) shows the resistance becoming visible and being embraced by (some) management; the new hope spreads.
  • Part 3 (Chapter 14 – Chapter 19) depicts the final battle and the happy aftermath.

The DNA of the change (and of the perfect organization) consists of “The Five Ideals”:

  1. Locality and Simplicity
  2. Focus, Flow, and Joy
  3. Improvement of Daily Work
  4. Psychological Safety
  5. Customer Focus

What shocked me is the total dedication that the Devs, Ops, and QAs have to this failed company: Maxine and her co-conspirators spend almost 100% of their (diminished) waking hours working to transform Parts Unlimited, to take responsibility for production deployments, to set up (in rogue mode) servers and deployment processes alike. Maxine’s family is presented as a passive background actor: something that is happening around her while she toils at the corporate laptop to improve the corporation’s IT.

The story brushes off, without much consideration, the security and compliance implications of direct production deployment by the DevOps team. It minimizes the human and logistic cost of operating and supporting high availability services: in The Unicorn Project, the DevOps team is happy to take responsibility for maintaining and supporting the services themselves—no worry for the on-call requirements, for wearing a pager (without any compensation).

In the end, the “rogue” aspect of the corporate transformation and especially its dependency on the employees’ readiness for seemingly endless sacrifice of self and of family time is the most puzzling and self-defeating aspect of the (assumed) objective of promoting the DevOps transformation path.

On one side, it makes the whole story less relevant to anybody looking for ways to start on the DevOps transformation path: any change is viable if you have access to the best people and if these people are willing to provide you endless time and effort… but this is never the case. In my experience, in a company like “Parts Unlimited,” you’ll soon discover that the best people have already left; those that stay around are mainly there because they have concluded that the way things are is… acceptable for some reason: job security, predictability, etc.

On the other side (and in my view, this is the worst part of The Unicorn Project): this is not needed. The “DevOps transformation” is possible by setting clear intermediary steps—steps that have an immediate advantage for the people involved and the company (e.g., fewer incidents in production, thus less personal impact).

By clear communication of the thinking behind the decisions, transparent tracking of the results to grow confidence, and by showing the team a sustainable path towards balancing private life and professional excellence, you don’t need a war—you need diplomacy, leadership, and knowing what you are doing.

Exploring the origins of Lean – my review of “The Goal”

#lean #lean-through-manufacture #business-novel

Why I read it

I found references to it in Lean Enterprise (Jez Humble), and it was also mentioned by Jez in a talk I saw on YouTube (maybe this one).

What it’s about

The road of discovery traveled by the manager of an embattled factory in the US, who finally realises the [[lean]] enterprise principles!

Similar to [[The Phoenix Project]] (which I guess was inspired by it), a hard-working family man is forced to discover the essence of lean in order to save his factory – in the process re-connecting with his team and family… (no kidding!)

What I took from it

Start with identifying the bottlenecks in your delivery process.

The bottlenecks are those activities/resources that are necessary for the end product but have a throughput lower than the expected throughput of the end product. Meaning: if you need to deliver 5 things per day, and some part of your process can only do 3 sub-things per day – that’s a bottleneck. Everything else that has the capacity to deliver at a rate higher than your expected output is a non-bottleneck (duh!)…

The goal would be to have every activity’s throughput equal to the expected delivery throughput – not higher, not lower. Which requires increasing the throughput through the bottlenecks and, possibly, decreasing the throughput through the non-bottlenecks to limit the “work in progress”/inventory.
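A toy model of this insight – stage names and capacities are made up: system throughput is capped by the slowest required step, and any extra capacity upstream just piles up as work-in-progress.

```python
def system_throughput(stage_capacities):
    """Units per day the whole chain can actually deliver: the slowest stage."""
    return min(stage_capacities.values())

def daily_wip_growth(stage_capacities):
    """Inventory each stage piles up per day if allowed to run flat out."""
    bottleneck = system_throughput(stage_capacities)
    return {stage: cap - bottleneck for stage, cap in stage_capacities.items()}

# Made-up capacities, in units/day:
stages = {"cutting": 8, "heat treatment": 3, "assembly": 6}
```

Here the whole plant delivers 3 units/day, and running “cutting” at full speed just adds 5 units/day of inventory – the book’s argument for slowing the non-bottlenecks down.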

After each system change/capacity re-balancing, the process needs to be re-evaluated to identify potentially new bottlenecks, and then the cycle restarts. A lot of emphasis is put on the fact that accelerating the steps/processes that are not bottlenecks – or even allowing them to go at their “natural speed”, and thus faster than the bottlenecks – is wasteful, as it doesn’t contribute to increasing the market throughput. So, in a way, the insight is that you need to block work that is not aligned with the delivery… and for me an immediate, significant realisation was that Kanban is, most of all, a system for blocking the creation of wasteful work.

One way to do this re-balancing would be to flag the work that involves the bottlenecks in a way that gives it higher priority compared with the rest.

Another way is to lower the granularity of the work being done in the ‘non-bottleneck’ processes, so that they respond faster to the bottleneck’s requests.

A point made in the book is that this will increase waste, as smaller granularity implies more context switching (“set-ups”) – but as long as this waste doesn’t transform the process into a “bottleneck process”, it’s OK.

What I thought

Well, it does make clear (through all sorts of examples, stories, etc.) how important it is to align all your actions/outputs with the business goal… but I kind of hate this type of story, with the guy who sacrifices his family and personal time to fight against a (broken) system imposed by “the corporation”.

Trivia

“Garbage in, garbage out” appears in this book, which is from 1984 and about factory parts – after some research (Wikipedia), it seems this expression (just like “having a bug”) is another gift that software engineering gave to the world!