Scratching the Vibe – How I Learned to Stop Worrying and Love the Vibe

Thirty years ago, Eric S. Raymond was experimenting with a new way of building software.

The Linux way disrupted the software industry: massively collaborative, powered by the (mostly) unpaid contributions of coders all over the world. It multiplied possibilities and shocked everyone when it proved viable for one of the most complex software projects imaginable at the time: building an enterprise-grade operating system—free, both as in beer and as in freedom.

Raymond’s essay, The Cathedral and the Bazaar, distilled this experience into “the Linux method,” catalysing the open-source movement and a software development approach that is estimated to have created more than $8.8 trillion in value. That code powers much of today’s digital economy and paved the road for the spectacular rise of the software companies now dominating Wall Street.

Once again, we’re standing at the edge of a paradigm shift—this time powered by the transformer architecture and massive investment in large language models. And, as Raymond did three decades ago—though in my own smaller, humbler way—when I felt an unexpected technical itch earlier this year, I decided to experiment with these new tools and this new way of building software.

This is the story of obsidize and what I learned working on it. And it all started with a simple but annoying itch.

Scratching the Vibe – The Itch

The summer of 2025 was hot—your local weather stats will probably confirm it (thanks, global warming!)—but for me it was especially intense at work, with delivery after delivery, milestone after milestone.

So when I finally started my vacation, I felt the need to re-map where I was. I had to tidy up the things that had accumulated around me while I ran back and forth between work, kids, and more work.


It’s probably no surprise that today, when we “check our environment,” one of the first places we look is our digital space: notes, bookmarks, appointments, personal projects. For me, Obsidian is the system that lets me navigate all this in a way that gives me the illusion—sometimes even the reality—of being in control.

I’ve used Obsidian as my main knowledge management system for two years. Before that, I always used “something” (usually several things at once) to make sense of the endless stream of data from cyberspace: Pocket notes, Reader saves, Kindle highlights. Obsidian, with its vast plugin ecosystem, finally promised to consolidate all that into a usable, productive (?) ecosystem.

Around the same time, I started playing with large language models. At first, it was just testing: could ChatGPT give me a quick summary? But soon it became reflex: start with a dialog, get the lay of the land, then dig deeper into what looked interesting.

Over the last year, I’ve leaned more on Claude from Anthropic—mainly for Claude-Code, but also as a counterpoint to ChatGPT. I like having multiple sources before zooming in on a strategy. The result was an ever-growing pile of notes, conversations, and projects dangling outside my “second brain.” Time to clean that up.

ChatGPT was easy. Community plugins already handled syncing conversations with Obsidian. But I was surprised to find Claude unsupported. (To be clear: from the start I excluded browser extensions and other intrusive options that want wide access to my system.)

Digging further, I discovered that Claude had only recently introduced an export feature, driven by GDPR compliance. Anthropic’s help documentation suggests it was in development as of mid-2024. One small win for EU regulations.

At first, using ChatGPT, Claude, Grok—whatever—felt gimmicky. Like nothing of importance could really accumulate there. But reality disagreed. These chats became central to how I consume and create information. My decisions, my interests, my thinking all sedimented in the chat history of LLMs.

I began looking for a quick copy-paste solution to consolidate my notes, but the deeper I dug, the more important it became to make this data fully mine, integrated into the system I already use and control.

The good news: the data are available. The bad news: they come as JSON—hardly usable as-is, especially in Obsidian. At first, it looked like I’d have to wait for someone to write a plugin to tidy this corner of my digital self.

But then I thought: wouldn’t it be fair to ask Claude to help me take full ownership of my discussions with Claude?

That itch wouldn’t let me go. And so my vacation turned into a trip into the world of vibe coding: The Scratch.

Scratching the Vibe – The Scratch

To build obsidize I started with a simple—or at least straightforward—problem: convert data stored in JSON files to Markdown.

On paper, conversion between standard formats should be one of the strong points of LLMs. In practice, a direct request to Claude or ChatGPT exposed a familiar disadvantage: their wordy answers and tight context limits. The exported files from Claude were simply too big for the standard web interfaces.

And it wasn’t going to be a one-off operation. I’d keep using Claude, creating new conversations, extending old ones. I didn’t want to manually manage exports each time.

So my scope grew. What I actually needed was a tool that could:

  1. Convert JSON data to Markdown files.
  2. Update existing conversations/projects in Markdown with new content, without overwriting edits made directly in Obsidian.

Parsers are supposed to be another strength of LLMs—but the context window ruled out Claude and ChatGPT for this first stage.

That’s when I turned to Google’s Gemini 2.5 Pro. With its huge context window, Gemini had no problem accepting an entire conversation file and proposing a parser in Clojure/Babashka that ran successfully on the first go. Same for the projects file. This is Gemini’s real advantage: not something impossible for other GPTs, especially in agent mode with MCP, but far easier when you just need a quick, working answer.
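The actual script isn't reproduced here, but the core of such a converter is small. A minimal sketch in Babashka-flavoured Clojure, assuming a simplified export shape where each conversation carries a name and a list of chat messages with a sender and a text (the field names are illustrative; check them against your own export):

    ;; Minimal conversations.json -> Markdown sketch (field names are illustrative).
    (require '[cheshire.core :as json]
             '[clojure.string :as str])

    (defn conversation->markdown [{:keys [name chat_messages]}]
      (str "# " name "\n\n"
           (str/join "\n\n"
                     (map (fn [{:keys [sender text]}]
                            (str "**" sender ":** " text))
                          chat_messages))))

    (defn export->notes [export-file out-dir]
      (doseq [conv (json/parse-string (slurp export-file) true)]
        (spit (str out-dir "/" (:name conv) ".md")
              (conversation->markdown conv))))

    ;; (export->notes "conversations.json" "vault/claude")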

Now I had two tools/scripts doing essentially the same thing—converting JSON to Markdown—but no update logic.

So I decided to simplify first: solve the conversion problem cleanly before tackling updates.

I spun up a standard Clojure project structure (with deps-new), ported Gemini’s Babashka scripts into namespaces, and asked Claude-Code to turn it into a CLI application.

We discussed the tech stack. For JSON parsing, Claude suggested (and I accepted) Cheshire, a well-established, high-performance library built on Jackson.

Then came the real challenge: implementation choices.


Copilots tend to overdo it. They generate too many options, and every option looks equally viable. The risk is saying “yes” too often and ending up in a swamp of half-working approaches. If you already know the right solution, you can use AI just to speed up coding—but that underuses the tools. If, like me, you often don’t know the right approach, you need to push back and test constantly. Otherwise you get caught in endless loops of “ah, I see the error / the issue is still there.”

With Gemini’s proof-of-concept already working, I just needed to refactor and add a few key features:

  1. Detect input as either an archive (zip/dms) or folder.
  2. Tag and link imported notes for Obsidian.
  3. Most importantly, detect already-imported notes and update them without overwriting edits in Obsidian (one plausible shape for this logic is sketched below).
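For that third point, one plausible shape for the logic, sketched here for illustration (not necessarily how obsidize implements it): keep the source conversation's updated-at timestamp in each note's frontmatter and compare it with the export, so the decision needs only the note and the new export, with no separate state file.

    ;; Illustrative update check: store the source timestamp in the note's YAML
    ;; frontmatter (e.g. a `claude-updated-at:` line) and compare against the export.
    ;; The frontmatter key and format are assumptions, not obsidize's actual internals.
    (require '[clojure.string :as str])

    (defn frontmatter-value [note-text key]
      (->> (str/split-lines note-text)
           (some #(when (str/starts-with? % (str key ": "))
                    (subs % (count (str key ": ")))))))

    (defn needs-update? [note-text exported-updated-at]
      (let [last-imported (frontmatter-value note-text "claude-updated-at")]
        (or (nil? last-imported)
            ;; ISO-8601 timestamps compare correctly as plain strings
            (pos? (compare exported-updated-at last-imported)))))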

At this point, what started as “tidying up my notes” turned into a small personal quest. I wanted to scratch my own itch, but also to try this new way of building software with new tools. Luckily, my vacation plans were simple: kids, grandparents, clean air, fresh food, plenty of sun.

After about two hours of effective work—spread over three mornings—I had a working Clojure project. Not packaged, but functional. It had documentation, unit tests, and just enough features to be useful.


One incidental advantage of working in VS Code with Gemini and Claude-Code: context switching was painless. Every morning, I could skim the conversation history, remember where we left off, and pick up seamlessly. When the kids woke up, I’d close the lid and shift gears. The next morning, my virtual coworkers were still waiting, ready to continue as if nothing had happened.

With the basics so easy to conjure with digital assistants—and mornings still left in my vacation—I felt confident enough to push toward a real app. The linter and automated tests were already there, but I wanted more:

  • Add antq to ensure up-to-date libraries.
  • Add Trivy for vulnerability and license scanning.
  • Build native images for macOS and Linux using GraalVM.
  • Make it installable via Homebrew.

That last step—going from project to product—turned out to be the hardest.

To be continued: The Vibe

Scratching the Vibe – The Vibe

Where the code meets the real world

Building, packaging, getting a real application running on different platforms, on different machines: that’s where your code meets the real world. As a programmer, back in the day, this is where most of my debugging hours went.

At first, using code assistants to build obsidize made it feel like this time would be different.

I discovered Clojure in 2017. The company I worked for was going through an “agile transformation” and, to motivate everyone, handed out copies of The Phoenix Project by Gene Kim. Clojure only gets a few lines in the book—but it’s the language that saves the company, and I took notice. I was already transitioning into coordination/leadership roles, and “getting Clojure” was my way to stay connected to the implementation side. I read the books, built toy apps, but I never shipped a complete, production-ready Clojure application.

In my software developer days, I deployed Java applications to production—almost exclusively back end, single platform, with an Ops team close by.

This time, the Ops team would be Claude, ChatGPT, and Gemini.

The Team

I’ve used Claude-Code since the beta phase. I “got in” at the end of January 2025 and was immediately blown away.

Claude—and most AI tools in VS Code—got a flurry of updates in the first half of August. Every two or three days the Claude-Code extension added something new: agentic mode with custom personas, security-review for uncommitted changes or CI/CD integration, etc. Cool, powerful features that often made me think whatever I did two days earlier could’ve been done easier if I’d just waited.

Occasionally, changes introduced regressions or at least odd behavior. After one update, Claude started committing and pushing changes without my approval. The security-review feature, out of the box, expects the default branch to be set (git remote set-head) and otherwise fails with a cryptic message.

There’s a lot to like about Claude-Code, but for me the killer feature is Custom MCPs—especially how clojure-mcp lets it navigate, modify, and test Clojure applications. So for most of the code development I used Claude-Code—in VS Code or the CLI—jumping out when usage limits kicked in or I wanted a second opinion, which I usually got from a specialized ChatGPT, Clojure Mentor.

For build, packaging, and deployment, I leaned on Gemini and another specialized model, DevOps ChatGPT—also because, at the beginning of August, Claude felt less capable in those areas.

Aiming high

My target was what I ask of any tool: fast, secure, reliable, easy to install, update, and use.

For a Clojure application, fast means GraalVM—melting the Java fat and startup overhead into a native image on par with C/C++. Secure meant up-to-date dependencies and vulnerability scanning. Reliable meant better logging. And easy to install/update—on macOS—means Homebrew.

My AI team supported those choices (they always do!) and gave me starter code and guidance for building the native image and a Homebrew distribution. For Windows they suggested Chocolatey—new to me, but it sounded right.

Getting the CI/deployment pipeline right for macOS (ARM) was relatively easy. Developing on my M2 Air, I could test and fix as needed. I didn’t focus on other platforms at first; I wanted a quick “walking skeleton” with the main gates and stages, then I’d backfill platform support.

The GraalVM build was trickier. I fumbled with build and initialization flags but eventually got it to compile.

In a few steps I had a Homebrew tap ready to install. Hurray!

Reality check

And then reality kicked in. The native image wouldn’t run. The flags I used made the build pass but execution fail. Cue a round of whack-a-mole, primarily with Gemini.

One interesting Gemini-in-VS-Code detail: at the beginning of August, the extension’s interaction model was chat; after an update, it introduced agentic mode. Proof of how fast assistants evolve—and a chance to feel both the pros and cons of each mode.

Before the mid-August update, chats with Gemini 2.5 Pro were fruitful and generally more accurate than other models—just a lot slower. I didn’t measure it, but it felt really slow. Still, solid answers were worth the wait.

After the August 15 release, chat began to… silently fail: after a long reflection, no answer. So I switched to the agent.

Agentic Gemini looked promising: deep VS Code integration (you see diffs as they happen, which Claude-Code later matched), MCP servers, and a similar configuration to Claude—so wiring up clojure-mcp was easy. However, it just didn’t do Clojure or Babashka well. It got stuck in parentheses hell at almost every step. Sessions ended with tragic messages like, “I’m mortified by my repeated failures…” Eventually I felt bad for it. In their (troubling?) drive to make LLMs sound human, Google had captured the awkwardness of incompetence a little too well.

I started to panic. Everything had moved so fast, and I had so many things started—so many “walking skeletons”—with only a few vacation days left. I realized “my AI team” wasn’t going to get me over the finish line alone.

Vacations are too short

The leisurely vacation development phase had to end. I began waking up earlier for uninterrupted work before 8 a.m., skipping countryside trips and extended family visits to squeeze in a few more hours each day.

At that point, shipping was about testing, running, iterating—not generating more lines of code.

The failing native image led me to add a post-packaging end-to-end validation stage, to catch these issues earlier.

After a few tries, the culprit seemed to be the Cheshire JSON library. With no better theory, I switched to the standard clojure.data.json. Unit, integration, and E2E tests made the change a matter of minutes—but it didn’t fully resolve the native-image issues.
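For the curious, the swap itself is nearly mechanical, since the two libraries expose similar entry points for this use case. Roughly:

    ;; Before: Cheshire (wraps Jackson)
    (require '[cheshire.core :as cheshire])
    (cheshire/parse-string (slurp "conversations.json") true)      ; true => keyword keys

    ;; After: clojure.data.json (pure Clojure, nothing from Jackson to coax through native-image)
    (require '[clojure.data.json :as json])
    (json/read-str (slurp "conversations.json") :key-fn keyword)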

All the while I looped through ChatGPT, Claude, Gemini, and old-school Googling: find plausible theories, try fixes, check whether anything improved. This isn’t unique to LLMs—developers have done this for decades to ship in ecosystems they don’t fully control. If it works, it works.

Finally, I got my first successful Homebrew install and full import on macOS ARM. It seemed feasible again to finish before vacation ended.

Then I added Windows and Linux packaging—and everything failed again. I told myself it was a platform access issue, bought a Parallels Desktop license… and then remembered I’m on ARM trying to target x86. Not going to work.

LLMs gave me speed, but the bottleneck was the real world: deployment time, runtime, access to hardware. Without x86 macOS, Linux, or Windows boxes, I cut scope: native image Homebrew package for macOS ARM (my main platform), and an executable image via jlink for other platforms. Even that was painful on Windows, where builds kept failing and GPTs kept getting confused by basic shell automation. Final plan: native image for macOS ARM (done), jlink package (JRE + JAR) for macOS x64 and Linux arm64, and just the JAR for Windows. Time to stop and get back to reality.

Release It!

Stopping is harder than it sounds in the world of code assistants. When tools get more powerful every day—and solve problems that look as hard as the ones left—it always feels like the happy ending is one prompt away. But our complexity isn’t LLM complexity, and vacations are short.

So I stopped. I polished the code, ran multiple reviews, improved the docs. The functionality was good enough for me. I’d done what I set out to do at the beginning of those two weeks. And somewhere along the way, I learned to stop worrying and love to vibe.

Learning to Stop Worrying and Love the Vibe


This post is a reflection based on building obsidize; I wrote more about it in:

  1. Scratching the Vibe – How I Learned to Stop Worrying and Love the Vibe
  2. Scratching the Vibe – The Itch
  3. Scratching the Vibe – The Scratch
  4. Scratching the Vibe – The Vibe

I’ve been fascinated by computers—and what they can be taught to do—since my childhood in the ’80s. It always seemed like with the right secret incantation you could open portals to new worlds, new possibilities; you could gain superpowers if only you knew how to conjure the magic.

I wanted this power, and I admired those who had it. It seemed a skill difficult to master, a skill arriving from the future, a skill the grown-ups around me struggled with—which made it all the more desirable.


My first programming experiences came in the early 1990s on clones of the Z80/ZX Spectrum. I used BASIC to draw and animate simple geometric shapes, or later to write (well, copy) games published as text listings in the back of PC Report magazine.

My high school diploma project (Assistant Programming Analyst) was a FoxPro clone of Norton Commander. My engineering diploma project was a C/C++ disk defragmentation application.

FoxPro, disk defragmentation, BASIC (not to mention Pascal, MFC, CORBA, and others): I watched each of these fade into obsolescence during my career. Yet in a way, the essence of programming didn’t change. That first code I wrote 35 years ago—reading input, initializing state, controlling flow, calling library functions—that was programming. That was software. Until 2022.


By then, my understanding of what makes software successful had already shifted. While writing code remained the only way to “teach your computer to do something,” the viability of that “something”—its relevance, cost, and chance of survival—was less about coding and more about capturing real business needs. It was about growing solutions that could evolve with a changing domain, team, or organization (or better yet, growing the right team to build the right solution). I’ve seen too many thousands of lines of clean, elegant code wasted by bad requirements, misaligned business goals, dysfunctional organizations, clueless management. In that world, being “code complete” was just one small step.

Software engineering is about solving real problems with software, in the most cost-effective way possible. And in the projects I worked, the main problems were rarely the code itself. I found myself focusing more on process, specification, and communication.

And then “teaching your computer to do something” changed.


Andrej Karpathy coined the term “vibe coding” to describe letting large language models drive implementation. He called it “fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists.” In this style, the “assistant” is no longer just an assistant: it builds functionality, proposes architectures, suggests next features. The human supplies guidance, feedback, and high-level direction.

On obsidize, I set out to lean on coding assistants as far as they could go. And they can go surprisingly far. By the end, although I spent many hours testing and debugging, I can’t honestly say I’ve read every line of code in the repository.

Of course, I ran black-box end-to-end tests—the “acceptance testing.” I read and reworked the documentation the assistants generated. I zoomed in on specific namespaces when needed, wrote some code, requested features, made architectural decisions. I decided what to implement and when to stop. But I didn’t read every line. I didn’t write (or review) every test.

Does that make me just a responsible babysitter for the code assistants? Was I just vibing, or was I doing proper “AI-assisted engineering”?

In human teams, you’re not expected to read all the code either—not in large projects. You trust your teammates, and that trust is reinforced with conventions, reviews, tests, demos, documentation. Code assistants are new, and the level of implicit trust they deserve is still being defined. But similar checks are emerging in the LLM world: multiple personas to review code from different angles (something Claude-Code already implements), security reviews, chained tasks, and structured implementation phases.

Using code assistants means you’re not only writing code—but you’re not a pure product owner either. Oversight is tighter. You manage direction, but you must also stay close to the code, watch the repo carefully (ready to revert an agent’s mess), validate changes. And unlike with humans, you must be willing to discard huge amounts of work—hundreds of lines, entire modules. Code is cheap now. Agents don’t take ownership of their mistakes. You can’t let sloppy code slip by hoping someone else will fix it later. It’s better to throw it away, redefine the task, and go again. The agent won’t be offended.

That raises another question: if code is cheap, if agents write most of it, does the choice of programming language matter? I think it does. You still need to read it, debug it, play with it when the AI stalls in its stochastic echo chamber. Languages that are well-designed, consistent, and concise may gain an edge. Clojure, for example, is known for brevity (good for context windows and tokens), functional purity, and explicit state handling. And with the Clojure-MCP server, Claude can interact directly with a live REPL—iterating like an experienced developer.


Are coding assistants silver bullets for software development?

It depends on the monster you’re trying to slay.

If you need to churn code faster, or tackle technologies outside your core competence, then yes: they help. I had never finished a production-ready personal project before. I always got stuck on deployment and packaging, lost interest when forced to learn some necessary but boring technology. With AI teammates, I got further than ever.

For companies that see software as a manufacturing task—cranking out features and polishing wheels—assistants will boost productivity. More stuff will be built.

For companies in novel domains, where constraints come from the business environment, assistants can accelerate iteration, get you to working prototypes, let you test hypotheses quickly. Not a silver bullet, but a powerful tool.


In the late ’90s, Eric S. Raymond experimented with a then-new paradigm: the Linux model, open source, massively collaborative, free. That movement powered the cloud and SaaS revolutions. At the time, critics worried free software would kill developer jobs, handing value to corporations. And maybe it did: those $8.8 trillion in open-source software didn’t land in contributors’ pockets, but they did lower corporate costs. At the same time, free software spawned countless small companies that didn’t need to reinvent the wheel.

Did open source create opportunities for software engineers, or take them away? In the early 2000s, you were still expected to implement your own circular buffers, even your own network stack. Today, most software development is an integration exercise of open-source standards. Writing from scratch is a “code smell.” The building blocks changed.

Now the granularity shifts again. Coding assistants—trained on hundreds of millions of open-source lines—are the new layer of abstraction. They integrate the standard industry tools for you, ready to use.

Fed by free and open-source software, we are approaching an uncanny version of Richard Stallman’s vision: software nearly free as in beer, but not free as in freedom. Everyone can create software. Few get paid for it. The agribusiness of code, controlled by a handful of AI companies.

Some see this as the end of the software engineer. I don’t. Those critics have always underestimated what engineering really is. Software engineering has always been about more than code. And maybe, now that code itself is less central, the other parts of the craft—problem framing, communication, iteration, validation—will finally get their due.


Obsidize your Claude conversations

This summer I built obsidize: a command line tool that imports Claude conversations and projects as notes into Obsidian.

Obsidize is a small summer vacation project that allowed me to experiment with new ways of building software and has a few nice features:

  • 🔄 Incremental Updates: Detection of new and updated content – only processes what’s changed
  • 🗂️ Structured Output: Creates an organized, “Obsidian friendly” folder structure for your conversations and projects
  • 🏷️ Custom Tagging: Allows adding custom Obsidian tags and links to imported content via the command line
  • 🔄 Sync-Safe: doesn’t use any local/external state files so the incremental updates can be run from different devices
  • 🔍 Dry Run Mode: Allows previewing changes before applying them

If you are looking for a way to back up your conversations from Claude to Obsidian, you should take a look.

It can be installed using Homebrew on macOS and Linux, or using its “universal jar” on any platform with a JRE installed (Java 21+). Check the release page.

I wrote more about how I got to do this project here.

A happy Parisian mime is getting obsidized

A day at Voxxed Days Luxembourg 2024

The big boys have their Devoxxes and KubeCons; Luxembourg has the Voxxed Days Luxembourg conference which, as we like to think about a lot of events here in Luxembourg, is for sure smaller but maybe also a bit more refined… maybe.

Thursday, the 21st of June 2024, was the first day of Voxxed Days Luxembourg 2024 – the main local software conference, organized by the Java User Group of Luxembourg – and what follows are my notes from this day.

1.

Showing the importance of the space industry for the Luxembourg software community (or just confirming that space tech is cool), the conference was kick-started by Pierre Henriquet’s keynote, Les robots de l’espace.

… It’s true that – to bring the audience back down to earth – software is mentioned twice as the thing that killed a robot in space: first on 13 November 1982, when the Viking 1 mission was ended by a faulty software update, and then in 1999, when a mismatch between the implementations provided by different NASA suppliers killed the Mars Climate Orbiter.

Still, the talk ends on a positive note for the viability of software in space: last week the Voyager 1 team managed to update the probe’s software to recover the data from all science instruments: an update deployed 24 billion kilometers away on hardware that has been traveling through space for 47 years.

2.

I stay in the main room, where Henri Gomez presents “SRE, Mythes vs Réalités”, a talk that (naturally) refers back heavily to Google’s “SRE book” – the book that kickstarted the entire label.

For me, the talk confirms again that for a lot of companies SRE is either the old Ops with some new language, or a label to put on everything they do.

My personal view on SRE is that (similar to the security “shift left” that brought us DevSecOps) it is about mindset, tools and techniques and not about a job position… unfortunately (like for DevSecOps), organisations find it easier to adopt the label as a role than to actually understand the mindset shift, new tools, processes and techniques required.

First mention of the day of the Accelerate book, the DORA Metrics and the 4 Software Delivery Performance metrics.

3.

Next, I went to a smaller conference room to see “How We Gained Observability Into Our CI/CD Pipeline”, a talk given by Dotan Horovits.

This was the second mention of the DORA metrics:

One of the main ideas of the talk was that CI/CD pipelines are part of the production environment and should be monitored, traced, and analysed like production systems for issues and performance. This should not be a new idea: if what you produce is software, then your production pipeline is crucial. But the reality is that many companies still treat the CI/CD pipeline as something that is mainly the Dev team’s responsibility, with the freedoms and risks that this entails.

Dotan goes into some detail on how they instrumented their pipelines:

  1. Collect data on the CI/CD pipeline run into environment variables.
    • Create a summary step in the pipeline that gathers all the info from the run and stores it in Elasticsearch.
  2. Visualise with Kibana/OpenSearch dashboards:
    • define your measuring needs – what do you want to find/track?
    • where did the failure happen (branch/machine)?
    • what is the duration of each step?
    • are there steps that take more time than the baseline?
  3. Monitor the actual infrastructure (computers, CPUs, etc.) used to run your CI/CD pipelines, using Telegraf and Prometheus. For more info check the guide.
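To make the “summary step” concrete, this is roughly the kind of document such a step could assemble from the pipeline’s environment variables and push to Elasticsearch (the field names and the GitLab-style variables are my illustration, not from the talk):

    ;; Hypothetical pipeline-run summary document (illustrative field names).
    {:pipeline    "build-and-deploy"
     :run-id      (System/getenv "CI_PIPELINE_ID")
     :branch      (System/getenv "CI_COMMIT_BRANCH")
     :runner      (System/getenv "CI_RUNNER_DESCRIPTION")
     :status      "failed"                                  ; success | failed
     :failed-step "integration-tests"
     :steps       [{:name "compile"           :duration-s 41}
                   {:name "unit-tests"        :duration-s 128}
                   {:name "integration-tests" :duration-s 310}]}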

The talk even includes a short but effective introduction to Distributed Tracing and OpenTelemetry, which are used to increase visibility at every layer. (OpenTelemetry was another frequent theme in the talks at the conference – and Dotan is well positioned to talk about it, as he runs the OpenObservability Talks podcast.)

4.

I then checked out the Démystifions le fonctionnement interne de Kubernetes talk. It was actually a demo, done by Denis Germain, of how you can deploy a Kubernetes cluster one control plane brick at a time.

As somebody who was thrown into using and working with Kubernetes without having time to thoroughly understand its internals, I approached this session as a student and took the opportunity to revisit basic K8s concepts.

A note on the general structure of a K8s resource:

    apiVersion: <group>/v1          # e.g. apps/v1
    kind: Deployment                # or Service / Ingress / ...
    metadata:
      name: <name>
      labels:
        key: value
    spec:
      replicas: x

A cool new (for me) tool to manipulate digital certificates: CFSSL. Also, the fact that Traefik was the preferred ingress controller.

The Kubernetes Control Plane components deployed during this short talk:

  • Kube API Server: the entry point into the infrastructure
  • Kube Controller Manager: the binary that runs control loops on all the APIs that exist in K8s to manage the difference between current state and target state.
  • Kube Scheduler: the planner/workflow manager: the one that actually creates the jobs required by the Kube Controller Manager to align the current state with the target state
  • Kubelet: the agent on the node: it’s the one that actually executes the tasks planned by the Kube Scheduler. One of the first things it does is register itself (and the node) with the Kube API. containerd is the container engine that actually starts the containers.
  • CNI (Container Network Interface): the element that manages the communication between containers

5.

Next: Réagir à temps aux menaces dans vos clusters Kubernetes avec Falco et son écosystème with Rachid Zarouali and Thomas Labarussias.


What I found:

  • Falco is made by Sysdig – and Sysdig was founded by Loris Degioanni. My first major professional achievement was moving a network stack onto WinPcap (re-implementing a subset of the ARP, IP, and UDP protocols): using WinPcap both to “sniff” Ethernet packets and to transmit them with high throughput and precise timing (NDIS drivers… good times!). WinPcap was the base of Ethereal, which then became Wireshark, and that lineage now continues to live in Falco… I never expected to find echoes of my career beginnings here.
  • Falco is (now?) eBPF-based, supporting older kernel versions (even pre-Kubernetes), which makes sense given a lineage that can be traced back to libpcap.
  • Falco is a newly (2024) CNCF Graduated Project.
  • Falco plugins are signed with cosign.
  • falcosidekick – a plugin that adds additional functionality – created by Thomas Labarussias.

Integrating the whole Falco ecosystem you get: detection -> notification -> falcosidekick (actions?) -> reaction (Falco Talon)

  • Falco Talon is a response engine (you could implement an alternative response engine with Argo CD but Falco Talon has the potential of better integration): github.com/falco-talon/falco-talon

Side Note: there is a cool K8s UI tool that was used by Thomas Labarussias during the demo: https://k9scli.io

6.

Next: Becoming a cloud-native physician: Using metrics and traces to diagnose our cloud-native applications (Grace Jansen – IBM)

The general spirit of the talk was that in order to be able to correctly diagnose the behavior of your distributed application(s) you always need: context/correlation + measurements.

The measurements are provided using:

  • instrumentation – instrument systems and apps to collect relevant data:
    • metrics – the talk emphasised the use of MicroProfile (microprofile.io): an open-source community specification for Enterprise Java – “Open cloud-native Java APIs”
    • traces,
    • logs
    • (+ end-user monitoring + profiling)

Storing the measurements is important: send the data to a separate external system that can store and analyse it – Jaeger, Zipkin

Visualize/analyze: provide visualisations and insights into systems as a whole (Grafana?)

Back to Distributed Tracing and OpenTelemetry:

A trace contains multiple spans. A span contains a span_id, a name, time-related data, log messages, and metadata giving information about what occurs during a transaction, plus the context: an immutable object carried in the span data that identifies the unique request each span is part of (trace_id, parent_id).
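Written down as data rather than prose, the structure described above looks roughly like this (a sketch of the fields, not the actual OpenTelemetry API):

    ;; A trace is a set of spans sharing a trace-id; parent-id links them into a tree.
    {:trace-id "4bf92f3577b34da6a3ce929d0e0e4736"
     :spans [{:span-id    "00f067aa0ba902b7"
              :parent-id  nil                      ; nil => root span
              :name       "GET /orders"
              :start      "2024-06-20T10:15:02.000Z"
              :end        "2024-06-20T10:15:02.420Z"
              :attributes {:http-status 200}}]}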

The talk ended with a demo of how all these elements come together in OpenLiberty (OpenLiberty is the new, improved, stripped-down WebSphere).

Side Note: an interesting online demo/IDE tool – Theia IDE.

7.

Next: Au secours ! Mon manager me demande des KPIs ! by Geoffrey Graveaud

The third talk mentioning the DORA metrics, the second that mentions the Accelerate book, and the first (and last) that mentions Team Topologies.


The talk adopted a “Star Wars-inspired” story format (which resonates, I guess, with all the Unicorn Project, Phoenix Project, etc. readers interested in agile/lean processes). I usually find this style annoying, but this time it was well done and Geoffrey’s acting skills are really impressive.

The problem is classical: the organisation (the top) asks for KPIs to measure software development performance that are artificial when projected onto the real life of the team (even if, in this case, the top is enlightened enough to be aware of the DORA metrics).

The solution is new: instead of going down the usual “the top/business doesn’t understand, so explain to them or ignore them” route, what Geoffrey offers is: look into your real process, measure the elements that are available and part of your reality, and correlate them with the requested KPIs (or, in any case, this is my interpretation) – even if they will not fully answer the request.

Side Note: the sandwich management method

Conclusion

From the talks I attended, a general pattern of renewed focus on measurement is emerging: for processes (DORA) and for software (OpenTelemetry).

I’d like to think this focus is driven by the fact that Agile’s principle of Inspect and Adapt is becoming second nature in the industry (but it is hard to defend this theory in practice, knowing how few true Agile organisations are around…)

Of course, for me, Voxxed Days is more than talks and technology – it’s about the people, about meeting familiar faces – people you worked with or just crossed at a meet-up in the city, people who share the same passion for technology, the software community, and the society we are building through our work.

In summary: a very good first day – I learned useful things and I’m happy to see that the DORA metrics, the Accelerate book, and even really new (2019!) ideas like Team Topologies are finally making inroads into the small corner of the global software industry that Luxembourg represents!

A Quick Look at EBIOS Risk Manager

What is EBIOS RM?

EBIOS Risk Manager (EBIOS RM) is the method published by the French National Cybersecurity Agency (ANSSI – “Agence nationale de la sécurité des systèmes d’information”) for assessing and managing digital risks (EBIOS: “Expression des Besoins et Identification des Objectifs de Sécurité”, which can be translated as “Description of Requirements and Identification of Security Objectives”). It is developed and promoted with the support of Club EBIOS, a French non-profit organization that focuses on risk management, drives the evolution of the method, and proposes on its website a number of helpful resources for implementing it – some of them in English.

EBIOS RM defines a set of tools that can be adapted, selected, and used depending on the objective of the project, and it is compatible with the reference standards in effect, in terms of risk management (ISO 31000:2018) as well as cybersecurity (ISO/IEC 27000).

Why is it important?

Why use a formal method for your (cyber)security risk analysis and not just slap the usual cybersecurity technical solutions (LB + WAF + …) on your service?

On a (semi)philosophical note – because the first step to improvement is to start from a known best practice and then define and evolve your own specific process.

Beyond the (semi)philosophical reasons there are the very concrete regulations and certifications you may need to implement right now, and the knowledge that in the future the CRA regulation will require cybersecurity risk analysis (and proof of it) for all digital products and services offered on the EU market.

OK, so it is important. Let’s go to the next step:

How is it used?

First a few concepts

In general, the goal of any risk management/cybersecurity framework is to guide the organization’s decisions and actions in order to best defend/prepare itself.

While risk/failure analysis is something we all do natively, any formal practice needs to start by defining the base concepts: risk, severity, likelihood, etc.

Risk and its sources:

ISC2 – CISSP provides these definitions:

  • Risk is the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset, and the severity of the damage that could result.
  • A threat is a potential occurrence that may cause an undesirable or unwanted outcome for an organization or for an asset.
  • An asset is anything used in a business process or task.

One of the first formal methods to deal with risk was FMEA/FMECA: Failure Modes, Effects, and Criticality Analysis, which started to be used/defined in the 1940s (1950s?) in the US (see Wikipedia). This is one of the first places where broad severity (not relevant/ very minor/ minor/ critical/ catastrophic) and likelihood (extremely unlikely/ remote/ occasional/ reasonably possible/ frequent) categories were defined.

ANSSI defines 4 levels of severity in EBIOS RM:

G4 – CRITICAL – Incapacity for the company to ensure all or a portion of its activity, with possible serious impacts on the safety of persons and assets. The company will most likely not overcome the situation (its survival is threatened).

G3 – SERIOUS – High degradation in the performance of the activity, with possible significant impacts on the safety of persons and assets. The company will overcome the situation with serious difficulties (operation in a highly degraded mode).

G2 – SIGNIFICANT – Degradation in the performance of the activity with no impact on the safety of persons and assets. The company will overcome the situation despite a few difficulties (operation in degraded mode).

G1 – MINOR – No impact on operations or the performance of the activity or on the safety of persons and assets. The company will overcome the situation without too many difficulties (margins will be consumed).

ANSSI defines 4 levels of likelihood:

V4 – Nearly certain – The risk origin will certainly reach its target objective by one of the considered methods of attack. The likelihood of the scenario is very high.

V3 – Very likely – The risk origin will probably reach its target objective by one of the considered methods of attack. The likelihood of the scenario is high.

V2 – Likely – The risk origin could reach its target objective by one of the considered methods of attack. The likelihood of the scenario is significant.

V1 – Rather unlikely – The risk origin has little chance of reaching its objective by one of the considered methods of attack. The likelihood of the scenario is low.

ANSSI defines some additional concepts:

Risk Origins (RO – similar to Threat Agent/Actor in ISC2 terminology): something that could potentially exploit one or more vulnerabilities.
Feared Events (FE – equivalent to Threats in ISC2 terminology).
Target Objectives (TO): the end results sought by a Threat Agent/Actor.

A side note: quantitative analysis

ISC2 – CISSP recommends using quantitative analysis for risk qualification:


Getting there requires qualifying your asset value, or at least how much a risk realisation would cost you (the Single Loss Expectancy), and then computing an annualised loss, so that you can compare rare but costly events with smaller but more frequent ones.
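For reference, the standard CISSP formulas behind that computation, with made-up numbers:

    ;; SLE (Single Loss Expectancy)     = Asset Value * Exposure Factor
    ;; ALE (Annualised Loss Expectancy) = SLE * ARO (Annualised Rate of Occurrence)
    (defn sle [asset-value exposure-factor] (* asset-value exposure-factor))
    (defn ale [single-loss-expectancy aro] (* single-loss-expectancy aro))

    ;; Made-up example: a 200k EUR asset, 25% damaged per incident,
    ;; one incident expected every 4 years (ARO = 0.25):
    (ale (sle 200000 0.25) 0.25)   ;; => 12500.0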

I think the two methods are compatible, as nothing stops you from afterwards defining thresholds that map the value numbers to a severity class (possibly depending not only on the ALE but also on your budget/risk appetite/risk aversion).

A process of discovery

The risk management methods are at their core similar, and all contain a number of steps that help establish: what it is that you need to protect, what could happen to it, and what could be done to make sure the effects of whatever happens are managed (or at least accepted).

So the steps are in general (with some variance on the order and emphasis):

  • identify assets (data, processes, physical)
  • identify vulnerabilities associated to your assets
  • identify the threats that exist in your operative environment (taking into account your security baseline)
  • identify the risks and prioritise actions related to them based on their likelihood and severity
    …rinse and repeat.

To help with this process, EBIOS RM defines 5 workshops, each one with expected inputs, outputs, and participants:

Workshop 1: scope and security baseline – define the scope of the study, the business and supporting assets, the feared events and their severity, and the security baseline.

Workshop 2: risk origins – identify the risk origins and their target objectives, and select the most relevant pairs.

Workshop 3:


Strategic scenario:

  • a potential attack considered with the system as a black box: how the attack would happen “from the exterior of the system”

Workshop 4:


Operational/Operative Scenarios – identify and describe potential attack scenarios corresponding to the strategic ones, possibly using tools like STRIDE, OWASP, MITRE ATT&CK, OCTAVE, Trike, etc.

Workshop 5:

Risk Treatment Strategy Options (ISO27005/27001):

  • Avoid (results in a residual risk = 0) – change the context that gives rise to the risk
  • Modify (results in a residual risk > 0): add/remove or change security measures in order to decrease/modify the risk (likelihood and/or severity)
  • Share or Transfer (results in a residual risk that can be zero or greater): involve an external partner/entity (e.g. insurance)
  • Accept (the residual risk stays the same as the original risk)

In Summary:

In Conclusion:

EBIOS RM is a useful tool in cybersecurity management, aligned with the main cybersecurity tools and frameworks.

There are also enough supporting open-access materials (see ANSSI and Club EBIOS) that help conduct the process and produce the required artefacts at each step: templates, guides, etc. – which makes it a prime candidate for adoption in organisations without an already established cybersecurity risk management practice.

The DevOps adventure – my review of “The Unicorn Project”

#devops #agile #lean

Subtitle: A Novel about Developers, Digital Disruption, and Thriving in the Age of Data
Author: Gene Kim
Bought: 30 July 2020


Another book from Gene Kim, The Unicorn Project continues to promote DevOps and agile methods as the solutions to the world’s IT (and corporate) ills, similar to the other books by the author. Besides The Phoenix Project, Gene Kim is known (at least) for The DevOps Handbook and Accelerate (written together with Nicole Forsgren and Jez Humble).

The book re-uses the novel format introduced by The Phoenix Project to trace the archetypal trajectory of a DevOps transformation—the enterprise being the fictional “Parts Unlimited” company (another reference to The Goal?)—a company that displays all the evils of a big “legacy company” in which words like ITIL and TOGAF have replaced the beating entrepreneurial heart and poisoned the spirit of the good IT people (/insert sad violin/).

The book is split into three parts:

  • Part 1 (Chapter 1 – Chapter 7) presents the state of crisis in which “Parts Unlimited” finds itself: losing market share, an IT culture dominated by bureaucracy in which the actual IT specialists (Dev, Ops, QAs) have become mindless zombies accepting the absurdity of corporate decay: infinite request chains, roadblocks, pervasive blame games, etc. In these dark times, there is a light of hope: a few engineers that have refused to sell out their dreams and that are organizing “the resistance” to defend IT ideals.
  • Part 2 (Chapter 8 – Chapter 13) shows the resistance becoming visible and being embraced by (some) management; the new hope spreads.
  • Part 3 (Chapter 14 – Chapter 19) depicts the final battle and the happy aftermath.

The DNA of the change (and of the perfect organization) consists of “The Five Ideals”:

  1. Locality and Simplicity
  2. Focus, Flow, and Joy
  3. Improvement of Daily Work
  4. Psychological Safety
  5. Customer Focus

What shocked me is the total dedication that the Devs, Ops, and QAs have to this failed company: Maxine and her co-conspirators spend almost 100% of their (diminished) waking hours working to transform Parts Unlimited, to take responsibility for production deployments, to set up (in rogue mode) servers and deployment processes alike. Maxine’s family is presented as a passive background actor: something that is happening around her while she toils at the corporate laptop to improve the corporation’s IT.

The story brushes off, without much consideration, the security and compliance implications of direct production deployment by the DevOps team. It minimizes the human and logistic cost of operating and supporting high availability services: in The Unicorn Project, the DevOps team is happy to take responsibility for maintaining and supporting the services themselves—no worry for the on-call requirements, for wearing a pager (without any compensation).

In the end, the “rogue” aspect of the corporate transformation and especially its dependency on the employees’ readiness for seemingly endless sacrifice of self and of family time is the most puzzling and self-defeating aspect of the (assumed) objective of promoting the DevOps transformation path.

On one side, it makes the whole story less relevant to anybody looking for ways to start on the DevOps transformation path: any change is viable if you have access to the best people and if these people are willing to provide you endless time and effort… but this is never the case. In my experience, in a company like “Parts Unlimited,” you’ll soon discover that the best people have already left; those that stay around are mainly there because they have concluded that the way things are is… acceptable for some reason: job security, predictability, etc.

On the other side (and in my view, this is the worst part of The Unicorn Project): this is not needed. The “DevOps transformation” is possible by setting clear intermediary steps—steps that have an immediate advantage for the people involved and the company (e.g., fewer incidents in production, thus less personal impact).

By clear communication of the thinking behind the decisions, transparent tracking of the results to grow confidence, and by showing the team a sustainable path towards balancing private life and professional excellence, you don’t need a war—you need diplomacy, leadership, and knowing what you are doing.

Exploring the origins of Lean – my review of “The Goal”

#lean #lean-through-manufacture #business-novel

Why I read it

I found references to it in Lean Enterprise (Jez Humble), and it was also mentioned by Jez in a talk I saw on YouTube (maybe this one).

What it is about

The road of discovery traveled by the manager of an embattled factory in the US to finally realise the [[lean]] enterprise principles!

Similar to [[The Phoenix Project]] (which I guess was inspired by it), a hard-working family man is forced to discover the essence of lean in order to save his factory – in the process re-connecting with his team and family… (no kidding!)

What I took from it

Start with identifying the bottlenecks in your delivery process.

The bottlenecks are those activities/resources that are necessary for the end product but have a throughput lower than the expected throughput of the end product. Meaning: if you need to deliver 5 things per day, and you have some part of your process that can only do 3 sub-things per day – that’s a bottleneck. Everything else that has the capacity to deliver more than your expected output is a non-bottleneck (duh!)…

The goal would be to have the throughput of every activity equal to the expected delivery throughput – not higher, not lower. Which requires: increasing the throughput through the bottlenecks and, possibly, decreasing the throughput through the non-bottlenecks to limit the “work in progress/inventory”.

After each system change/capacity re-balancing, the process needs to be re-evaluated in order to identify potential new bottlenecks, and then restart. A lot of emphasis is put on the fact that accelerating the steps/processes that are not bottlenecks, or even allowing them to go at their “natural speed” and thus faster than the bottlenecks, is wasteful, as it doesn’t contribute to increasing the market throughput. So in a way, the insight is that you need to block work that is not aligned with the delivery… and for me an immediate, significant realisation was that Kanban is, most of all, a system for blocking the creation of wasteful work.

One way to do this re-balancing would be to flag the work that involves the bottlenecks in a way that gives it higher priority compared with the rest.

Another way is to lower the granularity of the work being done on the ‘non bottleneck’ processes so that they respond faster to the bottleneck requests.

A point made in the book is that this will increase waste, as smaller granularity implies more context switching (“set-ups”) – but as long as this waste still doesn’t transform the process into a “bottleneck process”, it’s OK.
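The arithmetic is simple enough to sketch (a toy illustration, not an example from the book):

    ;; System throughput is capped by the slowest step; speeding up the others
    ;; only grows inventory in front of the bottleneck.
    (def steps {:cutting 8 :heat-treatment 3 :assembly 6})   ; parts per day

    (defn bottleneck [steps] (key (apply min-key val steps)))
    (defn system-throughput [steps] (apply min (vals steps)))

    (bottleneck steps)          ;; => :heat-treatment
    (system-throughput steps)   ;; => 3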

What I thought

Well, it does make clear (through all sorts of examples, stories, etc.) how important it is to align all your actions/outputs with the business goal… but I kind of hate this type of story, with the guy who sacrifices his family and personal time to fight against a (broken) system imposed by “the corporation”.

Trivia

“garbage in, garbage out”: this expression appears in this book, which is from 1984 and about factory parts — after some research (Wikipedia), it seems this expression (just like “having a bug”) is another gift that software engineering gave to the world!