A Quick Look at EBIOS Risk Manager

What is EBIOS RM?

EBIOS Risk Manager (EBIOS RM) is the method published by the French National Cybersecurity Agency (ANSSI – “Agence nationale de la sécurité des systèmes d’information”) for assessing and managing digital risks. EBIOS stands for “Expression des Besoins et Identification des Objectifs de Sécurité”, which can be translated as “Description of Requirements and Identification of Security Objectives”. The method is developed and promoted with the support of Club EBIOS, a French non-profit organization focused on risk management that drives the evolution of the method and offers on its website a number of helpful resources for implementing it – some of them in English.

EBIOS RM defines a set of tools that can be adapted, selected and used depending on the objective of the project, and is compatible with the reference standards in effect, both in terms of risk management (ISO 31000:2018) and cybersecurity (ISO/IEC 27000).

Why is it important?

Why use a formal method for your (cyber)security risk analysis instead of just slapping the usual technical cybersecurity solutions (load balancer + WAF + …) onto your service?

On a (semi)philosophical note – because the first step to improvement is to start from a known best practice and then define and evolve your own specific process.

Beyond the (semi)philosophical reasons, there are the very concrete regulations and certifications you may need to implement right now, and the knowledge that in the future the CRA (Cyber Resilience Act) regulation will require cybersecurity risk analysis (and proof thereof) for all digital products and services offered on the EU market.

OK, so it is important; let’s go to the next step:

How is it used?

First, a few concepts

In general, the target of any risk management/cybersecurity framework is to guide the organization’s decisions and actions in order to best defend and prepare itself.

While risk/failure analysis is something we all do natively, any formal practice needs to start by defining the base concepts: risk, severity, likelihood, etc.

Risk and its sources:

ISC2 – CISSP provides these definitions:

  • Risk is the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset, and the severity of damage that could result.
  • A threat is a potential occurrence that may cause an undesirable or unwanted outcome for an organization or for an asset.
  • An asset is anything used in a business process or task.

One of the first formal methods for dealing with risk was FMEA (Failure Mode and Effects Analysis), which started to be used and defined in the late 1940s in the US (see Wikipedia). It is one of the first places where broad severity categories (not relevant/very minor/minor/critical/catastrophic) and likelihood categories (extremely unlikely/remote/occasional/reasonably possible/frequent) were defined.

ANSSI defines 4 levels of severity in EBIOS RM:

G4 – CRITICAL – Incapacity for the company to ensure all or a portion of its activity, with possible serious impacts on the safety of persons and assets. The company will most likely not overcome the situation (its survival is threatened).

G3 – SERIOUS – High degradation in the performance of the activity, with possible significant impacts on the safety of persons and assets. The company will overcome the situation with serious difficulties (operation in a highly degraded mode).

G2 – SIGNIFICANT – Degradation in the performance of the activity with no impact on the safety of persons and assets. The company will overcome the situation despite a few difficulties (operation in degraded mode).

G1 – MINOR – No impact on operations or the performance of the activity or on the safety of persons and assets. The company will overcome the situation without too many difficulties (margins will be consumed).

ANSSI defines 4 levels of likelihood:

V4 – Nearly certain – The risk origin will certainly reach its target objective by one of the considered methods of attack. The likelihood of the scenario is very high.

V3 – Very likely – The risk origin will probably reach its target objective by one of the considered methods of attack. The likelihood of the scenario is high.

V2 – Likely – The risk origin could reach its target objective by one of the considered methods of attack. The likelihood of the scenario is significant.

V1 – Rather unlikely – The risk origin has little chance of reaching its objective by one of the considered methods of attack. The likelihood of the scenario is low.
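In practice, a severity level and a likelihood level are combined to position each scenario on a risk matrix. A minimal sketch in Python – note that the scoring rule and the acceptance thresholds below are illustrative choices of mine, not an official EBIOS RM table:

```python
# Illustrative risk matrix combining EBIOS RM severity (G1-G4) and
# likelihood (V1-V4) levels. The multiplicative score and the class
# thresholds are example choices, not part of the method itself.

SEVERITY = {"G1": 1, "G2": 2, "G3": 3, "G4": 4}
LIKELIHOOD = {"V1": 1, "V2": 2, "V3": 3, "V4": 4}

def risk_class(severity: str, likelihood: str) -> str:
    """Classify a scenario from its severity and likelihood levels."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "unacceptable"
    if score >= 6:
        return "to be treated"
    return "acceptable"

print(risk_class("G4", "V3"))  # critical and very likely -> 'unacceptable'
print(risk_class("G1", "V1"))  # minor and rather unlikely -> 'acceptable'
```

Each organisation would tune the thresholds to its own risk appetite.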

ANSSI defines some additional concepts:

Risk Origins (RO – similar to Threat Agent/Actor in ISC2 terminology): something that could potentially exploit one or more vulnerabilities.
Feared Events (FE – equivalent to Threats in ISC2 terminology).
Target Objectives (TO): the end results sought by a Threat Agent/Actor.

A side note : quantitative analysis

ISC2 – CISSP recommends using quantitative analysis for risk qualification.


Getting there requires quantifying your asset value, or at least how much a risk realisation would cost you (Single Loss Expectancy), and then computing an annual loss so that you can compare rare but costly events with smaller but more frequent ones.

I think the two methods are compatible, as nothing stops you from defining afterwards some thresholds that map the monetary values to a severity class (possibly depending not only on the ALE but also on your budget/risk appetite/risk aversion).
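A minimal sketch of that idea, assuming the standard CISSP formulas (SLE = asset value × exposure factor, ALE = SLE × ARO) – all figures and severity thresholds below are made up for illustration:

```python
# Quantitative risk figures, CISSP-style. All numbers are illustrative.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Cost of a single realisation of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """aro: annualized rate of occurrence (expected events per year)."""
    return sle * aro

def ale_to_severity(ale: float) -> str:
    """Map an ALE back to a severity class (illustrative thresholds;
    in practice they depend on budget / risk appetite / risk aversion)."""
    for level, limit in (("G1", 10_000), ("G2", 100_000), ("G3", 1_000_000)):
        if ale < limit:
            return level
    return "G4"

# A rare but costly event vs. a small but frequent one:
rare = annualized_loss_expectancy(single_loss_expectancy(1_000_000, 0.5), 0.25)
frequent = annualized_loss_expectancy(single_loss_expectancy(10_000, 0.5), 12)
print(rare, frequent)                    # 125000.0 60000.0
print(ale_to_severity(rare), ale_to_severity(frequent))  # G3 G2
```

The annualisation is exactly what makes the two cases comparable on one scale.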

A process of discovery

The risk management methods are at their core similar and all contain a number of steps that help establish: what it is that you need to protect, what could happen to it, and what could be done to make sure the effects of whatever happens are managed (or at least accepted).

So the steps are, in general (with some variance in order and emphasis):

  • identify assets (data, processes, physical)
  • identify vulnerabilities associated with your assets
  • identify the threats that exist in your operating environment (taking into account your security baseline)
  • identify the risks and prioritise actions related to them based on their likelihood and severity
    …rinse and repeat.

To help with this process, EBIOS RM defines five workshops, each with expected inputs, outputs and participants:

Workshop 1: Scope and Security Baseline – define the studied scope and its business and supporting assets, identify the feared events and assess their severity, and establish the security baseline.

Workshop 2: Risk Origins – identify and prioritise the risk origins (RO) and their target objectives (TO).

Workshop 3: Strategic Scenarios


Strategic scenarios:

  • a potential attack described with the system as a black box: how the attack would unfold “from the exterior of the system”

Workshop 4: Operational Scenarios


Operational scenarios – identify and describe potential attack scenarios corresponding to the strategic ones, possibly using tools like STRIDE, OWASP, MITRE ATT&CK, OCTAVE, Trike, etc.

Workshop 5: Risk Treatment

Risk treatment strategy options (ISO/IEC 27005/27001):

  • Avoid (results in a residual risk = 0) – change the context that gives rise to the risk
  • Modify (results in a residual risk > 0) – add, remove or change security measures in order to decrease/modify the risk (likelihood and/or severity)
  • Share or Transfer (results in a residual risk that can be zero or greater) – involve an external partner/entity (e.g. insurance)
  • Accept (the residual risk stays the same as the original risk)
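The effect of each option on the residual risk can be sketched as follows – the numeric model is purely illustrative (real treatments are not this mechanical), but it captures the invariants listed above:

```python
# Illustrative model of the four ISO/IEC 27005 risk treatment options
# and their effect on residual risk. Values and rules are examples only.

def residual_risk(original: float, option: str, reduction: float = 0.0,
                  shared_fraction: float = 0.0) -> float:
    if option == "avoid":
        return 0.0                               # risky context removed entirely
    if option == "modify":
        return original - reduction              # measures reduce, rarely eliminate
    if option == "share":
        return original * (1 - shared_fraction)  # e.g. insurance covers part
    if option == "accept":
        return original                          # risk kept as-is
    raise ValueError(f"unknown option: {option}")

print(residual_risk(100.0, "modify", reduction=60.0))        # 40.0
print(residual_risk(100.0, "share", shared_fraction=0.5))    # 50.0
```

Whatever the option, the residual risk is what the organisation ultimately has to accept and monitor.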

In Conclusion:

EBIOS RM is a useful tool for cybersecurity management, aligned with the main cybersecurity standards and frameworks.

There are also plenty of supporting open-access materials (see ANSSI and Club EBIOS) that help conduct the process and produce the required artefacts at each step: templates, guides, etc. – which makes it a prime candidate for adoption in organisations without an already established cybersecurity risk management practice.

The DevOps adventure – my review of “The Unicorn Project”

#devops #agile #lean

Subtitle: A Novel about Developers, Digital Disruption, and Thriving in the Age of Data
Author: Gene Kim
Bought: 30 July 2020


Another book from Gene Kim, The Unicorn Project continues to promote DevOps and agile methods as the solutions to the world’s IT (and corporate) ills, similar to the other books by the author. Besides The Phoenix Project, Gene Kim is known (at least) for The DevOps Handbook and Accelerate (written together with Nicole Forsgren and Jez Humble).

The book re-uses the novel format introduced by The Phoenix Project to trace the archetypal trajectory of a DevOps transformation—the enterprise being the fictional “Parts Unlimited” company (another reference to The Goal?)—a company that displays all the evils of a big “legacy company” in which words like ITIL and TOGAF have replaced the beating entrepreneurial heart and poisoned the spirit of the good IT people (/insert sad violin/).

The book is split into three parts:

  • Part 1 (Chapter 1 – Chapter 7) presents the state of crisis in which “Parts Unlimited” finds itself: losing market share, an IT culture dominated by bureaucracy in which the actual IT specialists (Dev, Ops, QAs) have become mindless zombies accepting the absurdity of corporate decay: infinite request chains, roadblocks, pervasive blame games, etc. In these dark times, there is a light of hope: a few engineers that have refused to sell out their dreams and that are organizing “the resistance” to defend IT ideals.
  • Part 2 (Chapter 8 – Chapter 13) shows the resistance becoming visible and being embraced by (some) management; the new hope spreads.
  • Part 3 (Chapter 14 – Chapter 19) depicts the final battle and the happy aftermath.

The DNA of the change (and of the perfect organization) consists of “The Five Ideals”:

  1. Locality and Simplicity
  2. Focus, Flow, and Joy
  3. Improvement of Daily Work
  4. Psychological Safety
  5. Customer Focus

What shocked me is the total dedication that the Devs, Ops, and QAs have to this failed company: Maxine and her co-conspirators spend almost 100% of their (diminished) waking hours working to transform Parts Unlimited, to take responsibility for production deployments, to set up (in rogue mode) servers and deployment processes alike. Maxine’s family is presented as a passive background actor: something that is happening around her while she toils at the corporate laptop to improve the corporation’s IT.

The story brushes off, without much consideration, the security and compliance implications of direct production deployment by the DevOps team. It minimizes the human and logistic cost of operating and supporting high availability services: in The Unicorn Project, the DevOps team is happy to take responsibility for maintaining and supporting the services themselves—no worry for the on-call requirements, for wearing a pager (without any compensation).

In the end, the “rogue” aspect of the corporate transformation and especially its dependency on the employees’ readiness for seemingly endless sacrifice of self and of family time is the most puzzling and self-defeating aspect of the (assumed) objective of promoting the DevOps transformation path.

On one side, it makes the whole story less relevant to anybody looking for ways to start on the DevOps transformation path: any change is viable if you have access to the best people and if these people are willing to provide you endless time and effort… but this is never the case. In my experience, in a company like “Parts Unlimited,” you’ll soon discover that the best people have already left; those that stay around are mainly there because they have concluded that the way things are is… acceptable for some reason: job security, predictability, etc.

On the other side (and in my view, this is the worst part of The Unicorn Project): this is not needed. The “DevOps transformation” is possible by setting clear intermediary steps—steps that have an immediate advantage for the people involved and the company (e.g., fewer incidents in production, thus less personal impact).

By clear communication of the thinking behind the decisions, transparent tracking of the results to grow confidence, and by showing the team a sustainable path towards balancing private life and professional excellence, you don’t need a war—you need diplomacy, leadership, and knowing what you are doing.

Exploring the origins of Lean – my review of “The Goal”

#lean #lean-through-manufacture #business-novel

Why I read it

I found references to it in Lean Enterprise (Jez Humble), and it was also mentioned by Jez in a talk I saw on YouTube (maybe this one).

What it’s about

The road of discovery traveled by the manager of an embattled factory in the US to finally realise the [[lean]] enterprise principles!

Similar to [[The Phoenix Project]] (which I guess was inspired by it), a hard-working family man is forced to discover the essence of lean in order to save his factory – in the process re-connecting with his team and family… (no kidding!)

What I took from it

Start with identifying the bottlenecks in your delivery process.

The bottlenecks are those activities/resources that are necessary for an end product but that have a throughput lower than the expected throughput of the end product. Meaning: if you need to deliver 5 things per day, and some part of your process can only do 3 sub-things per day – that’s a bottleneck. Everything else that has the capacity to deliver more than your expected output is a non-bottleneck (duh!)…

The goal would be to have each activity’s throughput equal to the expected delivery throughput – not higher, not lower. Which requires: increasing the throughput through the bottlenecks and, possibly, decreasing the throughput through the non-bottlenecks to limit the “work in progress”/inventory.

After each system change/capacity re-balancing, the process needs to be re-evaluated in order to identify potential new bottlenecks, and the cycle restarts. A lot of emphasis is put on the fact that accelerating the steps/processes that are not bottlenecks, or even allowing them to go at their “natural speed” (and thus faster than the bottlenecks), is wasteful, as it doesn’t contribute to increasing the market throughput. So, in a way, the insight is that you need to block work that is not aligned with the delivery… and for me an immediate, significant realisation was that Kanban is, most of all, a system for blocking the creation of wasteful work.

One way to do this re-balancing would be to flag the work that involves the bottlenecks in a way that gives it higher priority compared with the rest.

Another way is to reduce the batch size of the work being done on the non-bottleneck processes so that they respond faster to the bottleneck’s requests.

A point made in the book is that this will increase waste, as smaller batches imply more context switching (“set-ups”) – but as long as this waste doesn’t transform the process into a “bottleneck process”, it’s OK.
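The bottleneck identification described above can be sketched in a few lines (the process names and capacities are invented):

```python
# Toy model of Theory of Constraints bottleneck identification:
# the system throughput is capped by the slowest required step.

def find_bottlenecks(throughputs: dict[str, float], demand: float) -> list[str]:
    """Steps whose capacity is below the demanded end-product throughput."""
    return [step for step, capacity in throughputs.items() if capacity < demand]

steps = {"stamping": 8.0, "heat_treatment": 3.0, "assembly": 6.0}
demand = 5.0  # units per day we need to ship

print(find_bottlenecks(steps, demand))  # ['heat_treatment']
print(min(steps.values()))              # current system throughput: 3.0
```

Raising heat_treatment above 5.0 units/day would fix delivery; speeding up stamping instead would only pile up inventory in front of the bottleneck.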

What I thought

Well, it does make clear (through all sorts of examples, stories, etc.) how important it is to align all your actions/outputs with the business goal… but I kind of hate this type of story with the guy who sacrifices his family and personal time to fight against a (broken) system imposed by “the corporation”.

Trivia

“Garbage in, garbage out” appears in this book, which is from 1984 and about factory parts – after some research (Wikipedia), it seems this expression (just like “having a bug”) is another gift that software engineering gave to the world!