Probability vs impact: How to score risk properly

Feb 10, 2026

7 min read

Guide

Chris Goodwin

Most risk registers look impressively detailed, yet are almost completely useless for decision‑making. You find rows of risks with scores to two decimal places and pretty red‑amber‑green heatmaps. Yet when approval time comes around, the same questions still hang in the air:


  • Which risks actually matter?

  • What should we do about them?


  • How much do they really change the value of this investment?


The root problem is rarely the list of risks; it’s how their probability and impact are being scored. This guide cuts through the false precision and shows how to make probability and impact meaningful inputs into decisions, not compliance checkboxes.

Why probability and impact are so often misunderstood

On paper, the idea is simple: probability measures how likely a risk is to occur, while impact measures how bad it would be if it did. In practice, though, both are commonly distorted because:


Probability becomes guesswork
Human nature means that teams tend to pick numbers that feel safe or defensible, not realistic. Rare risks get inflated because they are scary, while common risks get downplayed because they feel familiar.


Impact becomes abstract
Instead of tying impact to real outcomes like increased costs, a delay to the project, or an erosion of the benefits, teams use vague labels like “High” or “Medium” without shared meaning.


Scores imply certainty that does not exist
A risk scored at 0.37 probability and £412,000 impact looks scientific, but usually rests on little more than opinion.


The result is a risk register that looks rigorous while quietly avoiding the hard conversations.

Start with decisions, not scores

Before you touch a matrix or a number, it’s important to be clear about one key thing:


What decision is this risk meant to inform?

  • Should the investment proceed at all?

  • Is additional funding justified?

  • Are mitigations worth the cost?


If probability and impact are not connected to a decision, they will often just drift into theatre. Good scoring exists to answer a simple question: how much does this risk change the expected value of the business case?

Probability: start wide, then narrow as certainty increases

Early in a business case, probability is inherently fuzzy, and treating early estimates as precise is where most scoring goes wrong. A better approach is to let probability evolve as evidence improves.


🧭 Start with broad, defensible estimates
Using clear 1 - 5 probability bands (from Rare to Almost Certain) creates a shared baseline without forcing false precision too early. The goal at this stage is alignment, not optimisation.
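As a rough sketch, bands like these can be given indicative percentage ranges. The ranges below are illustrative assumptions, not a standard; calibrate them to your own organisation’s history:

```python
# Illustrative 1-5 probability bands. The percentage ranges are an
# assumption for this sketch, not a standard; calibrate them locally.
BANDS = {
    1: ("Rare",           0.00, 0.05),
    2: ("Unlikely",       0.05, 0.20),
    3: ("Possible",       0.20, 0.50),
    4: ("Likely",         0.50, 0.80),
    5: ("Almost Certain", 0.80, 1.00),
}

def band_midpoint(band: int) -> float:
    """Use a band's midpoint as a working probability until better evidence arrives."""
    _, low, high = BANDS[band]
    return (low + high) / 2

print(band_midpoint(3))  # a "Possible" risk starts at a working estimate of 0.35
```

The midpoint is deliberately crude: at this stage the shared baseline matters more than the exact number.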


🔍 Anchor scores to what you actually know
Initial probability should reflect observable signals: comparable projects, delivery maturity, vendor track record, and known dependencies. If none of these exist, that uncertainty should itself be reflected in the score.


🔬 Increase precision as risk is burned down
As milestones are passed, contracts are signed, or controls are implemented, probability can justifiably tighten. This is the point where moving from broad bands to more specific percentages becomes meaningful.


🔄 Expect probability to move
If probability scores look the same at approval, mid‑delivery, and post‑go‑live, then chances are they’re not being used properly. It’s important to realise that change is a signal of learning, not weakness.


The aim is not perfect prediction; instead, it’s to reflect increasing confidence as uncertainty is reduced.

Impact: tie it to value, not severity labels

Impact should answer a brutally practical question:


If this risk materialises, what changes in the business case?


💰 Translate impact into value terms
Impact should connect to cost increases, benefit erosion, revenue delay, or risk‑adjusted ROI. Words alone are not enough.


🚨 Avoid worst‑case fantasies
Impact is not the absolute worst thing that could happen. It is the credible downside if the risk occurs, given reasonable response.


⏱️ Separate one‑off impact from ongoing drag
A one‑time cost overrun is very different from a permanent reduction in benefits, so treat them differently.
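To make the distinction concrete, here is a minimal Python sketch that combines a one‑off overrun with the present value of ongoing benefit erosion. The figures and discount rate are invented for illustration:

```python
def impact_in_value_terms(one_off_cost: float,
                          annual_benefit_erosion: float,
                          years: int,
                          discount_rate: float) -> float:
    """One-time overrun plus the present value of ongoing benefit erosion."""
    ongoing_pv = sum(
        annual_benefit_erosion / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return one_off_cost + ongoing_pv

# A £200k overrun plus £50k/year of benefit erosion over 5 years at 8%:
total = impact_in_value_terms(200_000, 50_000, 5, 0.08)
print(f"£{total:,.0f}")
```

Note that the ongoing drag roughly doubles the headline £200k figure, which is exactly the kind of difference severity labels hide.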


Labels are easy; when impact is expressed in value terms, it becomes comparable, actionable, and impossible to ignore.

The probability‑impact matrix: useful, but limited

The classic 5x5 matrix is popular for a reason: it simplifies prioritisation and forces a conversation about it. Used well, it helps teams focus attention; used badly, it hides reality.


🆚 Relative ranking is not value
Two risks may sit in the same red square yet differ by millions in expected impact, depending on what each one actually affects.
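A tiny sketch makes the point; both risks below are hypothetical and the figures are invented:

```python
# Two hypothetical risks that land in the same "red" square of a 5x5
# matrix, yet differ enormously in expected monetary value.
risks = {
    "vendor insolvency": {"probability": 0.30, "impact": 8_000_000},
    "reporting delay":   {"probability": 0.30, "impact": 400_000},
}

for name, r in risks.items():
    expected = r["probability"] * r["impact"]
    print(f"{name}: expected impact £{expected:,.0f}")
```

Same square, same colour, a twenty‑fold difference in expected value.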


🎨 Colour does not equal urgency
A medium‑probability, very‑high‑impact risk can matter more or less than a high‑probability, low‑impact one; once again, the specifics matter.


💬 Matrices should drive conversation, not conclude it
The matrix should always be a starting point for discussion, not a final answer.


This holds well beyond risk analysis: the moment a heatmap replaces actual thinking, it has failed.

Making probability and impact decision‑grade

To move from compliance to insight, you need to bring probability and impact together in a way that reflects value, and you can do this by:


🧮 Focusing on expected impact
Expected impact combines probability and impact into a single concept. The aim isn’t to create false precision, but to compare risks on the same basis.
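As a sketch, expected impact puts every risk on one scale so they can be ranked directly. All names and figures below are hypothetical:

```python
# Hypothetical risks: (name, probability, impact in £).
risks = [
    ("supplier misses key milestone", 0.40, 1_200_000),
    ("benefit adoption shortfall",    0.25, 3_000_000),
    ("data migration rework",         0.60,   300_000),
]

# Rank by expected impact (probability x impact), largest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, impact in ranked:
    print(f"{name}: £{p * impact:,.0f} expected")
```

Notice that the most probable risk ranks last: frequency alone is a poor guide to where attention belongs.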


📉 Modeling downside explicitly
Instead of burying risk in narrative, show explicitly how risks affect ROI, NPV, or payback under different scenarios.
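One way to do this is to compare NPV under a base case and a downside case where the risk materialises. This Python sketch uses invented cash flows and an assumed 10% discount rate:

```python
def npv(cash_flows: list[float], rate: float) -> float:
    """NPV of a series of cash flows, where cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical business case: £1m outlay, £400k/year benefits for 4 years.
base     = [-1_000_000, 400_000, 400_000, 400_000, 400_000]
# Downside: cost overrun at year 0 and benefits delayed in years 1-2.
downside = [-1_200_000, 300_000, 300_000, 400_000, 400_000]

rate = 0.10
print(f"base NPV £{npv(base, rate):,.0f}, downside NPV £{npv(downside, rate):,.0f}")
```

In this invented case the downside scenario flips the NPV from positive to negative, which is precisely the kind of trade‑off a narrative risk register never surfaces.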


🔗 Linking mitigations to movement
A mitigation that doesn’t reduce probability or impact isn’t actually a mitigation. Scores should visibly change when actions are taken.
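A quick way to sanity‑check a mitigation is to price it against the movement it buys in expected impact. The numbers below are hypothetical:

```python
def mitigation_net_value(p_before: float, p_after: float,
                         impact: float, cost: float) -> float:
    """Reduction in expected impact bought by a mitigation, minus its cost."""
    return (p_before - p_after) * impact - cost

# Halving a 40% probability on a £1m impact with a £120k mitigation:
net = mitigation_net_value(0.40, 0.20, 1_000_000, 120_000)
print(f"net value of mitigation: £{net:,.0f}")
```

If the net value is negative, the mitigation costs more than the risk movement it delivers, and the scores should say so.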


This is where risk stops being a list and starts being a lever.

Common scoring mistakes to avoid

Even mature teams fall into the same traps:


⏸️ Treating probability and impact as fixed at approval
Early scores are often treated as permanent, even though evidence, controls, and delivery progress should change them over time.


🎭 Scoring risks to justify a preferred outcome
When scores are used to support a decision already made, risk becomes theatre rather than insight.


🧩 Mixing delivery risks with strategic risks
Operational issues and existential threats behave very differently, yet are often scored on the same scale without distinction.


📚 Assuming more detail automatically means better insight
Extra decimals, longer descriptions, or bigger registers rarely improve decisions unless they change what people do next.


If your scoring never changes the decision, then it’s not doing its job.

How KangaROI makes this easier

In tools like KangaROI, probability and impact are designed to feed directly into risk‑adjusted ROI and NPV.


Risks are scored using a clear probability‑impact framework, surfaced through a 5x5 matrix, and translated into expected monetary value. As the project matures and mitigations are applied, those numbers move, and the business case moves with them.


The point is not the framework but the outcome: clear trade‑offs, transparent assumptions, and decisions that acknowledge uncertainty instead of pretending it doesn’t exist.

The real test of good risk scoring

A simple question reveals whether probability and impact are being scored properly:


If this risk disappeared tomorrow, would the decision change?


If the answer is no, then the scoring is just noise. Good risk scoring doesn’t eliminate uncertainty; rather, it makes uncertainty visible, comparable, and manageable. And that’s what turns risk from a compliance exercise into a decision advantage.

Chris Goodwin

Guest Writer

Drawing on a background in Economics and more than two decades of experience building pricing models and pricing teams across the world, Chris brings deep expertise across a diverse range of industries.

