How should you model downside risk without killing approval chances?

8 min read

Chris Goodwin

Guide

There’s a quiet fear behind many business cases: if you model the downside too honestly, the project won’t get approved. So teams soften assumptions, avoid worst-case scenarios, or bury risks in vague language. The result is a business case that looks strong on paper but fragile in reality.


The irony is that decision-makers don’t expect certainty; what they actually want is clarity.


In this blog, we’ll explore how a well-structured downside analysis doesn’t weaken a business case; it makes it more credible, more transparent, and easier to approve with confidence. The key is not whether you model downside risk, but how you present and structure it.

Why downside risk often gets avoided

Before getting into the “how”, it’s worth understanding why this problem exists. Most teams avoid downside modeling for three reasons:


🚫 Fear of rejection: if the worst-case scenario looks bad, it might stall approval entirely

🧩 Lack of structure: downside risk is often handled qualitatively, making it feel subjective or exaggerated

🤔 Misunderstanding stakeholder expectations: leaders don’t expect perfection; they expect awareness and control


The result is that people feel they face a false trade-off: be honest and risk rejection, or be optimistic and hope for the best. In reality, there’s a third, better option.

Reframe downside risk as decision support, not pessimism

Arguably, the biggest shift is in framing. Downside scenarios should never feel like you’re arguing against your own proposal. Instead, they should show that:


  • You understand what could go wrong

  • You’ve quantified the impact

  • You have a plan to respond


This reframing changes the conversation from “Is this project too risky?” to “Under what conditions is this project still worth doing?”. You’re not highlighting problems; you’re demonstrating control.

Use structured scenarios, not vague worst cases

Unstructured “worst-case” thinking is what creates fear, so instead, define clear, bounded scenarios:


➡️ Base case: your most realistic, evidence-backed outcome

↘️ Downside case: a plausible underperformance scenario, but crucially, not a disaster

↗️ Upside case: what happens if key assumptions outperform expectations


The critical point is that the downside is not catastrophic; it should reflect credible underperformance, not extreme failure. This keeps the analysis grounded and avoids triggering unnecessary alarm.
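To make this concrete, the three bounded cases can be expressed as explicit, named sets of assumptions feeding one shared model. The Python sketch below is illustrative only: the drivers (adoption_rate, benefit_delay_months, cost_multiplier) and every figure are hypothetical.

```python
# Illustrative scenario structure: three bounded cases over one shared model.
# All driver names and figures are hypothetical.

SCENARIOS = {
    "base":     {"adoption_rate": 0.60, "benefit_delay_months": 0, "cost_multiplier": 1.00},
    "downside": {"adoption_rate": 0.45, "benefit_delay_months": 3, "cost_multiplier": 1.10},
    "upside":   {"adoption_rate": 0.70, "benefit_delay_months": 0, "cost_multiplier": 0.95},
}

def first_year_net_value(adoption_rate, benefit_delay_months, cost_multiplier,
                         users=1000, value_per_user=500, annual_cost=150_000):
    """First-year net value: benefits accrue only after any delay."""
    benefit_months = 12 - benefit_delay_months
    benefits = users * adoption_rate * value_per_user * benefit_months / 12
    return benefits - annual_cost * cost_multiplier

for name, assumptions in SCENARIOS.items():
    print(f"{name:>8}: first-year net value = {first_year_net_value(**assumptions):,.0f}")
```

Because the downside is bounded (moderately lower adoption, a short delay, a modest cost increase), it reads as credible underperformance rather than disaster.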

Anchor downside in specific drivers

A common mistake is presenting downside as a single reduced ROI number. On its own, that number isn’t particularly helpful; instead, tie downside scenarios to specific variables, for example:

  • Adoption rates lower than expected

  • Benefits delayed by 3–6 months

  • Costs higher due to implementation complexity


Decision-makers tend to trust what they can trace, so when downside is clearly linked to drivers, it becomes easier to understand, challenge, and manage.

Show the impact, not just the risk

The point of downside modeling isn’t just to produce a long list of risks; what actually matters is their effect, so focus on how key metrics, such as ROI, payback period, and NPV, change across scenarios.



This turns abstract concerns into concrete trade-offs. For example, instead of “Adoption may be slower than expected”, it’s far more useful to say “If adoption is 20% lower, payback moves from 18 to 26 months”.
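The arithmetic behind a statement like that is simple enough to show directly. A minimal sketch with hypothetical figures (deliberately not the 18-to-26-month example above, which would also depend on the shape of the benefit ramp-up):

```python
# How a downside driver (lower adoption) translates into a metric shift
# (payback period). All figures are hypothetical.

def payback_months(investment, monthly_benefit):
    """Months until cumulative benefit covers the up-front investment."""
    return investment / monthly_benefit

investment = 900_000       # one-off cost
base_benefit = 50_000      # monthly benefit at planned adoption

base = payback_months(investment, base_benefit)            # 18.0 months
downside = payback_months(investment, base_benefit * 0.8)  # 22.5 months
print(f"Payback moves from {base:.1f} to {downside:.1f} months")
```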

Highlight resilience, not just exposure

This is where most business cases fall short: people (wrongly) focus on showing how bad things could get, rather than how well the project holds up. You can be fully open and honest about the risks while using downside analysis to answer:


  • Does the project still deliver value under pressure?

  • Does it still meet minimum thresholds or hurdle rates?

  • How far can assumptions move before the case breaks?


If your downside case still clears key thresholds, you’ve strengthened your argument significantly; resilient business cases get approved faster.
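The “how far can assumptions move before the case breaks” question is a break-even calculation. A minimal sketch, assuming a simple linear benefit model and hypothetical figures:

```python
# How far can adoption fall before the case breaks a payback hurdle?
# Assumes benefits scale linearly with adoption; all figures hypothetical.

investment = 900_000            # one-off cost
hurdle_months = 24              # maximum acceptable payback period
users = 2000
value_per_user_month = 50       # monthly benefit per adopting user

# payback = investment / (users * adoption * value_per_user_month) <= hurdle
break_even_adoption = investment / (users * value_per_user_month * hurdle_months)
print(f"The case holds while adoption stays above {break_even_adoption:.0%}")
```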

Pair downside with mitigation actions

Downside without a response feels risky; downside with a plan feels controlled. So for each key downside driver, outline what you would do:


  • If adoption lags → increase enablement or rollout support

  • If costs rise → phase delivery or adjust scope

  • If benefits delay → re-sequence milestones


By doing this, you’re demonstrating that the downside isn’t just understood; it will be actively managed.

Use sensitivity analysis to show what really matters

Not all risks are equal, so sensitivity analysis can help identify which variables actually drive outcomes.


Instead of presenting a long list of risks, focus attention on the few that materially impact ROI. This helps keep discussions focused, avoids overwhelming stakeholders with noise, and reinforces that your downside modeling is analytical rather than speculative.
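One common form this takes (a generic sketch, not any particular tool’s method) is one-at-a-time sensitivity: perturb each driver by the same relative amount and rank drivers by the resulting swing in the outcome. All names and figures below are hypothetical.

```python
# One-at-a-time sensitivity: vary each driver by +/-10% around the base case
# and rank drivers by the swing in net value. Hypothetical toy model.

BASE = {"adoption": 0.60, "value_per_user": 500, "annual_cost": 150_000}

def net_value(adoption, value_per_user, annual_cost, users=1000):
    return users * adoption * value_per_user - annual_cost

swings = {}
for driver in BASE:
    low, high = dict(BASE), dict(BASE)
    low[driver] *= 0.9
    high[driver] *= 1.1
    swings[driver] = abs(net_value(**high) - net_value(**low))

# Largest swing first: these few drivers are the ones worth discussing.
for driver, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{driver:>15}: swing = {swing:,.0f}")
```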

Be transparent, but bounded

Transparency builds trust, but only when it’s paired with structure. Too often, downside analysis is either overly optimistic while ignoring uncertainty or involves vague “worst-case” thinking that introduces unnecessary doubt.


The goal is to show a realistic range of outcomes, without creating ambiguity. 


1️⃣ Define: Start by clearly defining what your downside case represents (it shouldn’t be a catastrophic failure scenario), anchoring it in plausible underperformance, such as:

  • Lower-than-expected adoption

  • Moderate delays in benefit realization

  • Manageable cost increases


This keeps the analysis grounded and relevant to decision-making.



2️⃣ Ease: Next, make your assumptions easy to follow; stakeholders should be able to quickly understand:

  • What changed

  • Why it’s realistic

  • What evidence supports it


Clarity removes the perception of subjectivity.



3️⃣ Quantify: Finally, quantify uncertainty wherever possible. Avoid vague language like “this may vary significantly” and instead:

  • Use defined ranges

  • Show specific scenario shifts

  • Keep outcomes tied to measurable changes


The aim is to build confidence not through blind optimism, but through clarity.


The result of these steps is that the downside view feels controlled, explainable, and useful, rather than open-ended or speculative.

Common mistakes to avoid

Even well-intentioned downside modeling can lose impact if it’s not handled carefully, so watch out for:


📋 Treating downside as a formality: A single “conservative” case with no clear linkage to outcomes doesn’t influence decisions; if it doesn’t change the conversation, it isn’t adding value.


📉 Stacking too many negative assumptions: Combining lower adoption, higher costs, and delayed benefits into one scenario creates an outcome that’s possible, but unlikely. The key is making sure that downside is plausible, not extreme.


🔍 Focusing on risk, not impact: Listing risks without showing how they affect ROI, payback, or NPV keeps the discussion abstract, so remember that decision-makers need to see the consequence, not just the concern.


⏳ Ignoring timing effects: Delays in value realization can be just as important as reductions in value; a project that pays back later may fall outside acceptable thresholds.


📏 Not linking downside to decision thresholds: Without comparing scenarios to hurdle rates or expectations, stakeholders are left to interpret what “bad” actually means, so ensure that your downsides are relevant by including context.


🔗 Disconnecting risk from delivery: If downside scenarios don’t influence how the project will be managed, they won’t influence approval either, so risk, modeling, and execution should all tell the same coherent story.

Where KangaROI helps (without replacing judgment)


As business cases become more complex, structured downside modeling becomes harder to manage manually, which is where tools add real value, not by replacing judgment, but by supporting it. For example:


🆚 Scenario modeling allows you to define and compare base, downside, and upside cases without rebuilding your entire model each time


📊 Sensitivity analysis helps identify which variables actually drive outcomes, so discussions stay focused on what matters


📈 Risk-adjusted ROI provides a clearer view of how uncertainty impacts overall value, rather than relying on a single headline number
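For intuition, one common way to construct a risk-adjusted ROI (a generic sketch, not necessarily how any particular tool computes it) is a probability-weighted average across scenarios; the probabilities and ROI figures below are hypothetical:

```python
# Risk-adjusted ROI as a probability-weighted average of scenario ROIs.
# Probabilities must sum to 1; all figures are hypothetical.

scenarios = {
    # name: (probability, ROI)
    "base":     (0.60, 0.35),
    "downside": (0.25, 0.10),
    "upside":   (0.15, 0.55),
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

risk_adjusted_roi = sum(p * roi for p, roi in scenarios.values())
print(f"Risk-adjusted ROI: {risk_adjusted_roi:.1%}")
```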


The key benefit isn’t just speed; it’s consistency and transparency. When downside assumptions, scenarios, and impacts are clearly structured:


  • Stakeholders can explore trade-offs more easily

  • Conversations stay grounded in data

  • Confidence in the model increases


This helps to shift the conversation from defending assumptions to making better decisions.

How this all improves approval chances


Counterintuitively, strong downside modeling often increases approval likelihood. It shows that you’ve pressure-tested the case, understand the uncertainty, can manage delivery under real conditions, and aren’t relying on best-case assumptions.


After all, approval isn’t about presenting the best possible story; it’s about presenting the most credible, believable one.

Conclusion

Downside risk doesn’t kill business cases; unstructured, unclear, or exaggerated downside does. When modeled properly, downside scenarios:


  • Increase credibility

  • Improve decision quality

  • Build stakeholder confidence

  • Strengthen, rather than weaken, approval chances


The goal isn’t to prove that nothing can go wrong; it’s to show that even when things don’t go perfectly, the decision still makes sense, because that’s what decision-makers are really looking for.

Chris Goodwin

Guest Writer

Drawing on a background in Economics and more than two decades of experience building pricing models and pricing teams across the world, Chris brings deep expertise across a diverse range of industries.
