Most business cases look impressive on paper; they model ROI, forecast NPV, estimate payback periods, and define financial targets and expected benefits. Sometimes you’re even lucky enough to find everything neatly tied to an outcome.
Then six or twelve months later, someone asks a simple question:
“Are we on track to realise the value?”
And the uncomfortable truth is that many business cases rely almost entirely on lagging metrics; by the time those metrics move, it's already too late to change course.
In this blog we’ll explain the difference between leading and lagging metrics in business cases, why over-relying on lagging indicators creates hidden risk, and how to design leading signals that tell you whether value is likely to materialise before it’s too late to act.
What are lagging metrics?
Lagging metrics measure outcomes after they have already happened: they confirm results and validate success or failure. But crucially, they don’t give you time to intervene.
Common lagging metrics in business cases include:
💰 Revenue generated
📈 ROI achieved
🧾 Cost savings realised
📊 Margin improvement
⏳ Payback period reached
These are important: if nothing else, they are often the headline numbers that justified the investment in the first place. The problem is timing.
👉 e.g. If your business case predicts $2m in annual cost savings, you only know whether that happened at the end of the period. If adoption was poor, if process change failed, or if assumptions were wrong, the signal arrives after the damage is done. At that point, we’re firmly in “horses and stable doors” territory.
Lagging metrics tell you what happened, but what they don’t do is help you shape what happens next.
What are leading metrics?
Leading metrics are early indicators that signal whether you’re on track to achieve your intended outcomes. They measure behaviours, inputs, adoption, and intermediate effects that must occur before financial value can materialise.
So if your business case assumes revenue growth from a new product feature, leading metrics might include:
👥 Active user adoption rate
🔁 Feature usage frequency
📉 Drop-off rate in onboarding
🧠 Sales enablement completion
📞 Demo-to-close conversion rate
Whereas if your case assumes cost savings from automation, leading metrics could be:
⚙️ % of transactions processed automatically
🕒 Average handling time reduction
📋 Error rate trend
👩‍💻 Staff training completion
These metrics don’t prove that value has been realised, but they do indicate whether the conditions required for value are forming, which means leading metrics give you leverage.
Why most business cases over-rely on lagging indicators
There are three common reasons:
💵 Financial models dominate the conversation
Business cases are often built around financial outputs, such as ROI, NPV, and IRR, and these are inherently lagging because they measure results. For example, ROI as a metric is retrospective by nature, as it confirms value after it has been created; it doesn’t explain how that value is built.
🤝 Lagging metrics are easier to agree on
Revenue, cost savings, and profit are universal, so they're simple to communicate at board level. Leading metrics often require more thought; they’re usually specific to the initiative and often cross-functional, which forces deeper conversations about what actually drives value.
🙋🏻‍♀️ Accountability feels clearer with outcomes
Executives like outcome-based targets; “Deliver $3m in savings” sounds decisive. But without leading metrics, that target becomes a cliff edge: you only discover failure when the savings don’t materialise, and by then you’re already over the edge. That’s hindsight, not control.
The risk of waiting for lagging signals
When you only track lagging indicators:
You identify problems too late to fix them
You miss early warning signs of assumption failure
You struggle to explain underperformance
You reduce the business case to a one-time approval document
A strong business case shouldn’t just secure funding; it should provide a framework for managing value in-flight, and that requires forward-looking signals.
How to design leading metrics in a business case
Designing leading metrics starts with a simple question:
“What must be true for this ROI to materialise?”
Then you work backwards from the financial outcome.
1️⃣ Map the value chain
Break the value assumption into stages.
👉 e.g.
Launch feature
Users adopt feature
Usage changes behaviour
Behaviour change drives revenue
Revenue improves ROI
Each stage implies measurable signals, so if adoption stalls at stage two, revenue will never arrive. A leading metric here is therefore adoption rate, not revenue.
2️⃣ Identify behavioural drivers
Financial outcomes are downstream of behaviour, so ask:
What must users do differently?
What must employees do differently?
What processes must change?
Then design metrics around those shifts.
👉 e.g. For a digital transformation case, revenue growth might depend on:
% of customers migrating to digital channel
Reduction in manual processing steps
Increase in self-serve transactions
These are leading indicators of efficiency and scalability.
3️⃣ Make metrics actionable
Good leading metrics are:
🧭 Directional; they move before financial impact
🔄 Frequent; usually measured weekly or monthly
🎯 Controllable; someone can influence them
🔍 Diagnostic; they explain why value may or may not occur
If no team can influence the metric directly, it’s probably still a lagging indicator.
4️⃣ Link leading and lagging explicitly
Don’t treat them as separate worlds; instead, in your business case, show:
Leading metrics
The logic that connects them to financial outcomes
The lagging metrics they’re expected to drive
This creates transparency while also strengthening credibility, because you’re making your assumptions testable.
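As an illustrative sketch (the structure, metric names, and numbers below are hypothetical, not taken from any specific business case), this linkage can be written down as data, so that every assumption connecting a leading metric to a financial outcome is explicit and checkable:

```python
# Hypothetical sketch: express each value driver as a testable link
# between a leading metric, the assumed logic, and the lagging
# metric it is expected to drive. All names and numbers are invented.
from dataclasses import dataclass


@dataclass
class ValueDriver:
    leading_metric: str   # early signal, e.g. adoption rate
    logic: str            # the assumption connecting leading to lagging
    lagging_metric: str   # the financial outcome it should drive
    target: float         # level the leading metric must reach
    actual: float         # latest observed value

    def on_track(self) -> bool:
        # Simple rule: the leading metric must be at or above target.
        return self.actual >= self.target


drivers = [
    ValueDriver("feature adoption rate", "adoption -> usage -> revenue",
                "annual revenue uplift", target=0.60, actual=0.42),
    ValueDriver("% transactions automated", "automation -> lower handling cost",
                "annual cost savings", target=0.75, actual=0.80),
]

for d in drivers:
    status = "on track" if d.on_track() else "AT RISK"
    print(f"{d.lagging_metric}: {status} ({d.leading_metric} = {d.actual:.0%})")
```

The point isn’t the code itself; it’s that once the logic is written down this plainly, a governance review can challenge the target, the observed value, or the assumed chain of cause and effect separately.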
Leading metrics reduce business case risk
Leading metrics aren’t just operational KPIs, they’re risk controls because when you make the effort to define them upfront:
💡 You surface hidden assumptions
🆚 You can run scenario modelling against adoption rates
🧮 You can stress-test how sensitive ROI is to behaviour changes
🔄 You create early intervention points
Instead of asking “Did we deliver ROI?”, you find yourself asking “Are the drivers of ROI trending in the right direction?”, and that shift changes the quality of governance conversations.
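That kind of stress-test can be sketched in a few lines. The figures here are invented for illustration, and the model deliberately assumes savings scale linearly with adoption, which is itself an assumption worth surfacing:

```python
# Hypothetical sensitivity sketch: how sensitive is ROI to the
# adoption rate assumed in the business case? All figures invented.
investment = 2_000_000                 # one-off cost of the initiative
savings_at_full_adoption = 5_000_000   # annual savings if adoption hits 100%


def roi(adoption_rate: float) -> float:
    """First-year ROI, assuming savings scale linearly with adoption."""
    savings = savings_at_full_adoption * adoption_rate
    return (savings - investment) / investment


# Stress-test the assumption across plausible adoption scenarios.
for rate in (0.4, 0.6, 0.8, 1.0):
    print(f"adoption {rate:.0%} -> ROI {roi(rate):+.0%}")
```

Running a table like this in the business case itself makes the break-even adoption level visible before anyone commits to the headline ROI figure.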
A practical example
Let's imagine a business case forecasting $5m in savings from process automation.
The lagging metric is therefore:
Annual cost reduction achieved
Leading metrics might include:
% of eligible processes automated
Average cycle time per transaction
Manual rework rate
Staff redeployment rate
If automation adoption stalls at 40 percent, you’ll know within weeks that the savings assumption is at risk, which means you can intervene, adjust scope, provide additional training, or revisit the financial forecast. Without leading metrics, you’ll only discover the shortfall either at year-end or in a post-mortem at the end of the project.
One approach protects value, while the other explains failure.
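A minimal sketch of that early-warning check, with invented figures consistent with the example above (the adoption target and the linear scaling are assumptions, not facts from the case):

```python
# Hypothetical early-warning check for the automation example.
# Assumes savings scale roughly linearly with the share of eligible
# processes automated; all figures are illustrative.
forecast_savings = 5_000_000   # business-case forecast ($/year)
target_automation = 0.90       # adoption assumed to deliver the forecast
actual_automation = 0.40       # observed in the first weeks

# Projected savings if adoption stays where it is today.
projected = forecast_savings * (actual_automation / target_automation)
shortfall = forecast_savings - projected

if projected < forecast_savings:
    print(f"AT RISK: projected ${projected:,.0f} vs ${forecast_savings:,.0f} "
          f"(shortfall ${shortfall:,.0f}); intervene now")
```

The check is crude by design: its job is to fire early and trigger a conversation, not to produce a precise revised forecast.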
The balance: you need both
As with so many things, and particularly with metrics and measurement, there is no simple “this is right, this is wrong”; if there were, nobody would bother with the “wrong” one. In reality:
Lagging metrics validate success, so they matter for reporting, incentives, and strategic alignment.
Leading metrics create control, so they matter for delivery, risk mitigation, and course correction.
A robust business case will include:
Clear financial outcomes
Explicit assumptions
Defined leading indicators for each major value driver
A cadence for tracking and acting
When these are connected, the business case stops being a static approval document and becomes a living value management framework.
Final thought
If your business case only measures ROI at the end, you’re effectively flying without instruments.
The smart approach is to design leading metrics that signal whether value is forming; make them visible; review them regularly; and link them directly to your financial model.
The goal isn’t just to prove value after the fact, it’s to create the conditions for value to materialise in the first place.
And that’s the difference between reporting performance and managing it.