Pharmaceutical Market Europe • March 2026 • 15
HEALTHCARE
When we can show movement, impact stops being a slogan and becomes a direction
For as long as I can remember, we’ve presented dashboards bursting with activity metrics and trusted that they translate into evidence of meaningful change.
But activity is not progress, and progress is not impact.
Confusing the three is more than a measurement problem. It’s symptomatic of how difficult it is to measure incremental progress towards a true goal, and it reflects other pressures: incentives; reporting requirements and the natural anxiety around attribution. The workforce churn now common across both the in-house and agency worlds doesn’t help, disrupting the continuity required for long-term measurement.
It’s also about the questions we fail to ask: does this proxy predict the outcome it stands in for?
If we want to be credible about improving health outcomes – and I believe most of us genuinely do – we need to stop measuring motion and start proving movement.
Here’s the pattern. We select metrics that are visible, attributable and available by the end of each quarter. We optimise programmes to grow those numbers. Over time, the metric becomes the mission, and we end up with sophisticated reporting on outputs that do not reliably predict real-world change.
Call it the proxy trap: the moment a proxy becomes a target, it stops being a useful signal.
This is not an argument against convening stakeholders, tracking sentiment or expressing willingness to change behaviour. Those can be necessary. The issue is what happens next. Too many programmes stop there, or even before that, at attendance or satisfaction, and never test whether any shift occurred that could plausibly lead to better engagement, better clinical decisions or better outcomes for those in our care.
To escape the trap, we need to be precise about what we’re measuring.
Activity is what we do today: posts; booth assets; webinars and emails. Activity metrics tell us the machine ran.
Progress is what shifts over weeks and months: gaps perceived; scientific evidence understood; confidence built and behaviours changed. Progress is often measurable if we design for it.
Impact is what changes over longer periods, even years: earlier diagnosis; swifter access to appropriate treatment; updated clinical guidelines; fewer hospital admissions and better quality of life. It is what we ultimately care about, but it is rarely attributable to a single initiative.
Activity without progress is a well-run programme report. Progress without an impact pathway is busywork. Impact without progress measures is mostly storytelling.
Not every programme can measure outcomes directly, nor should it try. In complex health systems, surrogate measures are often necessary. The issue is whether we choose surrogates that are plausibly predictive or merely convenient.
A simple test: would we expect this metric to change if the real-world decision changed, and would we expect it to change before patient outcomes shift? If so, it can be a credible indicator of progress. If not, it’s probably an activity count in disguise.
Which raises a practical question: where does the evidence of progress come from?
One objection to progress measurement is that the data doesn’t exist or sits outside our walls. That’s increasingly untrue.
Most tactical vehicles now generate richer data ecosystems than we routinely exploit. Technology-enabled insight-capture platforms that aggregate multiple sources can also help us identify patterns indicating movement: rising confidence; shifting language and clustering opinion around specific evidence. The challenge is less about data availability and more about asking progress-oriented questions before we deploy, so we know what signals to look for.
We don’t need new dashboards. We need measurement spines that link contribution to consequence.
Pick one stage beyond where we currently measure. If we’re counting attendees, measure confidence or decision quality. If we’re measuring confidence, look for behaviour in sample settings. If we can see behaviour change in pockets, work towards a larger system change that makes it stick.
Then match cadence to the speed of change. Short-term leading indicators, such as comprehension, confidence and decision quality, tell us whether the programme is plausible. Biannual behavioural signals tell us whether it’s translating. Longer-term markers, such as pathway updates, reimbursement shifts and guideline changes, indicate whether it’s sticking.
As an industry, we will keep producing dashboards full of impressive numbers until we collectively decide that looking busy is no longer acceptable evidence of value.
How about we pick one programme? Map it to the progress chain. Choose one indicator beyond activity that signals real movement. Track it consistently for a year, even if imperfect. Then answer the question stakeholders care about: what changed?
Because when we can show movement, impact stops being a slogan and becomes a direction.
Neil Flash is owner of Ignition Consulting and Co-Chair of the Communiqué Awards