HEALTHCARE COMMS
The HCA spoke to senior leaders to gain insight into the challenges, perspectives and opportunities
By Mike Dixon and Alister Sansum
The Healthcare Communications Association (HCA) first published guidance on evaluation in communication activities, at that time predominantly from a public relations perspective, over 20 years ago. While the world and our industry have changed significantly, the importance of this topic remains, as do the challenges and barriers to implementation.
In 2023, the HCA’s Standards and Best Practice Committee has been considering the sector’s use of evaluation in communication activities, with the aim of encouraging more communicators to incorporate measurement and evaluation into their programmes.
To gain insight into the challenges, perspectives and opportunities, a series of qualitative interviews was undertaken with 14 senior leaders representing a cross-section of healthcare communications disciplines – including advertising, medical education (MedEd), medical affairs (MA) and public relations (PR) – in agency and industry roles. Despite some variance between respondents, often reflecting their specialisation, many consistent insights emerged from the research. These should inform all practitioners about where efforts need to be focused to improve the evaluation of our communications initiatives, to demonstrate value and to further elevate the role of communications within commercial, medical affairs and charity settings.
There was strong consensus that evaluation is an important component of any communications programme: on a scale of 1 to 10, where 10 represented greatest importance, all responses were above 8. Despite this, the consistency with which evaluation criteria were incorporated into respondents’ organisations’ communications activity was far more varied, and in no case universal. This variation also existed within individual organisations – across teams and geographies – suggesting that often no universally implemented strategy for ensuring evaluation existed, or was being followed. Agencies that felt they had a strong philosophy of evaluation suggested their clients often did not share the same commitment, meaning evaluation was frequently the first thing to be scaled down or removed.
“I do find measurement in the same vein as project management. Measurement is the first thing to be scaled down considerably or sometimes asked to be removed.” Research agency interviewee
What was driving such a strong perception of the importance of evaluation? Interviewees recognised that the benefits of good evaluation go far beyond demonstrating a programme’s success. Identifying and understanding the real-life impact on healthcare professionals (HCPs) and, importantly, on patient health not only demonstrates value but also delivers the emotional and motivational benefits of achievement and pride. It was felt that evaluation helped provide the evidence to demonstrate the power of communications to all stakeholders, internally and externally. Importantly, there was also recognition that it is not all about showing success. Evaluation’s power is that it can help us learn more about our audiences and deliver a better understanding of the impact different strategies can have, helping to identify new opportunities and optimise future initiatives.
So, with a perceived importance and clearly identified benefits, what did those interviewed consider to be preventing evaluation across all their communications activity? Answering this requires a degree of interpretation of the responses, because the specific practical barriers cited varied to some degree by discipline. There was certainly a common thread that time and financial constraints often meant evaluation was either not incorporated into a programme, or was ultimately not implemented.
More in-depth questioning suggested a disconnect between the perceived importance of evaluation and the perception of its value. This is a paradox that perhaps needs to be explored further: with such belief in its importance and recognition of its potential benefits, it might be assumed that the value of evaluation would be unambiguous. In practice, and across all stakeholders, that does not seem to be the case.
Many practical reasons were given for not delivering a comprehensive evaluation, but the many programmes that have succeeded in overcoming them suggest these barriers are surmountable where the desire is strong enough. Examples of the blockers documented in this research included:
Communication programmes are often part of a mix of activity delivering against an objective, which interviewees felt can make it very difficult to determine the impact of the communications programme in helping achieve that overall desired outcome/behavioural change.
“I think people are scared of [evaluation] because they don’t know how to tackle it. They don’t know how to go about it or what it can deliver.” Research interviewee
Throughout the research, interviewers noted that the words ‘evaluation’ and ‘measurement’ were frequently used as if they were interchangeable, ie, that measurement was evaluation and vice versa. This is an interesting insight and perhaps suggests the difference between the two is not always understood or differentiated in practice.
The research also asked participants to consider ways evaluation could be better embedded into everyday practice. Again, several common suggestions emerged which have the potential to act as a roadmap for all communication professionals.
Recognising the disconnect between perceived importance and perceived value, many participants felt the value of evaluation needed to be better articulated and communicated to all stakeholders. This includes the need to upskill all relevant stakeholders across disciplines, so they can encourage and lead the embedding of evaluation in all communications initiatives.
“If measurement is done well and throughout the year, it enables you to iterate, evolve and optimise tactics. It provides insights that will inform what you do in 1 year, 2, 3, or 4 years’ time, meaning everything you’re doing is evidence-based.” Research interviewee
There was consensus amongst those interviewed that evaluation methodology should be built into the communications programme from the outset, tailored to the specific objectives, and with a clear understanding of the desired outcome and of what will be measured, how and when. With regard to the latter, it was considered important for milestones to be positioned along the project timeline to allow assessment and, if necessary, course correction.
“Best practice starts with understanding what we should evaluate and how we will measure it. Only upon knowing that can we plan a campaign around it.” Research interviewee
Metrics used to support the evaluation programme need to be both quantitative and qualitative, and to move beyond tangential metrics (such as the number of attendees or media impressions) to include real measures of impact on an audience’s opinions, perceptions and behaviours.
“…has what we’ve done actually changed the way a healthcare professional thinks about treating a disease or talking about a disease?” Research interviewee
Evaluation should ascertain whether the programme has achieved what it set out to do, but it is equally important for it to show what can be learnt. The evaluation conclusions should therefore be shared openly and honestly with all stakeholders.
“[Evaluation] is an opportunity to both simultaneously learn and showcase the work.” Research interviewee
The insights from this research should continue to remind all communications practitioners of their responsibility to embed evaluation in all our work, to further our profession and to confirm the importance of impactful communications within the broader portfolio of activity. It also provides clear direction on where to focus efforts to improve our evaluation discipline.
Acknowledgements: The original survey was developed and undertaken by the Evaluation Working Group, part of the HCA’s Standards and Best Practice Committee. The survey analysis, on which this article is based, was undertaken by Lynn Hamilton at Synthesis Health.
Mike Dixon is CEO of the Healthcare Communications Association (HCA) and Alister Sansum is Director of Scientific and Medical Communications at Publicis Health and Chair of the HCA’s Standards and Best Practice Committee