Pharmaceutical Market Europe • March 2025 • 15

HEALTHCARE

NEIL FLASH

IS CRITICAL THINKING AT RISK?


While AI tools offer undeniable benefits, they also risk diminishing our engagement in deep, reflective thinking and learning processes


As professionals working across the pharmaceutical and biotech sectors – whether in communications, public affairs, access, regulatory, marketing or medical functions – we are all strategic communicators.

Through standalone platforms or seamlessly incorporated technology, there is no doubt that AI has helped, is helping and will continue to help us all with workflow efficiency and scalability. However, the nuanced nature of communicating within the health environment has always demanded more than just algorithm-driven solutions.

The exponential incorporation of artificial intelligence (AI) into our daily workflow raises profound questions about its impact on critical thinking, strategic decision-making and collaborative problem-solving.

Recent research has brought increasing attention to the relatively new concepts of ‘cognitive offloading’ – where complex thinking is deferred to AI systems – and ‘metacognitive laziness’ – where motivation for learning becomes diminished. These concepts highlight crucial implications for our industry.

The risk of AI-induced cognitive dependencies

Studies published over the last year or so have revealed striking patterns, most notably a significant negative correlation between frequent AI tool use and critical thinking ability. It has also been observed that while AI tools offer undeniable benefits, they risk diminishing our engagement in deep, reflective thinking and learning processes.

As the technology itself and accompanying regulations supporting the use of AI in strategic communications continue to evolve, maintaining human oversight and our own thinking capabilities becomes even more crucial for ensuring relevance, accuracy and trust.

The importance of human-to-human engagement

Health communications is not, and never will be, a linear process of information exchange; it is a dynamic, iterative dialogue between multiple stakeholders. Academics, healthcare professionals, patients and consumers, regulators and policymakers all contribute crucial and unique perspectives, often rooted in different insights, experiences and expectations. When difficult issues arise – whether tackling dis/misinformation, managing health crises, addressing health inequities, navigating access challenges, communicating complex data or making treatment decisions – relying solely on AI-driven outputs risks oversimplifying challenges that require nuanced deliberation and consensus-building.

While AI excels at data processing and pattern recognition, it cannot replicate the depth of human empathy, contextual understanding or ability to navigate ethical dilemmas. Successful communication in the health arena relies on the ability to engage with stakeholders directly, fostering trust through open dialogue and debate.

Hybrid intelligence: finding the right balance

The concept of hybrid intelligence offers a framework for maintaining critical thinking while leveraging AI’s capabilities. Rather than viewing AI as either a threat or a panacea, we should consider it a complementary partner. This means using AI for its strengths – rapid data analysis, pattern recognition and initial content generation – while preserving human judgement for strategic decisions, ethical considerations and stakeholder engagement.

Building critical thinking in an AI-augmented landscape

While AI might help us respond quickly to emerging situations by providing a useful frame or points of reference, our value lies in understanding context, anticipating stakeholder needs and crafting nuanced messages that resonate across different audiences and cultures. This is particularly crucial in health communications, where messages must often bridge the gap between scientific complexity and public or professional understanding.

To mitigate cognitive offloading while leveraging AI’s benefits, we should actively foster environments that encourage deep, reflective thinking and collaborative engagement and, most importantly, maintain cognitive capabilities by:

  1. Encouraging cross-disciplinary collaboration: engage experts from diverse and related fields to examine complex issues from multiple perspectives rather than through an AI-generated lens of past patterns
  2. Promoting active stakeholder dialogue: prioritise direct interactions with stakeholders to gain real-world perspectives and refine messaging accordingly
  3. Enhancing AI literacy: train teams to utilise AI-driven platforms and to critically assess AI-generated content by questioning assumptions, verifying sources, identifying potential biases and spotting gaps
  4. Applying critical thinking frameworks: use structured techniques such as stakeholder impact mapping to anticipate diverse perspectives, root cause analysis for communication challenges and decision trees for complex messaging strategies. These frameworks ensure rigorous analysis and help navigate the balance between rapid response capabilities (where AI excels) and thoughtful, strategic communications that require deep contextual understanding
  5. Balancing efficiency with ethics: while AI may optimise workflows, ethical oversight remains a human responsibility.

In conclusion

The accelerated integration of AI in our work is inevitable, but its role must be carefully calibrated to avoid an unchecked AI reliance that might lead to ‘cognitive complacency’ and ‘metacognitive laziness’. By fostering critical thinking, insight-driven engagement and interdisciplinary dialogue, we can harness AI’s potential while safeguarding the strategic thinking and essential human elements that drive impactful and responsible health communications.


Neil Flash is owner of Ignition Consulting and Co-Chair of the Communiqué Awards
