Pharmaceutical Market Europe • October 2025 • 32-33
AI AND SCIENTIFIC RESEARCH
The new wave of GenAI-powered search tools now offers researchers an alternative method for accessing insights at speed
By Cameron Ross
The emergence of generative AI (GenAI) tools promises a new era of scientific search, helping researchers grapple with an ever-growing volume of information and quickly access accurate, reliable scientific insights. One of the most common challenges I hear from the companies I work with is that research teams are forced to spend a considerable amount of their time searching for data, leaving scientists less space to devote to innovation and ideation. In fact, some studies find that researchers spend 25%-35% of their time manually searching literature and papers for insights.
The new wave of GenAI-powered search tools now offers researchers an alternative method for accessing insights at speed. In my experience, researchers are naturally curious and highly driven to uncover novel insights, and the scientists I talk to understand how GenAI will augment their work. Indeed, research has found that an overwhelming majority (94%) of corporate R&D professionals believe that AI will accelerate knowledge discovery.
Against this backdrop, organisations are prioritising investment in GenAI tools in a bid to unleash the promised benefits. Yet R&D budgets are finite, and organisations face an overwhelming number of options as new tools continue to enter the market. GenAI only delivers value when tools are vetted before integration; otherwise, hype and hope quickly turn to disillusionment. For GenAI to deliver meaningful scientific outcomes, any tool under consideration should meet these five criteria.
In scientific research, the logic and context behind a query matter. GenAI research tools must be able to interpret both to produce relevant outputs. Yet context and language are complex in the scientific domain. Conditions and drug names can be recorded in many forms, depending on the author, company jargon or regional colloquialisms. For example, a researcher may search for ‘stomach ache’, but the same condition could appear in the literature under the synonym ‘abdominal pain’ or as a specific condition, like ‘gastroenteritis’ or ‘irritable bowel syndrome’, which is also known by the acronym ‘IBS’.
Ensuring all of these variations are captured and that no literature is missed requires a GenAI tool that can interpret natural language, recognise scientific synonyms and link related terms across data sets. This capability will empower researchers to gain more accurate, comprehensive search results more quickly, regardless of their linguistic preferences.
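The synonym linking described above can be illustrated with a minimal sketch. The mapping and documents below are invented for illustration; real tools rely on curated biomedical vocabularies and natural language understanding rather than a hand-written dictionary:

```python
# Minimal sketch of query expansion via a synonym map.
# The SYNONYMS mapping is illustrative only; production systems
# draw on curated biomedical ontologies, not hard-coded entries.

SYNONYMS = {
    "stomach ache": {"abdominal pain", "gastroenteritis",
                     "irritable bowel syndrome", "ibs"},
}

def expand_query(term: str) -> set[str]:
    """Return the search term plus any linked synonyms (lower-cased)."""
    term = term.lower()
    return {term} | SYNONYMS.get(term, set())

def search(documents: list[str], term: str) -> list[str]:
    """Keep documents mentioning the term or any of its synonyms."""
    variants = expand_query(term)
    return [doc for doc in documents
            if any(v in doc.lower() for v in variants)]

docs = [
    "A cohort study of abdominal pain in adults",
    "IBS prevalence across regions",
    "Unrelated cardiology paper",
]
print(search(docs, "stomach ache"))  # matches the first two papers
```

A plain keyword search for ‘stomach ache’ would miss both matching papers here; expanding the query first is what keeps relevant literature from slipping through.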
General purpose tools such as ChatGPT have created an expectation that AI should have a conversational interface. But publicly available tools lack the domain specificity that scientific research requires. A GenAI research tool must combine a conversational interface with access to verified, full-text scientific sources. This ensures search results are drawn from complete papers, not just abstracts, surfacing insights that are both relevant and accurate.
With access to full papers, researchers can then converse with their GenAI tool. For example, the most advanced research AI models will be able to propose suggested questions to guide scientists’ follow-up queries in a conversational way. In practice, this means that when a researcher asks, ‘What are the causes of stomach ache?’, their search tool can provide alternative queries, such as ‘How do medications like NSAIDs contribute to stomach pain?’ or ‘What are the warning signs that a stomach ache could be a symptom of a serious condition?’ These questions can be coupled with links to relevant research papers to provide scientists with angles they might otherwise not have considered. This ability will turn GenAI from a simple search engine into a research assistant, with the tool aiding ideation and uncovering novel angles in the saturated data ecosystem of R&D.
Building trust in AI tools continues to be a priority. Previous research found that 71% of researchers and academics expect GenAI tools’ results to be based solely on high-quality, trusted sources. However, publicly available AI tools often have unclear data provenance. This makes them unfit for scientific use cases, as researchers must be able to verify that only high-quality data sources were used to generate answers.
The architecture of GenAI tools plays a central role here. Techniques such as retrieval-augmented generation (RAG) place clear parameters around the sources considered by a GenAI tool to ensure its search scope encompasses only relevant documents.
RAG provides a route to improving the accuracy of models, minimising the risk of hallucinations and reinforcing the trustworthiness of AI in R&D.
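In outline, RAG works by retrieving passages from a vetted corpus and restricting the generator to those passages alone. The sketch below illustrates the idea under stated assumptions: the corpus, the keyword-overlap scoring and the `generate_answer` stub are all invented stand-ins, where real systems use vector search and a language model call:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, scoring and the generate_answer stub are illustrative
# assumptions; real systems use vector search and an LLM API.

CURATED_CORPUS = {
    "doi:10.1000/xyz1": "NSAIDs are a recognised cause of abdominal pain.",
    "doi:10.1000/xyz2": "IBS symptoms include recurrent abdominal pain.",
}

def words(text: str) -> set[str]:
    """Crude tokeniser: lower-case and strip simple punctuation."""
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank curated passages by word overlap with the query."""
    q_words = words(query)
    scored = sorted(
        CURATED_CORPUS.items(),
        key=lambda item: -len(q_words & words(item[1])),
    )
    return scored[:k]

def generate_answer(query: str, passages: list[tuple[str, str]]) -> str:
    """Placeholder for an LLM call: the prompt is restricted to the
    retrieved passages, so every claim can be traced to a source."""
    cited = "; ".join(f"{text} [{doc_id}]" for doc_id, text in passages)
    return f"Q: {query}\nGrounded answer drawn only from: {cited}"

print(generate_answer("what causes abdominal pain?",
                      retrieve("what causes abdominal pain?")))
```

Because the generator only ever sees the retrieved passages, every statement in its answer can carry a citation back to a document in the curated corpus, which is precisely what limits hallucination.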
The ‘black box’ effect arises when AI tools produce outputs without revealing the steps or data used to reach conclusions. Opaqueness further increases if the tool does not retain a history of the queries it is asked. Altogether, this lack of clarity severely impedes trust in AI search results and impacts compliance.
Researchers need visibility into how their AI tools reach their conclusions. This requires transparent citing of every paper, with direct links or quotation snippets from original source documents. Such features create a ‘paper trail’, allowing researchers to verify findings.
‘An overwhelming majority (94%) of corporate R&D professionals believe that AI will accelerate knowledge discovery’
The ability to compare the end-to-end process of an experiment – not just its results – is vital to R&D. However, doing this manually and comprehensively is time intensive. GenAI research tools built on full-text sources can accelerate experiment comparison by extracting and synthesising an experiment’s methods, goals and conclusions into a unified view in seconds.
This capability means scientists can review evidence more efficiently and effectively. Some researchers say that AI allows them to read up to ten papers each week instead of two or three, and report that it has enhanced both the quality and depth of their research, freeing them to focus on bench work.
As agentic AI capabilities develop, AI tools are becoming increasingly capable of extracting data held in non-textual formats, such as tables, for easier experiment comparison, further accelerating researchers’ workflows without compromising the information provided to them.
Our experience so far indicates that GenAI will be transformative for researchers by reducing the time spent on literature searches and freeing scientists to dedicate more energy to discovery and bench work. However, in science and many other industries, accuracy is everything, so we must proceed with caution.
For AI transformation to take place, trust and data security are fundamental. In a sensitive context such as drug discovery, AI tools must be trained on trusted, curated, domain-specific data sets. Organisations must also take care to use GenAI platforms with clear guardrails in place to ensure data privacy, user confidentiality and IP protection. Most importantly, AI tools must be built by scientists for science. Off-the-shelf, publicly available GenAI tools do not pass muster.
In my view, the only way for GenAI to deliver on its promise is for organisations to combine trusted content with responsible AI practices. That is how they will earn the confidence required to unlock true scientific innovation.
Cameron Ross is SVP, Generative AI at Elsevier