When behavioural research meets generative AI: promises, pitfalls and practical guidance

Category: NCRM news
Author(s): Dr Guanyu Yang (University College London), Dr Amy Rodger (University of Edinburgh), Dr Janna Hastings (University of Zurich), Professor Robert West (University College London), Professor Susan Michie (University College London)

Imagine you have two months to complete a systematic review on an unfamiliar topic. Faced with the possibility of identifying more relevant papers than you could review in this timeframe, you consider whether generative artificial intelligence (AI) tools like Elicit, Claude or Perplexity might streamline your workflow by searching, screening or extracting data from papers for you. This scenario reflects a broader question facing the behavioural research community: can we harness generative AI’s potential benefits while maintaining scientific rigour and research integrity?

This question matters because AI could potentially be applied across the entire behavioural research lifecycle. Below are just some of the proposed uses, which often come with the promise of more efficient, better-quality research:

  • Supporting research design through hypothesis generation and identifying confounding variables
  • Facilitating evidence synthesis via literature searches, screening, and data extraction (see the code sketch after this list)
  • Informing intervention development by suggesting behaviour change techniques or adapting interventions to specific populations
  • Enabling data collection and analysis through survey design, code generation, and qualitative coding
  • Assisting with academic writing, from manuscript structuring to grant applications
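
To make one of these uses concrete, here is a minimal sketch of what delegating title-and-abstract screening to an LLM might look like. It uses the OpenAI Python client purely as an illustration; the model name, prompt and inclusion criteria are hypothetical placeholders, not recommendations, and any real pipeline would need the kind of validation discussed later in this post.

```python
# Minimal sketch of LLM-assisted title/abstract screening.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and criteria below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

CRITERIA = """Include only studies that:
1. evaluate a behaviour change intervention in adults, and
2. measure behaviour directly (not just intentions)."""

def screen_abstract(title: str, abstract: str) -> str:
    """Ask the model for a one-word INCLUDE/EXCLUDE decision on one record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=0,        # reduces, but does not eliminate, run-to-run variation
        messages=[
            {"role": "system",
             "content": ("You are screening records for a systematic review. "
                         "Answer with exactly one word: INCLUDE or EXCLUDE.")},
            {"role": "user",
             "content": f"Criteria:\n{CRITERIA}\n\nTitle: {title}\n\nAbstract: {abstract}"},
        ],
    )
    return response.choices[0].message.content.strip()
```

Even at temperature 0, decisions like these can vary across runs and across models, which is one reason the human oversight and evaluation discussed below matter so much.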

Additionally, we work with sensitive human data, aim to shape people’s behaviour and face real risks of perpetuating biases against already underrepresented populations. The stakes are not just about efficiency; they are also about research quality and public trust. In this blog post, we highlight some issues to consider when using generative AI and outline Behavioural Research UK’s (BR-UK) current initiatives to help researchers navigate this complex landscape.

In this post, we’re primarily focusing on generative AI tools: the latest generation of large language models (LLMs) such as ChatGPT, Claude, DeepSeek and Gemini, as well as specialised generative AI-based research tools such as Elicit, Perplexity, and Consensus.


Current evidence and guidance

Let’s return to the systematic review scenario. A recent UK government study compared an AI-assisted rapid evidence review with a human-only one. The AI-assisted approach completed the analysis and synthesis in 56 per cent less time, with final outputs of similar quality. But here’s the challenge: the same study found that the initial AI draft was judged to require more revisions than the human version, and it included occasional factually incorrect statements (hallucinations).

Other evaluations of using AI tools to automate tasks within evidence synthesis tell a similar story, with benefits (such as finding papers not identified through traditional searches) and challenges (such as inconsistent findings over repeated prompts). More broadly, evaluations of integrating AI into other aspects of the research process show that researchers must weigh both the potential benefits and the pitfalls.

So, what should you do? Never use AI? Use it for everything? At BR-UK, we advocate that the behavioural research community must decide for itself when, where and how AI can best improve research practice. To do this, researchers must understand how the AI model or tool they are using works, so that they can identify its limitations and develop strategies to overcome them. They must also robustly evaluate each use case’s impact on the research process, building expertise in when and how to use AI effectively.
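
One concrete way to start building that evidence is to benchmark the AI against human judgements on a sample and to check its consistency over repeated runs. Below is a minimal sketch, assuming you already have human “gold standard” decisions for a sample of records and a screening function like the one sketched earlier; all names here are illustrative, not a prescribed method.

```python
# Sketch of a small evaluation harness for an AI screening use case.
# `records` is a list of (title, abstract) pairs, `human_labels` the matching
# human decisions ("INCLUDE"/"EXCLUDE"), and `screen` a function such as
# screen_abstract above. All names are illustrative.

def evaluate_screening(records, human_labels, screen, n_runs=3):
    """Compare AI decisions with human decisions and check run-to-run consistency."""
    runs = [[screen(title, abstract) for title, abstract in records]
            for _ in range(n_runs)]
    ai_labels = runs[0]  # score the first run against the human decisions

    # Sensitivity: of the records humans included, how many did the AI keep?
    # Missing relevant papers is usually the costliest error in screening.
    tp = sum(a == h == "INCLUDE" for a, h in zip(ai_labels, human_labels))
    fn = sum(a == "EXCLUDE" and h == "INCLUDE"
             for a, h in zip(ai_labels, human_labels))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")

    # Consistency: share of records given the same label in every run,
    # a simple probe for the inconsistency over repeated prompts noted above.
    consistency = sum(len(set(labels)) == 1 for labels in zip(*runs)) / len(records)

    return {"sensitivity": sensitivity, "consistency": consistency}
```

Numbers like these, reported per use case, are far more informative than vendor claims, and they make it possible to judge whether a tool is fit for a given stage of a given review.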

Researchers should not have to do this in isolation; guidance for different AI use cases needs to be developed and shared. For example, guidance like RAISE should be one of the first ports of call when integrating AI into evidence synthesis. Developed by evidence synthesis experts, RAISE emphasises that human oversight remains vital: researchers still require in-depth expertise to validate outputs, refine strategies and make informed judgements throughout.

The problem is that such guidelines for other stages of research are lacking. We need more guidance on when, where and how to use AI responsibly across the research process, beyond high-level principles. For instance, the UK Research Integrity Office advises researchers to ask the following questions before implementing AI:

  1. What are the tangible benefits?
  2. What is the potential impact and are there ethical concerns?
  3. Is AI the only way to achieve the desired outcome?  

These questions are useful high-level guides, but do not provide detailed guidance for specific use cases in research. As with all research methods, we need thoughtful, evidence-based approaches, not hype-driven adoption.


Practical resources for researchers

At BR-UK, we are supporting behavioural researchers in using AI effectively and responsibly. We have curated practical resources, including webinars, a living repository of tools and resources, and a position statement in preparation, while learning together as a community.


Webinars

We have run three webinars covering the basics: what AI is, how it’s being used in analytics and what responsible use looks like. Recordings are available for anyone getting started (Using Artificial Intelligence to Improve Behavioural Research, Using Analytic AI to Improve Behavioural Research and Responsible Use of AI in Behavioural Research).


Living resource repository

Our resource repository features three core sections:

Section 1: Living Guide of AI in the Behavioural Research Process – tools and examples throughout the research cycle, from literature searches to reporting, with real-world use cases.

Section 2: Ethics, Sustainability, Responsible AI Use – critical issues including disclosure, privacy, biases, sustainability, and key regulations, with links to review papers and guidelines.

Section 3: General Learning Resources – recommended open-source resources for beginners and experts, including courses, videos, and webinars.

This is a living resource, shaped by the community. Find out how to get involved.


AI help desk

We have run three bi-monthly advice sessions: informal spaces where AI users who have watched our webinar series can ask questions and get practical help from experienced researchers.


BR-UK’s position statement on AI in behavioural research

We’re developing a formal statement with recommendations for effective and responsible AI use. A steering group of academics and research users shaped the draft, which has been refined with feedback from BR-UK colleagues. Soon, we’ll consult the wider behavioural research community for further feedback. It’s a living document that will evolve with the field.

Here is a preview of some key takeaways from the statement:

  • Human-in-the-loop approach: generative AI as a tool, not a replacement
  • Some core principles: transparency, accountability, risk mitigation
  • Bottom line: researchers remain fully responsible for outputs

We’re all navigating this rapidly evolving landscape. Rather than avoiding AI or embracing it uncritically, we would like behavioural researchers to engage with the technology and with the evidence on its effective and responsible use.

Back to that mountain of papers in your literature review: AI might help you climb it faster, but you’ll still need your expertise to validate the route and spot the hazards.

Join BR-UK in learning what responsible AI use looks like in practice. Explore our resources and, when it goes live, read our AI statement. Your questions and feedback shape how our field moves forward.


About BR-UK

Behavioural Research UK (BR-UK) is a research consortium funded by UK Research and Innovation (UKRI) via the Economic and Social Research Council (ESRC). The consortium serves as a leadership hub within a wider ESRC programme to build national capability for behavioural research. BR-UK has been awarded five years of funding, from November 2023, to conduct interdisciplinary behavioural research that helps address societal challenges. To find out more, visit our website or follow us on LinkedIn.