Trials (Nov 2021)

Use of external evidence for design and Bayesian analysis of clinical trials: a qualitative study of trialists’ views

  • Gemma L. Clayton,
  • Daisy Elliott,
  • Julian P. T. Higgins,
  • Hayley E. Jones

DOI: https://doi.org/10.1186/s13063-021-05759-8
Journal volume & issue: Vol. 22, no. 1, pp. 1–9

Abstract

Background: Evidence from previous studies is often used relatively informally in the design of clinical trials: for example, a systematic review may be used to indicate whether a gap in the current evidence base justifies a new trial. External evidence can be used more formally in both trial design and analysis by explicitly incorporating a synthesis of it within a Bayesian framework. However, it is unclear how common this is in practice or the extent to which it is considered controversial. In this qualitative study, we explored the attitudes and experiences of trialists regarding the incorporation of synthesised external evidence through the Bayesian design or analysis of a trial.

Methods: Semi-structured interviews were conducted with 16 trialists: 13 statisticians and three clinicians. Participants were recruited across several universities and trials units in the United Kingdom using snowball and purposeful sampling. Data were analysed using thematic analysis and techniques of constant comparison.

Results: Trialists used existing evidence in many ways in trial design, for example, to justify a gap in the evidence base and to inform parameters in sample size calculations. However, no one in our sample reported using such evidence in a Bayesian framework. Participants tended to equate Bayesian analysis with the incorporation of prior information on the intervention effect and were less aware of the potential to incorporate data on other parameters. When introduced to the concepts, many trialists felt they could be making more use of existing data to inform the design and analysis of a trial in particular scenarios. For example, some felt existing data could be used more formally to inform background adverse event rates, rather than relying on clinical opinion as to whether there are potential safety concerns. However, several barriers to implementing these methods in practice were identified, including concerns about the relevance of external data, the acceptability of Bayesian methods, a lack of confidence in Bayesian methods and software, and practical issues such as difficulties accessing relevant data.

Conclusions: Although trialists recognised that more formal use of external evidence could be advantageous over current approaches in some areas, and useful as sensitivity analyses, there are still barriers to such use in practice.
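
As a purely illustrative sketch (not drawn from the paper, and using made-up numbers), the Python snippet below shows one common way external evidence can formally inform a background adverse event rate, as discussed in the Results: previous data are summarised as an informative Beta prior and updated with new trial data via the conjugate Beta-Binomial model.

    # Illustrative only: hypothetical external evidence of 12 adverse events in
    # 400 patients, summarised as a Beta(12, 388) prior on the event rate.
    from scipy import stats

    prior_events, prior_n = 12, 400
    prior = stats.beta(prior_events, prior_n - prior_events)

    # Hypothetical new trial data: 5 events among 150 control-arm patients.
    trial_events, trial_n = 5, 150

    # Conjugate Beta-Binomial update: add events to the first shape parameter
    # and non-events to the second.
    posterior = stats.beta(prior_events + trial_events,
                           (prior_n - prior_events) + (trial_n - trial_events))

    print("Prior mean event rate:     %.3f" % prior.mean())
    print("Posterior mean event rate: %.3f" % posterior.mean())
    print("95%% credible interval:     %.3f to %.3f" % posterior.interval(0.95))

Consistent with the barriers the participants describe, the relevance of such external data to the new trial is itself a key concern, so a prior of this kind would typically be examined in sensitivity analyses rather than adopted as the sole analysis.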

Keywords