PLOS Digital Health (May 2024)

Addressing 6 challenges in generative AI for digital health: A scoping review.

  • Tara Templin,
  • Monika W Perez,
  • Sean Sylvia,
  • Jeff Leek,
  • Nasa Sinnott-Armstrong

DOI
https://doi.org/10.1371/journal.pdig.0000503
Journal volume & issue
Vol. 3, no. 5
p. e0000503

Abstract


Generative artificial intelligence (AI) can exhibit biases, compromise data privacy, be misled by prompts crafted as adversarial attacks, and produce hallucinations. To realize the potential of generative AI across many applications in digital health, practitioners must understand these tools and their limitations. This scoping review pays particular attention to the challenges posed by generative AI technologies in medical settings and surveys potential solutions. Using PubMed, we identified 120 articles published by March 2024 that reference and evaluate generative AI in medicine, from which we synthesized themes and suggestions for future work. After first providing general background on generative AI, we focus on collecting and presenting 6 key challenges for digital health practitioners and specific measures that can be taken to mitigate them. Overall, bias, privacy, hallucination, and regulatory compliance were frequently considered, while other concerns around generative AI, such as overreliance on text models, adversarial misprompting, and jailbreaking, were not commonly evaluated in the current literature.