Considering patient safety in autonomous e-mental health systems – detecting risk situations and referring patients back to human care

BMC Medical Informatics and Decision Making. 2019;19(1):1-16 DOI 10.1186/s12911-019-0796-x

 

Journal Title: BMC Medical Informatics and Decision Making

ISSN: 1472-6947 (Online)

Publisher: BMC

LCC Subject Category: Medicine: Medicine (General): Computer applications to medicine. Medical informatics

Country of publisher: United Kingdom

Language of fulltext: English

Full-text formats available: PDF, HTML

 

AUTHORS

Myrthe L. Tielman (Department of Interactive Intelligence, Delft University of Technology)
Mark A. Neerincx (Department of Interactive Intelligence, Delft University of Technology)
Claudia Pagliari (University of Edinburgh)
Albert Rizzo (USC Institute of Creative Technologies)
Willem-Paul Brinkman (Department of Interactive Intelligence, Delft University of Technology)

EDITORIAL INFORMATION

Time From Submission to Publication: 23 weeks

 

ABSTRACT

Background: Digital health interventions can fill gaps in mental healthcare provision. However, autonomous e-mental health (AEMH) systems also present challenges for effective risk management. To balance autonomy and safety, AEMH systems need to detect risk situations and act on them appropriately. One option is to send automatic alerts to carers, but such ‘auto-referral’ could lead to missed cases or false alerts. Requiring users to actively self-refer offers an alternative, but this can also be risky, as it relies on their motivation to do so. This study set out with two objectives: firstly, to develop guidelines for risk detection and auto-referral systems; secondly, to understand how persuasive techniques, mediated by a virtual agent, can facilitate self-referral.

Methods: In a formative phase, interviews with experts, alongside a literature review, were used to develop a risk detection protocol. Two referral protocols were developed, one involving auto-referral and the other motivating users to self-refer. The latter was tested via crowd-sourcing (n = 160). Participants were asked to imagine they had sleeping problems, with the severity of the problem and the user’s stance on seeking help varying across scenarios. They then chatted with a virtual agent, who either directly facilitated referral, tried to persuade the user, or accepted that they did not want help. After the conversation, participants rated their intention to self-refer, their intention to chat with the agent again, and their feeling of being heard by the agent.

Results: Whether the virtual agent facilitated, persuaded, or accepted influenced all of these measures. Users who were initially negative or doubtful about self-referral could be persuaded. For users who were initially positive about seeking human care, persuasion did not affect their intentions, indicating that simply facilitating referral without persuasion was sufficient.

Conclusion: This paper presents a protocol that elucidates the steps and decisions involved in risk detection, which is relevant for all types of AEMH systems. In the case of self-referral, our study shows that a virtual agent can increase users’ intention to self-refer. Moreover, the strategy of the agent influenced the intentions of the user afterwards. This highlights the importance of a personalised approach to promoting the user’s access to appropriate care.