BMC Medical Informatics and Decision Making (May 2021)

Investigating ADR mechanisms with Explainable AI: a feasibility study with knowledge graph mining

  • Emmanuel Bresso,
  • Pierre Monnin,
  • Cédric Bousquet,
  • François-Elie Calvier,
  • Ndeye-Coumba Ndiaye,
  • Nadine Petitpain,
  • Malika Smaïl-Tabbone,
  • Adrien Coulet

DOI
https://doi.org/10.1186/s12911-021-01518-6
Journal volume & issue
Vol. 21, no. 1
pp. 1–14

Abstract


Background: Adverse drug reactions (ADRs) are statistically characterized within randomized clinical trials and postmarketing pharmacovigilance, but their molecular mechanisms remain unknown in most cases. This is true even for hepatic and skin toxicities, which are classically monitored during drug design. Beyond clinical trials, many elements of knowledge about drug ingredients are available in open-access knowledge graphs, such as their properties, interactions, or involvement in pathways. In addition, drug classifications that label drugs as causative or not for several ADRs have been established.

Methods: We propose in this paper to mine knowledge graphs to identify biomolecular features that may enable automatic reproduction of expert classifications distinguishing drugs that are causative or not for a given type of ADR. From an Explainable AI perspective, we explore simple classification techniques such as decision trees and classification rules because they provide human-readable models, which explain the classification itself and may also provide elements of explanation for the molecular mechanisms behind ADRs. In summary, (1) we mine a knowledge graph for features; (2) we train classifiers to distinguish, on the basis of the extracted features, drugs associated or not with two commonly monitored ADRs: drug-induced liver injuries (DILI) and severe cutaneous adverse reactions (SCAR); (3) we isolate features that are both efficient at reproducing the expert classifications and interpretable by experts (i.e., Gene Ontology terms, drug targets, or pathway names); and (4) we manually evaluate in a mini-study how they may be explanatory.

Results: The extracted features reproduce the classifications of drugs causative or not for DILI and SCAR with good fidelity (accuracy = 0.74 and 0.81, respectively). Experts fully agreed that 73% and 38% of the most discriminative features are possibly explanatory for DILI and SCAR, respectively, and partially agreed (2/3) for 90% and 77% of them.

Conclusion: Knowledge graphs provide sufficiently diverse features to enable simple and explainable models to distinguish between drugs that are causative or not for ADRs. In addition to explaining the classifications, the most discriminative features appear to be good candidates for further investigation of ADR mechanisms.
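The pipeline described in the Methods can be illustrated with a minimal, self-contained sketch. All drug names, labels, and feature identifiers below are synthetic and hypothetical; the sketch only shows the general idea of selecting a single human-readable classification rule (one of the simple model families mentioned above) from knowledge-graph-derived features by training-set accuracy:

```python
# Illustrative sketch with synthetic data: each drug is represented by a set
# of features mined from a knowledge graph (e.g., GO terms, targets, pathway
# identifiers), and a single classification rule of the form
# "feature present => causative" is selected by its accuracy on the data.
# Feature names and labels here are invented for the example.

drugs = {
    "drugA": ({"GO:0006805", "CYP3A4"}, 1),   # 1 = labeled causative
    "drugB": ({"GO:0006805", "ABCB1"}, 1),
    "drugC": ({"ABCB1"}, 0),                  # 0 = labeled not causative
    "drugD": ({"hsa00983"}, 0),
}

def best_rule(drugs):
    """Return the feature whose presence best predicts the positive class."""
    features = set().union(*(feats for feats, _ in drugs.values()))

    def accuracy(feat):
        # A drug is predicted causative iff it carries the feature.
        return sum((feat in feats) == bool(label)
                   for feats, label in drugs.values()) / len(drugs)

    return max(features, key=accuracy)

rule = best_rule(drugs)
print(rule)  # the single most discriminative feature on this toy data
```

A real system would of course use a proper learner (e.g., a decision-tree or rule-induction algorithm) with train/test splits, but the discriminative features it surfaces are of the same kind as the one selected here, which is what makes the models readable by experts.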
