Scientific Reports (Jan 2021)

Verifying explainability of a deep learning tissue classifier trained on RNA-seq data

  • Melvyn Yap,
  • Rebecca L. Johnston,
  • Helena Foley,
  • Samual MacDonald,
  • Olga Kondrashova,
  • Khoa A. Tran,
  • Katia Nones,
  • Lambros T. Koufariotis,
  • Cameron Bean,
  • John V. Pearson,
  • Maciej Trzaskowski,
  • Nicola Waddell

DOI
https://doi.org/10.1038/s41598-021-81773-9
Journal volume & issue
Vol. 11, no. 1
pp. 1–12

Abstract


For complex machine learning (ML) algorithms to gain widespread acceptance in decision making, we must be able to identify the features driving the predictions. Explainability models allow transparency of ML algorithms; however, their reliability within high-dimensional data is unclear. To test the reliability of the explainability model SHapley Additive exPlanations (SHAP), we developed a convolutional neural network to predict tissue classification from Genotype-Tissue Expression (GTEx) RNA-seq data representing 16,651 samples from 47 tissues. Our classifier achieved an average F1 score of 96.1% on held-out GTEx samples. Using SHAP values, we identified the 2423 most discriminatory genes, of which 98.6% were also identified by differential expression analysis across all tissues. The SHAP genes reflected expected biological processes involved in tissue differentiation and function. Moreover, SHAP genes clustered tissue types with superior performance when compared to all genes, genes detected by differential expression analysis, or random genes. We demonstrate the utility and reliability of SHAP to explain a deep learning model and highlight the strengths of applying ML to transcriptome data.
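
To make the workflow concrete, the sketch below shows one way SHAP can be applied to a Keras tissue classifier trained on expression data. It is an illustrative approximation only: the network architecture, data shapes, variable names, and the use of shap.DeepExplainer are assumptions, not the authors' exact configuration.

    # Minimal sketch: explaining a CNN tissue classifier with SHAP.
    # Architecture, shapes, and names are hypothetical stand-ins.
    import numpy as np
    import shap
    from tensorflow import keras

    n_genes, n_tissues = 18000, 47  # assumed dimensions

    # A small 1D-CNN stand-in for the paper's classifier.
    model = keras.Sequential([
        keras.layers.Reshape((n_genes, 1), input_shape=(n_genes,)),
        keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(n_tissues, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # X_train, X_test: (samples, genes) expression matrices (synthetic here).
    X_train = np.random.rand(100, n_genes).astype("float32")
    X_test = np.random.rand(10, n_genes).astype("float32")

    # DeepExplainer attributes predictions relative to a background set.
    background = X_train[np.random.choice(len(X_train), 50, replace=False)]
    explainer = shap.DeepExplainer(model, background)
    shap_values = explainer.shap_values(X_test)

    # Rank genes by mean |SHAP| across classes and samples; older shap
    # versions return a list of (samples, genes) arrays, one per class.
    sv = np.asarray(shap_values)
    importance = np.abs(sv).mean(axis=(0, 1))
    top_genes = np.argsort(importance)[::-1]  # most discriminatory first

In the study, a ranking of this kind was used to select the 2423 most discriminatory genes, which were then cross-checked against differential expression analysis and used for tissue clustering.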