Nature Communications (Aug 2024)

Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization

  • Oded Rotem,
  • Tamar Schwartz,
  • Ron Maor,
  • Yishay Tauber,
  • Maya Tsarfati Shapiro,
  • Marcos Meseguer,
  • Daniella Gilboa,
  • Daniel S. Seidman,
  • Assaf Zaritsky

DOI
https://doi.org/10.1038/s41467-024-51136-9
Journal volume & issue
Vol. 15, no. 1
pp. 1–19

Abstract


The success of deep learning in identifying complex patterns that exceed human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a “black box” lacking human-meaningful explanations for the model’s decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables “human-in-the-loop” interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret the classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decisions for specific embryo instances. We show that DISCOVER provides human-interpretable understanding of “black box” classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
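The core idea described in the abstract — traversing one disentangled latent feature at a time to generate exaggerated counterfactual images and observing how the classifier's score shifts — can be illustrated with a minimal sketch. This is not the authors' implementation; the `encoder`, `decoder`, and `classifier` modules and their interfaces are hypothetical placeholders assumed for illustration.

```python
import torch

def latent_counterfactuals(image, encoder, decoder, classifier,
                           dim, amplitudes=(-3.0, -1.0, 0.0, 1.0, 3.0)):
    """Traverse a single disentangled latent dimension and record how the
    classifier's score changes -- the 'exaggerated counterfactual' idea.

    Assumptions (not from the paper): `encoder` maps a (C, H, W) image to a
    (1, latent_dim) code, `decoder` inverts it, and `classifier` returns a
    scalar quality score per image.
    """
    with torch.no_grad():
        z = encoder(image.unsqueeze(0))           # (1, latent_dim) latent code
        results = []
        for a in amplitudes:
            z_cf = z.clone()
            z_cf[0, dim] = z[0, dim] + a          # perturb only one latent feature
            x_cf = decoder(z_cf)                  # counterfactual image
            score = classifier(x_cf).item()       # shifted classification score
            results.append((a, x_cf.squeeze(0), score))
    return results
```

Under these assumptions, ranking latent dimensions by how strongly their traversal shifts the classifier's score would single out the classification-driving visual properties, and the resulting image series is what a human expert inspects in the "human-in-the-loop" interpretation step.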