Information Technology and Management Science (Nov 2023)

Towards Explainability of the Latent Space by Disentangled Representation Learning

  • Ivars Namatēvs,
  • Artūrs Ņikuļins,
  • Anda Slaidiņa,
  • Laura Neimane,
  • Oskars Radziņš,
  • Kaspars Sudars

DOI
https://doi.org/10.7250/itms-2023-0006
Journal volume & issue
Vol. 26, no. 1
pp. 41–48

Abstract


Deep neural networks are widely used in computer vision for image classification, segmentation, and generation. They are also often criticised as “black boxes” because their decision-making process is not readily interpretable by humans. Learning explainable representations that explicitly disentangle the underlying mechanisms structuring observational data nevertheless remains a challenge. To further explore the latent space and achieve generic processing, we propose a pipeline for discovering explainable directions in the latent space of generative models. Since the latent space contains semantically meaningful directions that can be explained, the pipeline fully resolves the representation of the latent space. It consists of a Dirichlet encoder, conditional deterministic diffusion, a group-swap module, and a latent traversal module. We believe this study offers insight that can advance research on explaining the disentanglement of neural networks.
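For intuition, latent traversal (the final module named above) usually amounts to decoding points sampled along a single latent direction and inspecting which image factor changes. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; `G`, `z0`, and `direction` are hypothetical placeholders for a pretrained decoder, a starting latent code, and a candidate explainable direction.

```python
import torch

def latent_traversal(G, z0, direction, steps=7, scale=3.0):
    """Decode images while moving z0 along one latent direction.

    G         -- hypothetical pretrained decoder mapping latent z -> image
    z0        -- starting latent code, shape (1, latent_dim)
    direction -- candidate explainable direction, shape (latent_dim,)

    If `direction` is a disentangled, semantically meaningful axis,
    the decoded images should vary in exactly one underlying factor.
    """
    direction = direction / direction.norm()       # normalise to unit length
    alphas = torch.linspace(-scale, scale, steps)  # traversal offsets
    with torch.no_grad():                          # inference only
        images = [G(z0 + a * direction) for a in alphas]
    return torch.stack(images)
```

Plotting the returned images side by side is the standard qualitative check: a disentangled direction changes one attribute (e.g., object pose) while leaving the rest of the image fixed.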

Keywords