Entropy (Jun 2024)

Diffusion-Based Causal Representation Learning

  • Amir Mohammad Karimi Mamaghan
  • Andrea Dittadi
  • Stefan Bauer
  • Karl Henrik Johansson
  • Francesco Quinzan

DOI: https://doi.org/10.3390/e26070556
Journal volume & issue: Vol. 26, no. 7, p. 556

Abstract

Causal reasoning can be considered a cornerstone of intelligent systems. Having access to an underlying causal graph comes with the promise of cause–effect estimation and the identification of efficient and safe interventions. However, learning causal representations remains a major challenge due to the complexity of many real-world systems. Previous works on causal representation learning have mostly focused on Variational Auto-Encoders (VAEs). These methods provide only a point-estimate representation, and they handle high-dimensional data less effectively. To overcome these problems, we propose a Diffusion-based Causal Representation Learning (DCRL) framework, which uses diffusion-based representations for causal discovery in the latent space. DCRL provides access to both single-dimensional and infinite-dimensional latent codes, which encode different levels of information. In a first proof of principle, we investigate the use of DCRL for causal representation learning in a weakly supervised setting. We further demonstrate experimentally that this approach performs comparably well in identifying the latent causal structure and causal variables.
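To make the pipeline described in the abstract concrete, the sketch below illustrates the general flow: encode observations into latent codes, then run causal discovery on those codes using weakly supervised pre-/post-intervention data. Everything here is an assumption for illustration only: the `toy_diffusion_encoder` is a stand-in for a real diffusion-based representation, the three-variable linear SCM is a made-up ground truth, and the regression-based skeleton step is a generic discovery heuristic, not the authors' DCRL implementation.

```python
# Illustrative-only sketch of a DCRL-style pipeline (not the paper's code):
# 1) encode observations into latent codes, 2) learn a causal skeleton over the
# codes, 3) use pre-/post-intervention data (weak supervision) to orient edges.
import numpy as np

rng = np.random.default_rng(0)

def toy_diffusion_encoder(z, noise_level=0.1):
    """Stand-in for a diffusion-based representation: here just a noisy
    identity map. A real encoder would use the denoising network's features."""
    return z + noise_level * rng.normal(size=z.shape)

def sample_latents(n, intervene_on=None):
    """Toy ground-truth linear SCM over 3 causal variables: z1 -> z2 -> z3."""
    z = np.zeros((n, 3))
    z[:, 0] = rng.normal(size=n)
    z[:, 1] = 0.8 * z[:, 0] + 0.3 * rng.normal(size=n)
    z[:, 2] = 0.8 * z[:, 1] + 0.3 * rng.normal(size=n)
    if intervene_on is not None:
        # Hard intervention: overwrite one variable, then re-propagate children.
        z[:, intervene_on] = rng.normal(size=n)
        if intervene_on <= 0:
            z[:, 1] = 0.8 * z[:, 0] + 0.3 * rng.normal(size=n)
        if intervene_on <= 1:
            z[:, 2] = 0.8 * z[:, 1] + 0.3 * rng.normal(size=n)
    return z

# Observational codes and codes after an intervention on z2 (sampled
# independently here, loosely mimicking the weakly supervised setting rather
# than forming true counterfactual pairs).
n = 5000
h_obs = toy_diffusion_encoder(sample_latents(n))
h_int = toy_diffusion_encoder(sample_latents(n, intervene_on=1))

def estimate_skeleton(h, threshold=0.2):
    """Generic stand-in discovery step: regress each latent code on the others
    and threshold the coefficients, which recovers an undirected skeleton in
    this linear-Gaussian toy example."""
    d = h.shape[1]
    A = np.zeros((d, d), dtype=int)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        coef, *_ = np.linalg.lstsq(h[:, others], h[:, j], rcond=None)
        for k, c in zip(others, coef):
            A[k, j] = int(abs(c) > threshold)
    return A

print("Estimated skeleton:\n", estimate_skeleton(h_obs))

# Weak supervision helps orient edges: after an intervention on a variable,
# its correlation with its parents vanishes while its children stay correlated.
corr_obs = np.corrcoef(h_obs, rowvar=False)
corr_int = np.corrcoef(h_int, rowvar=False)
print("corr(z1, z2) observational vs. intervened:",
      corr_obs[0, 1].round(2), corr_int[0, 1].round(2))
```

Running the sketch, the skeleton step recovers the chain structure (edges between z1–z2 and z2–z3 only), and the drop in corr(z1, z2) under the intervention indicates which variable was intervened on, which is the kind of signal a weakly supervised method can exploit to orient edges.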

Keywords