Machine Learning: Science and Technology (Jan 2023)
Learning latent functions for causal discovery
Abstract
Causal discovery from observational data offers unique opportunities in many scientific disciplines: reconstructing causal drivers, testing causal hypotheses, and comparing and evaluating models for optimizing targeted interventions. Recent causal discovery methods have focused on estimating the latent space of the data to circumvent the lack of causal sufficiency or restrictive additivity constraints. However, estimating the latent space significantly increases model complexity, compromising causal identifiability and making it hard to compare models that correspond to different causal hypotheses. We propose a non-parametric, kernel-based latent-space modelling approach and address the difficulty of comparing causal directions by measuring and controlling for the degree to which causal assumptions are fulfilled. We introduce a latent noise causal inference framework that estimates the latent factors associated with the hypothesized causal direction by optimizing a loss function with kernel independence criteria. We extend the framework to time series using an additional time-dependent kernel regularizer. We discuss the additivity assumption and model complexity, and give empirical evidence of performance on a wide range of synthetic and real causal discovery problems.
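The kernel independence criterion referenced in the abstract can be illustrated with a minimal sketch. The snippet below implements the standard (biased) Hilbert-Schmidt Independence Criterion (HSIC) estimator with Gaussian kernels; this is a common choice for such criteria, though the paper's exact loss, kernels, and optimization details are not specified here, so treat this as an assumed illustrative formulation rather than the authors' implementation.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel on 1-D samples.
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased HSIC estimate: trace(K H L H) / (n - 1)^2,
    # where H centres the Gram matrices. Values near zero
    # indicate (approximate) statistical independence.
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K = rbf_gram(x, sigma)
    L = rbf_gram(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a latent-noise framework of the kind described, a term like `hsic(cause, estimated_noise)` could serve as the independence penalty in the loss: under the hypothesized causal direction, the estimated latent noise should be independent of the cause, so the correct direction yields a smaller penalty.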
Keywords