Learning Temporally Causal Latent Processes from General Temporal Data

Our goal is to recover time-delayed latent causal variables and identify their relations from temporally measured observations. Estimating causal graphs over latent variables from observations is particularly challenging because, in the most general case, the latent variables are not uniquely recoverable. In this work, we consider both a nonparametric, nonstationary setting and a parametric setting for the latent processes and propose two provable conditions under which temporally causal latent processes can be identified. We propose LEAP, a theoretically grounded architecture that extends Variational Autoencoders (VAEs) by enforcing our conditions through proper constraints in the causal process prior. We evaluate LEAP on a number of datasets, including video and motion capture data. Experiments demonstrate that temporally causal latent processes are reliably identified from observed variables under different dependency structures, and that our approach considerably outperforms existing methods that do not leverage history or nonstationarity information. This is one of the first works to successfully recover time-delayed latent processes from nonlinear mixtures without using sparsity or minimality assumptions.
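To make the problem setting concrete, the following is a minimal toy simulation of the kind of data-generating process the abstract describes: latent variables evolve under a time-delayed (here, lag-1) nonlinear causal transition, and only a nonlinear mixture of the latents is observed. All function names, dimensions, and coefficients below are illustrative assumptions, not LEAP's actual generative model or implementation.

```python
import numpy as np

def simulate_latent_process(T=200, n_latents=3, n_observed=5, seed=0):
    """Toy lag-1 time-delayed latent causal process (illustrative only):
    z_t depends nonlinearly on z_{t-1}; the observed x_t is a nonlinear
    mixture of z_t. The task the paper studies is to recover z and the
    causal relations among its components given only x."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.5, size=(n_latents, n_latents))   # latent transition (causal) weights
    W = rng.normal(size=(n_observed, n_latents))             # mixing weights, latents -> observations
    z = np.zeros((T, n_latents))
    z[0] = rng.normal(size=n_latents)
    for t in range(1, T):
        # nonlinear time-delayed transition plus small process noise
        z[t] = np.tanh(A @ z[t - 1]) + 0.1 * rng.normal(size=n_latents)
    # nonlinear mixing from latents to observed variables
    x = np.tanh(z @ W.T)
    return z, x

z, x = simulate_latent_process()
print(z.shape, x.shape)  # (200, 3) (200, 5)
```

Without constraints such as those the paper proposes (nonstationarity of the noise or a parametric transition class), many different `z` sequences could explain the same `x`, which is why identifiability conditions are needed.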

