1 code implementation • 17 Sep 2022 • SiQi Liu, Andreas Lehrmann
Deep learning has shown impressive results in a variety of time series forecasting tasks, where modeling the conditional distribution of the future given the past is the core problem.
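Concretely, the quantity being modeled is the predictive distribution of future values given the observed history; one standard autoregressive factorization of it (a general identity, not specific to this paper) is

```latex
p(x_{t+1:t+H} \mid x_{1:t}) \;=\; \prod_{h=1}^{H} p(x_{t+h} \mid x_{1:t+h-1}).
```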
1 code implementation • 23 Feb 2022 • Chandramouli Shama Sastry, Andreas Lehrmann, Marcus Brubaker, Alexander Radovic
Instead, we build upon the diffeomorphic properties of normalizing flows and leverage the divergence theorem to estimate the CDF over a closed region in target space in terms of the flux across its boundary, as induced by the normalizing flow.
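The underlying identity is worth spelling out: if a vector field F satisfies div F = p, the divergence theorem converts the region integral of the density into a flux integral over the region's boundary. Below is a toy sketch of that idea for a 2-D Gaussian, where such an F is available in closed form; it illustrates the boundary-flux trick only, not the paper's flow-based construction, and all names are illustrative.

```python
# Toy sketch: estimate P(X in R) for a 2-D density as a boundary flux.
# If div F = p, then  P(X in R) = \int_R p dx = \oint_{dR} F . n dS.
import numpy as np
from scipy.stats import norm

# Standard 2-D Gaussian, p(x, y) = phi(x) * phi(y). For this product density,
# F(x, y) = (Phi(x) * phi(y), 0) satisfies div F = p (Phi = Gaussian CDF).
def F(x, y):
    return np.stack([norm.cdf(x) * norm.pdf(y), np.zeros_like(x)], axis=-1)

def flux_over_rectangle(a, b, c, d, n=10_000):
    """Monte Carlo estimate of the outward flux of F across the boundary
    of the rectangle [a, b] x [c, d]."""
    rng = np.random.default_rng(0)
    y = rng.uniform(c, d, n)
    # Right edge (x = b), outward normal (+1, 0); left edge (x = a), (-1, 0).
    flux = (d - c) * np.mean(F(np.full(n, b), y)[:, 0])
    flux -= (d - c) * np.mean(F(np.full(n, a), y)[:, 0])
    # Top and bottom edges contribute nothing: F's second component is zero.
    return flux

est = flux_over_rectangle(-1.0, 1.0, -1.0, 1.0)
exact = (norm.cdf(1) - norm.cdf(-1)) ** 2     # closed form for this rectangle
print(f"flux estimate: {est:.4f}   exact probability: {exact:.4f}")
```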
no code implementations • ICML Workshop INNF 2021 • Alexander Radovic, Jiawei He, Janahan Ramanan, Marcus A. Brubaker, Andreas Lehrmann
In this work we describe OMEN, a neural-ODE-based normalizing flow for predicting marginal distributions at flexible evaluation horizons, and apply it to agent position forecasting.
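As a minimal sketch of the general mechanism (a generic continuous normalizing flow, not OMEN itself; the network and Euler integrator below are illustrative stand-ins), integrating time-conditioned dynamics to an arbitrary horizon yields samples together with the log-density correction for the marginal at that horizon, via the instantaneous change of variables d log p(z(t))/dt = -tr(df/dz):

```python
# Generic continuous normalizing flow sketch (illustrative, not OMEN).
import torch

# Dynamics network: input is (state, time), output is dz/dt for a 2-D state.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))

def dynamics(z, t):
    tcol = torch.full((z.shape[0], 1), t)      # condition on time, so any
    return net(torch.cat([z, tcol], dim=1))    # horizon can be queried

def flow_to(z0, t_end, steps=50):
    """Euler-integrate samples and their log-density change up to t_end."""
    z, dlogp = z0.clone(), torch.zeros(z0.shape[0])
    dt = t_end / steps
    for i in range(steps):
        z = z.requires_grad_(True)
        dz = dynamics(z, i * dt)
        # Exact Jacobian trace for the 2-D state (Hutchinson in general).
        tr = sum(torch.autograd.grad(dz[:, d].sum(), z, retain_graph=True)[0][:, d]
                 for d in range(2))
        z = (z + dt * dz).detach()
        dlogp = dlogp - dt * tr.detach()       # d log p / dt = -tr(df/dz)
    return z, dlogp                            # log p_t(z) = log p_0(z0) + dlogp

z0 = torch.randn(128, 2)                       # base samples
z_half, dlogp_half = flow_to(z0, t_end=0.5)    # marginal at horizon 0.5
z_one, dlogp_one = flow_to(z0, t_end=1.0)      # marginal at horizon 1.0
```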
1 code implementation • NeurIPS 2020 • Ruizhi Deng, Bo Chang, Marcus A. Brubaker, Greg Mori, Andreas Lehrmann
Normalizing flows transform a simple base distribution into a complex target distribution and have proved to be powerful models for data generation and density estimation.
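For reference, the change-of-variables identity underlying these models: for an invertible map f with base density p_Z,

```latex
\log p_X(x) \;=\; \log p_Z\!\left(f^{-1}(x)\right) \;+\; \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|.
```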
no code implementations • ECCV 2020 • Megha Nawhal, Mengyao Zhai, Andreas Lehrmann, Leonid Sigal, Greg Mori
Human activity videos involve rich, varied interactions between people and objects.
1 code implementation • 18 Jun 2019 • Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, Yaser Sheikh
Modeling and rendering of dynamic scenes is challenging, as natural scenes often contain complex phenomena such as thin structures, evolving topology, translucency, scattering, occlusion, and biological motion.
no code implementations • ICLR 2019 • Jiawei He, Yu Gong, Joseph Marino, Greg Mori, Andreas Lehrmann
In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure.
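A hedged sketch of what such a structured prior can look like (the gating scheme and all names below are illustrative choices, not the paper's model): latent nodes are ordered, and each conditional is gated by a learned soft dependency matrix, so the prior forms a Bayesian network whose structure is learned jointly with the rest of the model.

```python
# Illustrative structured prior for a VAE latent space (not the paper's model).
import torch

K, D = 4, 8                                    # latent nodes, dimensions per node

# Learned soft dependency strengths; the fixed ordering keeps the graph acyclic.
dep_logits = torch.nn.Parameter(torch.zeros(K, K))
cond_nets = torch.nn.ModuleList(
    [torch.nn.Linear(K * D, 2 * D) for _ in range(K)]   # per-node conditionals
)

def sample_prior(batch):
    mask = torch.tril(torch.sigmoid(dep_logits), diagonal=-1)   # parents precede node
    zs = [torch.zeros(batch, D) for _ in range(K)]
    for k in range(K):
        stacked = torch.stack(zs, dim=1)                        # (batch, K, D)
        # Gate each potential parent by its learned dependency strength.
        parents = (stacked * mask[k].view(1, K, 1)).reshape(batch, K * D)
        mu, log_sigma = cond_nets[k](parents).chunk(2, dim=-1)
        zs[k] = mu + log_sigma.exp() * torch.randn(batch, D)    # Gaussian conditional
    return torch.stack(zs, dim=1)

z = sample_prior(batch=16)                     # one draw from the structured prior
```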
1 code implementation • ECCV 2018 • Jiawei He, Andreas Lehrmann, Joseph Marino, Greg Mori, Leonid Sigal
Videos express highly structured spatio-temporal patterns of visual data.
no code implementations • NeurIPS 2017 • Andreas Lehrmann, Leonid Sigal
End-to-end training methods for models with structured graphical dependencies on top of neural predictions have recently emerged as a principled way of combining deep learning with probabilistic graphical models.
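A canonical instance of this paradigm (a generic linear-chain CRF over neural emission scores, simplified here for illustration, not the paper's model): structured pairwise potentials sit on top of per-step neural predictions, and the exact forward algorithm makes the log-likelihood differentiable end to end.

```python
# Linear-chain CRF negative log-likelihood over neural emission scores.
import torch

def crf_nll(emissions, transitions, tags):
    """emissions: (T, C) per-step neural scores; transitions: (C, C) pairwise
    potentials; tags: (T,) gold label sequence."""
    T, C = emissions.shape
    # Score of the gold path: unary terms plus consecutive transition terms.
    gold = emissions[torch.arange(T), tags].sum()
    gold = gold + transitions[tags[:-1], tags[1:]].sum()
    # Log-partition function via the forward algorithm.
    alpha = emissions[0]
    for t in range(1, T):
        alpha = emissions[t] + torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0)
    return torch.logsumexp(alpha, dim=0) - gold

emissions = torch.randn(5, 3, requires_grad=True)   # stand-in for network outputs
transitions = torch.zeros(3, 3, requires_grad=True)
loss = crf_nll(emissions, transitions, torch.tensor([0, 1, 1, 2, 0]))
loss.backward()                                     # gradients flow end to end
```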
no code implementations • NeurIPS 2017 • Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, Leonid Sigal
From this memory, the model retrieves the previous attention that is most relevant to the current question, taking recency into account, in order to resolve potentially ambiguous references (sketched below).
Ranked #13 on Visual Dialog on VisDial v0.9 val (R@1 metric)
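A hedged sketch of the retrieval step described above (a simplification with illustrative names, not the paper's exact module): stored attention maps are scored against the current question encoding, an additive recency bias favors recent turns, and the result is a soft combination of past attentions.

```python
# Illustrative attention-memory retrieval with a recency bias.
import torch

def retrieve_attention(question, keys, attention_maps, recency_weight=0.1):
    """question: (d,) current question encoding; keys: (n, d) encodings of past
    questions; attention_maps: (n, h, w) previously computed spatial attentions."""
    n = keys.shape[0]
    relevance = keys @ question                                     # dot-product relevance
    recency = recency_weight * torch.arange(n, dtype=torch.float)   # later turns score higher
    weights = torch.softmax(relevance + recency, dim=0)
    # Soft retrieval: convex combination of the stored attention maps.
    return torch.einsum('n,nhw->hw', weights, attention_maps)
```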