Search Results for author: Ricky T. Q. Chen

Found 25 papers, 15 papers with code

Latent Discretization for Continuous-time Sequence Compression

no code implementations 28 Dec 2022 Ricky T. Q. Chen, Matthew Le, Matthew Muckley, Maximilian Nickel, Karen Ullrich

We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that it can automatically achieve reductions in bit rate by learning how to discretize.

Flow Matching for Generative Modeling

no code implementations 6 Oct 2022 Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le

We introduce a new paradigm for generative modeling built on Continuous Normalizing Flows (CNFs), allowing us to train CNFs at unprecedented scale.
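
For intuition, here is a minimal sketch of the flow matching idea in the simple case of straight-line (conditional optimal-transport) probability paths; the network, dimensions, and data below are illustrative assumptions, not the paper's exact setup:

```python
import torch

# Conditional flow matching with straight-line paths: sample x0 ~ prior,
# x1 ~ data, t ~ U[0,1]; along the interpolant x_t = (1 - t) x0 + t x1,
# the target velocity is simply x1 - x0. Training regresses onto it
# directly; no ODE solve is needed during training.
v_theta = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2))

x1 = 1.0 + 0.1 * torch.randn(128, 2)   # stand-in "data" batch
x0 = torch.randn(128, 2)               # prior samples
t = torch.rand(128, 1)
xt = (1 - t) * x0 + t * x1
target = x1 - x0
pred = v_theta(torch.cat([xt, t], dim=-1))
loss = ((pred - target) ** 2).mean()
loss.backward()
```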

Neural Conservation Laws: A Divergence-Free Perspective

1 code implementation 4 Oct 2022 Jack Richter-Powell, Yaron Lipman, Ricky T. Q. Chen

We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law.
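
As a sanity check on the underlying identity, the sketch below (with a toy, hypothetical antisymmetric matrix field A) verifies that the row divergence of an antisymmetric matrix field is divergence-free by construction, which is one way to satisfy a conservation law exactly:

```python
import torch
from torch.func import jacrev

# If A(x) is antisymmetric, then v_i(x) = sum_j dA_ij/dx_j has zero
# divergence: div v = sum_{i,j} d^2 A_ij / (dx_i dx_j) vanishes because
# A_ij = -A_ji while mixed partial derivatives are symmetric.
def A(x):                                   # toy antisymmetric matrix field
    W = torch.stack([torch.sin(x), torch.cos(x), x ** 2])
    return W - W.T

def v(x):
    J = jacrev(A)(x)                        # J[i, j, k] = dA_ij / dx_k
    return torch.einsum('ijj->i', J)        # row divergence of A

x = torch.randn(3)
div_v = torch.einsum('ii->', jacrev(v)(x))  # trace of dv/dx
print(div_v)                                # ~ 0 up to floating point error
```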

Latent State Marginalization as a Low-cost Approach for Improving Exploration

no code implementations 3 Oct 2022 Dinghuai Zhang, Aaron Courville, Yoshua Bengio, Qinqing Zheng, Amy Zhang, Ricky T. Q. Chen

While the maximum entropy (MaxEnt) reinforcement learning (RL) framework -- often touted for its exploration and robustness capabilities -- is usually motivated from a probabilistic perspective, the use of deep probabilistic models has not gained much traction in practice due to their inherent complexity.

Continuous Control · SMAC+

Unifying Generative Models with GFlowNets

no code implementations 6 Sep 2022 Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, Yoshua Bengio

This provides a means of unifying training and inference algorithms, and a route to constructing an agglomeration of generative models.

Theseus: A Library for Differentiable Nonlinear Optimization

1 code implementation 19 Jul 2022 Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam

We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision.
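
To make "end-to-end" concrete, below is a minimal differentiable Gauss-Newton loop in plain PyTorch. This is not the Theseus API; it is only a sketch (with a small damping term, Levenberg-Marquardt style, and a hypothetical residual model) of how gradients can flow through a nonlinear least-squares solve to outer learnable parameters:

```python
import torch

def gauss_newton(residual_fn, theta, iters=10, damping=1e-6):
    for _ in range(iters):
        r = residual_fn(theta)                       # residuals, shape (m,)
        J = torch.autograd.functional.jacobian(
            residual_fn, theta, create_graph=True)   # Jacobian, shape (m, n)
        H = J.T @ J + damping * torch.eye(J.shape[1])
        theta = theta + torch.linalg.solve(H, -J.T @ r)
    return theta

t = torch.linspace(0., 1., 20)
scale = torch.tensor(2.0, requires_grad=True)        # outer learnable parameter
y = scale * torch.exp(-1.5 * t)                      # "observed" data

theta = gauss_newton(lambda p: p[0] * torch.exp(p[1] * t) - y,
                     torch.tensor([1.0, 0.0]))
theta.sum().backward()                               # gradients reach `scale`
print(theta, scale.grad)
```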

Matching Normalizing Flows and Probability Paths on Manifolds

no code implementations 11 Jul 2022 Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
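
Concretely, the model is defined by an ODE, and its log-density evolves by the instantaneous change of variables formula:

```latex
\frac{dx(t)}{dt} = f_\theta(x(t), t), \qquad
\frac{\partial \log p(x(t))}{\partial t} = -\operatorname{tr}\!\left(\frac{\partial f_\theta}{\partial x(t)}\right)
```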

Semi-Discrete Normalizing Flows through Differentiable Tessellation

1 code implementation 14 Mar 2022 Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

Mapping between discrete and continuous distributions is a difficult task, and many have had to resort to heuristic approaches.

Quantization

"Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code

no code implementations 1 Jan 2021 Patrick Kidger, Ricky T. Q. Chen, Terry Lyons

Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver.

Time Series
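
For reference, the backward pass solves the adjoint ODE from the neural-ODE literature, with a(t) the gradient of the loss with respect to the state:

```latex
a(t) := \frac{\partial L}{\partial y(t)}, \qquad
\frac{da(t)}{dt} = -a(t)^\top \frac{\partial f(y(t), t, \theta)}{\partial y}, \qquad
\frac{dL}{d\theta} = -\int_{t_1}^{t_0} a(t)^\top \frac{\partial f(y(t), t, \theta)}{\partial \theta}\, dt
```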

Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering

no code implementations NeurIPS Workshop ICBINB 2020 Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig

Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparameters.

Stochastic Optimization

Neural Spatio-Temporal Point Processes

1 code implementation ICLR 2021 Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space.

Epidemiology · Point Processes
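
The quantity such models are trained to maximize is the standard spatio-temporal point process log-likelihood, which factorizes into a temporal intensity and a conditional spatial density:

```latex
\log p\big(\{(t_i, x_i)\}_{i=1}^n\big) = \sum_{i=1}^n \Big[ \log \lambda(t_i \mid \mathcal{H}_{t_i}) + \log p(x_i \mid t_i, \mathcal{H}_{t_i}) \Big] - \int_0^T \lambda(\tau \mid \mathcal{H}_\tau)\, d\tau
```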

"Hey, that's not an ODE": Faster ODE Adjoints via Seminorms

2 code implementations 20 Sep 2020 Patrick Kidger, Ricky T. Q. Chen, Terry Lyons

Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver.

Time Series
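
In practice this trick is exposed as an option in the torchdiffeq library; a minimal usage sketch, assuming a recent torchdiffeq version and a toy dynamics module:

```python
import torch
from torchdiffeq import odeint_adjoint

class Dynamics(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(2, 2)

    def forward(self, t, y):
        return torch.tanh(self.net(y))

func, y0 = Dynamics(), torch.randn(8, 2)
t = torch.linspace(0., 1., 10)
# "seminorm" makes the backward solver's step-size control ignore the
# parameter-adjoint components, typically allowing larger steps
ys = odeint_adjoint(func, y0, t, adjoint_options=dict(norm="seminorm"))
ys.sum().backward()
```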

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models

no code implementations ICLR 2020 Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.
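
The unbiasedness comes from a Russian-roulette truncation of the telescoping series of increasingly tight IWAE bounds; schematically (treat the exact weighting below as an assumption), with Delta_k = IWAE_{k+1} - IWAE_k and K drawn from a distribution over truncation levels:

```latex
\log p(x) = \mathrm{IWAE}_1 + \sum_{k=1}^{\infty} \Delta_k, \qquad
\widehat{\log p}(x) = \mathrm{IWAE}_1 + \sum_{k=1}^{K} \frac{\Delta_k}{\mathbb{P}(\mathcal{K} \ge k)}, \quad K \sim p(\mathcal{K})
```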

Neural Networks with Cheap Differential Operators

no code implementations 8 Dec 2019 Ricky T. Q. Chen, David Duvenaud

Gradients of neural networks can be computed efficiently for any architecture, but some applications require differential operators with higher time complexity.
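
To see the cost in question: the exact divergence of a generic network needs one vector-Jacobian product per input dimension. The naive baseline below is illustrative only; the paper instead restructures the architecture so that dimension-wise derivatives come cheaply:

```python
import torch

# Naive exact divergence: one vector-Jacobian product per input dimension,
# so the cost grows linearly with the dimension d.
def divergence(f, x):
    x = x.requires_grad_(True)
    y = f(x)
    div = torch.zeros(x.shape[:-1])
    for i in range(x.shape[-1]):
        div = div + torch.autograd.grad(
            y[..., i].sum(), x, create_graph=True)[0][..., i]
    return div

f = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 4))
print(divergence(f, torch.randn(10, 4)).shape)  # torch.Size([10])
```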

Scalable Gradients and Variational Inference for Stochastic Differential Equations

no code implementations Approximate Inference (AABI) Symposium 2019 Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David K. Duvenaud

We derive reverse-mode (or adjoint) automatic differentiation for solutions of stochastic differential equations (SDEs), allowing time-efficient and constant-memory computation of pathwise gradients, a continuous-time analogue of the reparameterization trick.

Time Series · Variational Inference
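
The method is implemented in the torchsde library; a minimal usage sketch with a toy drift and diagonal diffusion, assuming torchsde's documented interface:

```python
import torch
import torchsde

class SDE(torch.nn.Module):
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Linear(2, 2)
        self.sigma = torch.nn.Linear(2, 2)

    def f(self, t, y):                      # drift
        return self.mu(y)

    def g(self, t, y):                      # diagonal diffusion
        return torch.sigmoid(self.sigma(y))

sde, y0 = SDE(), torch.randn(16, 2)
ts = torch.linspace(0., 1., 20)
ys = torchsde.sdeint_adjoint(sde, y0, ts)   # constant-memory pathwise gradients
ys.sum().backward()
```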

Latent ODEs for Irregularly-Sampled Time Series

11 code implementations 8 Jul 2019 Yulia Rubanova, Ricky T. Q. Chen, David Duvenaud

Time series with non-uniform intervals occur in many applications, and are difficult to model using standard recurrent neural networks (RNNs).

Multivariate Time Series Forecasting Multivariate Time Series Imputation +1
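
The paper's encoder is an ODE-RNN: the hidden state evolves continuously between irregular observation times and is updated discretely at each observation. A minimal sketch using torchdiffeq (dimensions and modules are illustrative):

```python
import torch
from torchdiffeq import odeint

class LatentDynamics(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 32), torch.nn.Tanh(), torch.nn.Linear(32, dim))

    def forward(self, t, h):
        return self.net(h)

dyn, gru = LatentDynamics(8), torch.nn.GRUCell(4, 8)
h = torch.zeros(1, 8)
obs_times = torch.tensor([0.0, 0.3, 1.1, 1.2])      # non-uniform intervals
obs = torch.randn(4, 1, 4)
t_prev = obs_times[0]
for t_i, x_i in zip(obs_times, obs):
    if t_i > t_prev:
        h = odeint(dyn, h, torch.stack([t_prev, t_i]))[-1]  # evolve continuously
    h = gru(x_i, h)                                         # update at observation
    t_prev = t_i
```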

Residual Flows for Invertible Generative Modeling

4 code implementations NeurIPS 2019 Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen

Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.

Density Estimation · Image Generation
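
The maximum-likelihood objective rests on the change-of-variables formula: for an invertible f mapping data to a base distribution p_z,

```latex
\log p_x(x) = \log p_z\big(f(x)\big) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```

Residual Flows' contribution is an unbiased estimator of the log-determinant term for residual blocks f(x) = x + g(x).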

Invertible Residual Networks

4 code implementations 2 Nov 2018 Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen

We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.

Density Estimation · General Classification +1
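
A minimal sketch of the core mechanism: if the residual branch g has Lipschitz constant below 1 (enforced here with spectral normalization plus a scaling factor, both illustrative choices), then x + g(x) is invertible, and the inverse can be computed by fixed-point iteration:

```python
import torch

net = torch.nn.utils.parametrizations.spectral_norm(torch.nn.Linear(2, 2))
g = lambda x: 0.97 * torch.tanh(net(x))   # Lipschitz constant < 1

def inverse(y, iters=100):
    x = y.clone()
    for _ in range(iters):                # x <- y - g(x) is a contraction,
        x = y - g(x)                      # so it converges (Banach fixed point)
    return x

x = torch.randn(5, 2)
y = x + g(x)
print((inverse(y) - x).abs().max())       # ~ 0
```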

FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models

7 code implementations ICLR 2019 Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, David Duvenaud

The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.

Ranked #1 on Density Estimation on CIFAR-10 (NLL metric)

Density Estimation · Image Generation +1
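
The unbiased density estimation comes from replacing the exact trace in the instantaneous change of variables with Hutchinson's stochastic estimator, which needs only vector-Jacobian products; a small sketch (the network f is a stand-in for the CNF dynamics):

```python
import torch

# Hutchinson's estimator: E_eps[eps^T (df/dx) eps] = tr(df/dx) for any eps
# with zero mean and unit covariance, e.g. standard Gaussian noise.
def divergence_approx(f, x, n_samples=1):
    x = x.requires_grad_(True)
    y = f(x)
    div = 0.0
    for _ in range(n_samples):
        eps = torch.randn_like(x)
        vjp = torch.autograd.grad(y, x, grad_outputs=eps,
                                  create_graph=True, retain_graph=True)[0]
        div = div + (vjp * eps).sum(dim=-1)
    return div / n_samples

f = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 4))
print(divergence_approx(f, torch.randn(10, 4)).shape)  # torch.Size([10])
```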

Isolating Sources of Disentanglement in Variational Autoencoders

9 code implementations NeurIPS 2018 Ricky T. Q. Chen, Xuechen Li, Roger Grosse, David Duvenaud

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables.

Disentanglement
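
The decomposition in question splits the aggregate KL term of the evidence lower bound into index-code mutual information, total correlation, and dimension-wise KL:

```latex
\mathbb{E}_{p(x)}\big[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\big]
= I_q(x; z)
+ \mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)
+ \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)
```

The middle (total correlation) term is the one the paper identifies as most responsible for disentanglement.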
