Search Results for author: Ricky T. Q. Chen

Found 38 papers, 21 papers with code

"Hey, that's not an ODE": Faster ODE Adjoints via Seminorms

3 code implementations • 20 Sep 2020 • Patrick Kidger, Ricky T. Q. Chen, Terry Lyons

Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver.

Time Series Time Series Analysis
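
Below is a minimal sketch of how this speedup is typically enabled in practice, assuming the torchdiffeq package; the `adjoint_options=dict(norm="seminorm")` flag follows that library's documentation, and the toy network and data are illustrative only.

```python
# Minimal sketch: training a neural ODE with the adjoint method, switching the
# adjoint's error norm to a seminorm that ignores accuracy in the
# parameter-gradient channels, so the adaptive solver can take larger steps
# on the backward pass. Assumes the torchdiffeq package.
import torch
from torchdiffeq import odeint_adjoint as odeint

class ODEFunc(torch.nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(32, 2)
t = torch.linspace(0.0, 1.0, 10)

# Backpropagating through odeint solves the adjoint ODE; the seminorm option
# changes only how that backward solve measures error.
yt = odeint(func, y0, t, rtol=1e-5, atol=1e-5,
            adjoint_options=dict(norm="seminorm"))
loss = yt[-1].pow(2).mean()
loss.backward()
```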

Isolating Sources of Disentanglement in Variational Autoencoders

10 code implementations • NeurIPS 2018 • Ricky T. Q. Chen, Xuechen Li, Roger Grosse, David Duvenaud

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables.

Disentanglement
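
For reference, the decomposition the abstract refers to splits the averaged KL term of the evidence lower bound into three parts, where q(z) is the aggregated posterior; the middle term is the total correlation that the paper penalizes:

```latex
\mathbb{E}_{p(x)}\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\prod\nolimits_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum\nolimits_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```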

Theseus: A Library for Differentiable Nonlinear Optimization

1 code implementation • 19 Jul 2022 • Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam

We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision.
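
A hedged usage sketch, in the style of the library's README, of fitting a line by differentiable nonlinear least squares; the API names (Objective, AutoDiffCostFunction, GaussNewton, TheseusLayer) follow the public docs at the time of writing and should be treated as assumptions if the interface has since changed.

```python
# Sketch: fit y = a*x + b with a differentiable Gauss-Newton solve, so the
# whole solve can sit inside a larger end-to-end learning pipeline.
import torch
import theseus as th

x_data = torch.linspace(0, 1, 20).unsqueeze(0)   # batch of 1, 20 points
y_data = 2.0 * x_data + 0.5 + 0.01 * torch.randn_like(x_data)

ab = th.Vector(2, name="ab")                     # optimization variable (a, b)
x = th.Variable(x_data, name="x")                # auxiliary (non-optimized) data
y = th.Variable(y_data, name="y")

def residual_fn(optim_vars, aux_vars):
    ab, = optim_vars
    x, y = aux_vars
    a, b = ab.tensor[:, 0:1], ab.tensor[:, 1:2]
    return a * x.tensor + b - y.tensor           # one residual per data point

objective = th.Objective()
objective.add(th.AutoDiffCostFunction([ab], residual_fn, 20, aux_vars=[x, y]))
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

values, info = layer.forward({"ab": torch.zeros(1, 2), "x": x_data, "y": y_data})
print(values["ab"])                              # approximately [2.0, 0.5]
```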

FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models

7 code implementations • ICLR 2019 • Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, David Duvenaud

The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.

Density Estimation Image Generation +1
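
A self-contained sketch of the core computation in plain PyTorch: the density follows from the instantaneous change of variables, log p(x) = log p_base(z(1)) + ∫₀¹ Tr(∂f/∂z) dt, with the trace estimated by Hutchinson's estimator. Fixed-step Euler integration is used here for brevity; the paper uses adaptive ODE solvers.

```python
import torch

def hutchinson_trace(f_out, z, eps):
    # Unbiased estimate: E[eps^T (df/dz) eps] = Tr(df/dz) for eps ~ N(0, I).
    (vjp,) = torch.autograd.grad(f_out, z, grad_outputs=eps, create_graph=True)
    return (vjp * eps).sum(dim=1)

def ffjord_logprob(f, x, base_logprob, n_steps=20):
    """Euler-integrate dz/dt = f(t, z) from data to base, accumulating the
    instantaneous change of variables d(log p)/dt = -Tr(df/dz)."""
    z = x.clone().requires_grad_(True)
    logdet = torch.zeros(x.shape[0])
    eps = torch.randn_like(x)                # one noise draw reused across steps
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0], 1), i * dt)
        dz = f(t, z)
        logdet = logdet + dt * hutchinson_trace(dz, z, eps)
        z = z + dt * dz
    return base_logprob(z) + logdet
```

Here f can be any network mapping (t, z) to a state-sized output and base_logprob the standard normal log-density; that freedom in f is the "free-form" of the title.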

Invertible Residual Networks

5 code implementations • 2 Nov 2018 • Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen

We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.

Density Estimation General Classification +1
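
A hedged sketch of the mechanism: if the residual branch g is contractive (Lipschitz constant below 1, enforced here with spectral normalization plus a scaling factor), then y = x + g(x) can be inverted by fixed-point iteration. The architecture below is illustrative, not the paper's exact configuration.

```python
import torch
from torch.nn.utils.parametrizations import spectral_norm

class InvertibleResBlock(torch.nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        # spectral_norm bounds each linear layer's Lipschitz constant near 1
        # and ELU is 1-Lipschitz; the 0.9 factor makes the branch contractive.
        self.g = torch.nn.Sequential(
            spectral_norm(torch.nn.Linear(dim, hidden)), torch.nn.ELU(),
            spectral_norm(torch.nn.Linear(hidden, dim)))
        self.scale = 0.9

    def forward(self, x):
        return x + self.scale * self.g(x)

    def inverse(self, y, n_iters=50):
        x = y
        for _ in range(n_iters):             # Banach fixed-point iteration
            x = y - self.scale * self.g(x)
        return x
```

Round-tripping blk.inverse(blk(x)) recovers x up to fixed-point tolerance; the paper additionally shows how to estimate the log-determinant of such blocks for density estimation.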

Latent ODEs for Irregularly-Sampled Time Series

12 code implementations • 8 Jul 2019 • Yulia Rubanova, Ricky T. Q. Chen, David Duvenaud

Time series with non-uniform intervals occur in many applications, and are difficult to model using standard recurrent neural networks (RNNs).

Multivariate Time Series Forecasting Multivariate Time Series Imputation +3

Flow Matching for Generative Modeling

1 code implementation • 6 Oct 2022 • Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le

These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization.

Density Estimation
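
A minimal sketch of the training objective for the straight-line (conditional OT) path, with the paper's sigma_min set to zero for brevity; v_theta is any network taking (t, x) to a velocity.

```python
import torch

def flow_matching_loss(v_theta, x1):
    """Regress the model velocity onto the straight path's target velocity.
    x1: a batch of data samples."""
    x0 = torch.randn_like(x1)                # noise endpoint
    t = torch.rand(x1.shape[0], 1)           # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1               # point on the straight path
    target = x1 - x0                         # the path's constant velocity
    return ((v_theta(t, xt) - target) ** 2).mean()
```

Sampling then integrates dx/dt = v_theta(t, x) from Gaussian noise at t = 0 to data at t = 1.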

Residual Flows for Invertible Generative Modeling

4 code implementations • NeurIPS 2019 • Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen

Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.

Density Estimation Image Generation
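
For reference, the maximum-likelihood objective rests on the change-of-variables identity below; for a residual block f(x) = x + g(x) with Lip(g) < 1, the log-determinant expands into the power series, which Residual Flows estimate without bias via a Russian-roulette truncation:

```latex
\log p_x(x) = \log p_z\big(f(x)\big) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|,
\qquad
\log\det\big(I + J_g(x)\big) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\,\mathrm{tr}\big(J_g(x)^k\big).
```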

Flow Matching on General Geometries

2 code implementations • 7 Feb 2023 • Ricky T. Q. Chen, Yaron Lipman

To extend to general geometries, we rely on the use of spectral decompositions to efficiently compute premetrics on the fly.

Neural Spatio-Temporal Point Processes

1 code implementation • ICLR 2021 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space.

Epidemiology Point Processes
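
For context, models in this family are fit by maximizing the standard spatio-temporal point-process log-likelihood, where λ* denotes the conditional intensity and the integral term counts expected events:

```latex
\log p\big(\{(t_i, x_i)\}_{i=1}^{n}\big)
  = \sum_{i=1}^{n} \log \lambda^{*}(t_i, x_i)
  - \int_{0}^{T} \!\! \int_{\mathcal{X}} \lambda^{*}(t, x)\, \mathrm{d}x\, \mathrm{d}t.
```

The paper factors λ*(t, x) into a temporal intensity and a conditional spatial density, modeling the spatial part with continuous normalizing flows.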

Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization

2 code implementations • 4 Oct 2023 • Dinghuai Zhang, Ricky T. Q. Chen, Cheng-Hao Liu, Aaron Courville, Yoshua Bengio

We tackle the problem of sampling from intractable high-dimensional density functions, a fundamental task that often appears in machine learning and statistics.

Neural Conservation Laws: A Divergence-Free Perspective

1 code implementation • 4 Oct 2022 • Jack Richter-Powell, Yaron Lipman, Ricky T. Q. Chen

We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law.
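
A hedged sketch of one classical construction in this spirit (the paper works more generally with differential forms): differentiating an antisymmetric matrix potential yields a vector field whose divergence vanishes identically, since mixed partial derivatives are symmetric while the potential is antisymmetric.

```python
# v_i(x) = sum_j dA_ij/dx_j with A = B - B^T antisymmetric, so
# div v = sum_{i,j} d^2 A_ij / (dx_i dx_j) = 0 by construction.
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 9))

def divergence_free_field(x):
    """x: (batch, 3) -> v: (batch, 3), divergence-free by construction."""
    x = x.clone().requires_grad_(True)
    B = net(x).reshape(-1, 3, 3)
    A = B - B.transpose(1, 2)                # antisymmetric potential
    rows = []
    for i in range(3):
        vi = torch.zeros(x.shape[0])
        for j in range(3):
            # Summing over the batch is safe: sample b's A depends only on x_b.
            (g,) = torch.autograd.grad(A[:, i, j].sum(), x, create_graph=True)
            vi = vi + g[:, j]
        rows.append(vi)
    return torch.stack(rows, dim=1)
```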

Semi-Discrete Normalizing Flows through Differentiable Tessellation

1 code implementation • 14 Mar 2022 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

Mapping between discrete and continuous distributions is a difficult task, and many methods have had to resort to heuristic approaches.

Quantization

Latent State Marginalization as a Low-cost Approach for Improving Exploration

1 code implementation • 3 Oct 2022 • Dinghuai Zhang, Aaron Courville, Yoshua Bengio, Qinqing Zheng, Amy Zhang, Ricky T. Q. Chen

While the maximum entropy (MaxEnt) reinforcement learning (RL) framework -- often touted for its exploration and robustness capabilities -- is usually motivated from a probabilistic perspective, the use of deep probabilistic models has not gained much traction in practice due to their inherent complexity.

Continuous Control Reinforcement Learning (RL) +1

Stochastic Optimal Control Matching

1 code implementation • 4 Dec 2023 • Carles Domingo-Enrich, Jiequn Han, Brandon Amos, Joan Bruna, Ricky T. Q. Chen

Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models.

Philosophy

Distributional GFlowNets with Quantile Flows

1 code implementation • 11 Feb 2023 • Dinghuai Zhang, Ling Pan, Ricky T. Q. Chen, Aaron Courville, Yoshua Bengio

Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers in which an agent learns a stochastic policy for generating complex combinatorial structures through a series of decision-making steps.

Decision Making

Neural Networks with Cheap Differential Operators

no code implementations • 8 Dec 2019 • Ricky T. Q. Chen, David Duvenaud

Gradients of neural networks can be computed efficiently for any architecture, but some applications require differential operators with higher time complexity.

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models

no code implementations • ICLR 2020 • Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.

"Hey, that's not an ODE'": Faster ODE Adjoints with 12 Lines of Code

no code implementations • 1 Jan 2021 • Patrick Kidger, Ricky T. Q. Chen, Terry Lyons

Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver.

Time Series Time Series Analysis

Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering

no code implementations • NeurIPS Workshop ICBINB 2020 • Ricky T. Q. Chen, Dami Choi, Lukas Balles, David Duvenaud, Philipp Hennig

Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparameters.

Stochastic Optimization

Scalable Gradients and Variational Inference for Stochastic Differential Equations

no code implementations • Approximate Inference (AABI) Symposium 2019 • Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David K. Duvenaud

We derive reverse-mode (or adjoint) automatic differentiation for solutions of stochastic differential equations (SDEs), allowing time-efficient and constant-memory computation of pathwise gradients, a continuous-time analogue of the reparameterization trick.

Time Series Time Series Analysis +1
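
This method is implemented in the torchsde package; a hedged usage sketch follows the README's SDE-class contract (noise_type, sde_type, drift f, diffusion g), though solver defaults may differ across versions.

```python
# Sketch of constant-memory pathwise gradients for an SDE via the adjoint
# method: sdeint_adjoint backpropagates by solving a backward SDE instead of
# storing the forward trajectory.
import torch
import torchsde

class SDE(torch.nn.Module):
    noise_type = "diagonal"      # g returns one diffusion term per state dim
    sde_type = "ito"

    def __init__(self, dim=2):
        super().__init__()
        self.drift = torch.nn.Linear(dim, dim)
        self.log_sigma = torch.nn.Parameter(torch.zeros(dim))

    def f(self, t, y):           # drift
        return self.drift(y)

    def g(self, t, y):           # diffusion
        return self.log_sigma.exp().expand_as(y)

sde = SDE()
y0 = torch.randn(16, 2)
ts = torch.linspace(0.0, 1.0, 20)

ys = torchsde.sdeint_adjoint(sde, y0, ts)    # (len(ts), batch, dim)
loss = ys[-1].pow(2).mean()
loss.backward()                              # gradients via the adjoint SDE
```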

Matching Normalizing Flows and Probability Paths on Manifolds

no code implementations • 11 Jul 2022 • Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).

Unifying Generative Models with GFlowNets and Beyond

no code implementations • 6 Sep 2022 • Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, Yoshua Bengio

Our framework provides a means for unifying training and inference algorithms, and provides a route to shine a unifying light over many generative models.

Decision Making

Latent Discretization for Continuous-time Sequence Compression

no code implementations • 28 Dec 2022 • Ricky T. Q. Chen, Matthew Le, Matthew Muckley, Maximilian Nickel, Karen Ullrich

We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that our approaches can automatically achieve reductions in bit rates by learning how to discretize.

Multisample Flow Matching: Straightening Flows with Minibatch Couplings

no code implementations • 28 Apr 2023 • Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen

Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
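
A hedged sketch of the minibatch-coupling idea: re-pair noise and data within each batch by solving a small assignment problem before computing the flow matching regression, which straightens the learned flow. SciPy's Hungarian solver stands in here for whichever coupling the paper uses.

```python
import torch
from scipy.optimize import linear_sum_assignment

def coupled_flow_matching_loss(v_theta, x1):
    x0 = torch.randn_like(x1)
    # Pairwise squared distances between noise and data samples in the batch.
    cost = torch.cdist(x0, x1, p=2).pow(2)
    row, col = linear_sum_assignment(cost.detach().numpy())
    x0, x1 = x0[row], x1[col]                # optimally re-paired endpoints
    t = torch.rand(x1.shape[0], 1)
    xt = (1 - t) * x0 + t * x1               # straight path between the pair
    return ((v_theta(t, xt) - (x1 - x0)) ** 2).mean()
```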

On Kinetic Optimal Probability Paths for Generative Models

no code implementations • 11 Jun 2023 • Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matt Le, Yaron Lipman

We investigate Kinetic Optimal (KO) Gaussian paths and offer the following observations: (i) we show the kinetic energy takes a simplified form on the space of Gaussian paths, where the data is incorporated only through a single one-dimensional scalar function, called the data separation function.

Generalized Schrödinger Bridge Matching

no code implementations • 3 Oct 2023 • Guan-Horng Liu, Yaron Lipman, Maximilian Nickel, Brian Karrer, Evangelos A. Theodorou, Ricky T. Q. Chen

Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions.

Bespoke Solvers for Generative Flow Models

no code implementations • 29 Oct 2023 • Neta Shaul, Juan Perez, Ricky T. Q. Chen, Ali Thabet, Albert Pumarola, Yaron Lipman

For example, a Bespoke solver for a CIFAR10 model produces samples with a Fréchet Inception Distance (FID) of 2.73 with 10 NFE, and gets to within 1% of the Ground Truth (GT) FID (2.59) for this model with only 20 NFE.

Guided Flows for Generative Modeling and Decision Making

no code implementations • 22 Nov 2023 • Qinqing Zheng, Matt Le, Neta Shaul, Yaron Lipman, Aditya Grover, Ricky T. Q. Chen

Classifier-free guidance is a key component for enhancing the performance of conditional generative models across diverse tasks.

Conditional Image Generation Decision Making +3
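
A hedged sketch of classifier-free guidance applied to a flow's vector field: the sampling velocity extrapolates from the unconditional to the conditional prediction. The model interface v_theta(t, x, c) and the null-condition token are assumptions for illustration.

```python
import torch

def guided_velocity(v_theta, t, x, c, null_c, w=2.0):
    # w = 0 recovers the unconditional model, w = 1 the conditional one;
    # w > 1 extrapolates past the conditional prediction.
    v_cond = v_theta(t, x, c)
    v_uncond = v_theta(t, x, null_c)
    return v_uncond + w * (v_cond - v_uncond)

def sample(v_theta, c, null_c, shape, n_steps=100, w=2.0):
    x = torch.randn(shape)                   # noise at t = 0
    dt = 1.0 / n_steps
    for i in range(n_steps):                 # simple Euler integration
        t = torch.full((shape[0], 1), i * dt)
        x = x + dt * guided_velocity(v_theta, t, x, c, null_c, w)
    return x                                 # approximate sample at t = 1
```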

TaskMet: Task-Driven Metric Learning for Model Learning

no code implementations • NeurIPS 2023 • Dishank Bansal, Ricky T. Q. Chen, Mustafa Mukadam, Brandon Amos

We propose to take the task loss signal one level deeper than the parameters of the model, and use it to learn the parameters of the loss function the model is trained on, which can be done by learning a metric in the prediction space.

Metric Learning Portfolio Optimization

Reflected Schrödinger Bridge for Constrained Generative Modeling

no code implementations • 6 Jan 2024 • Wei Deng, Yu Chen, Nicole Tianjiao Yang, Hengrong Du, Qi Feng, Ricky T. Q. Chen

Diffusion models have become the go-to method for large-scale generative models in real-world applications.

Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models

no code implementations • 2 Mar 2024 • Neta Shaul, Uriel Singer, Ricky T. Q. Chen, Matthew Le, Ali Thabet, Albert Pumarola, Yaron Lipman

This paper introduces Bespoke Non-Stationary (BNS) Solvers, a solver distillation approach to improve sample efficiency of Diffusion and Flow models.

Audio Generation Conditional Image Generation +1

Training-free Linear Image Inverses via Flows

no code implementations • 25 Sep 2023 • Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, Brian Karrer

Solving inverse problems without any training involves using a pretrained generative model and making appropriate modifications to the generation process to avoid finetuning of the generative model.
