Search Results for author: Brandon Amos

Found 44 papers, 32 papers with code

Neural Optimal Transport with Lagrangian Costs

1 code implementation • 1 Jun 2024 Aram-Alexandre Pooladian, Carles Domingo-Enrich, Ricky T. Q. Chen, Brandon Amos

We investigate the optimal transport problem between probability measures when the underlying cost function is understood to satisfy a least action principle, also known as a Lagrangian cost.

AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs

1 code implementation • 21 Apr 2024 Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian

While recently Large Language Models (LLMs) have achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to generation of inappropriate or harmful content.

TaskMet: Task-Driven Metric Learning for Model Learning

no code implementations NeurIPS 2023 Dishank Bansal, Ricky T. Q. Chen, Mustafa Mukadam, Brandon Amos

We propose to take the task loss signal one level deeper than the model's parameters and to use it to learn the parameters of the loss function the model is trained on, which can be done by learning a metric in the prediction space.

Metric Learning Portfolio Optimization

Stochastic Optimal Control Matching

1 code implementation • 4 Dec 2023 Carles Domingo-Enrich, Jiequn Han, Brandon Amos, Joan Bruna, Ricky T. Q. Chen

Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models.

Learning to Warm-Start Fixed-Point Optimization Algorithms

2 code implementations • 14 Sep 2023 Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato

We introduce a machine-learning framework to warm-start fixed-point optimization algorithms.

Generalization Bounds
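A toy sketch of the warm-starting idea (hypothetical code, not the paper's framework): solve a family of fixed-point problems, reuse solutions of previously solved instances as a stand-in for a learned warm-start predictor, and count how many iterations are saved.

```python
def fixed_point(T, x0, tol=1e-10, max_iters=1000):
    """Iterate x <- T(x) until successive iterates are within tol."""
    x = x0
    for iters in range(1, max_iters + 1):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next, iters
        x = x_next
    return x, max_iters

# Toy problem family: T_a(x) = 0.5 * (x + a / x) has fixed point sqrt(a).
def make_T(a):
    return lambda x: 0.5 * (x + a / x)

# A minimal "learned" warm start: reuse the solution of the nearest
# previously solved instance (a stand-in for a trained predictor).
solved = {1.0: 1.0, 4.0: 2.0, 9.0: 3.0, 16.0: 4.0}
def warm_start(a):
    nearest = min(solved, key=lambda k: abs(k - a))
    return solved[nearest]

a_test = 10.0
_, cold_iters = fixed_point(make_T(a_test), 1.0)                 # generic initialization
_, warm_iters = fixed_point(make_T(a_test), warm_start(a_test))  # learned initialization
print(cold_iters, warm_iters)  # the warm start reaches the fixed point in fewer iterations
```

The paper learns the warm-start map end-to-end and provides generalization bounds; this sketch only illustrates why a good initial point reduces iteration counts.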

Score Function Gradient Estimation to Widen the Applicability of Decision-Focused Learning

no code implementations • 11 Jul 2023 Mattia Silvestri, Senne Berden, Jayanta Mandi, Ali İrfan Mahmutoğulları, Brandon Amos, Tias Guns, Michele Lombardi

Many real-world optimization problems contain parameters that are unknown before deployment time, either due to stochasticity or to lack of information (e.g., demand or travel times in delivery problems).

Stochastic Optimization

Multisample Flow Matching: Straightening Flows with Minibatch Couplings

no code implementations • 28 Apr 2023 Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen

Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
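A hypothetical one-dimensional sketch of the minibatch-coupling idea (not the paper's method, which uses optimal transport couplings in general dimension): compare independent noise-data pairing against the in-batch optimal coupling, which in 1-D is simply obtained by sorting.

```python
import random

random.seed(0)
# Toy minibatch: noise samples x0 and data samples x1 (one-dimensional case).
x0 = [random.gauss(0.0, 1.0) for _ in range(64)]
x1 = [random.gauss(5.0, 1.0) for _ in range(64)]

def fm_targets(pairs, t=0.5):
    """Straight-line path x_t = (1-t) x0 + t x1 with target velocity x1 - x0."""
    return [((1 - t) * a + t * b, b - a) for a, b in pairs]

# Independent pairing (plain flow matching) vs. an in-batch coupling; in one
# dimension the optimal coupling simply matches sorted samples.
independent = list(zip(x0, x1))
coupled = list(zip(sorted(x0), sorted(x1)))

def transport_cost(pairs):
    return sum((b - a) ** 2 for a, b in pairs) / len(pairs)

print(transport_cost(independent), transport_cost(coupled))
# The coupled minibatch yields shorter, straighter regression targets.
```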

On amortizing convex conjugates for optimal transport

1 code implementation • 21 Oct 2022 Brandon Amos

I show that combining amortized approximations to the conjugate with a solver for fine-tuning significantly improves the quality of transport maps learned for the Wasserstein-2 benchmark by Korotin et al. (2021a) and is able to model many 2-dimensional couplings and flows considered in the literature.
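As a minimal sketch of the amortize-then-fine-tune pattern (a hypothetical toy, not the paper's models): the convex conjugate is f*(y) = max_x ⟨x, y⟩ − f(x); an amortized guess of the maximizer is refined by a gradient-based solver.

```python
def f(x):                     # a simple convex potential, f(x) = x^2 / 2
    return 0.5 * x * x

def conjugate(y, x_init, lr=0.1, steps=200):
    """f*(y) = max_x x*y - f(x), by gradient ascent from an initial guess."""
    x = x_init
    for _ in range(steps):
        x += lr * (y - x)     # gradient of x*y - f(x) is y - x for this f
    return x * y - f(x), x

# An "amortized" initial guess (a stand-in for a trained model) that the
# solver then fine-tunes; for f(x) = x^2/2, the true conjugate is y^2/2.
y = 3.0
value, x_star = conjugate(y, x_init=0.9 * y)
print(value, x_star)          # ≈ 4.5 and 3.0
```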

Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories

1 code implementation • 12 Oct 2022 Qinqing Zheng, Mikael Henaff, Brandon Amos, Aditya Grover

For this setting, we develop and study a simple meta-algorithmic pipeline that learns an inverse dynamics model on the labelled data to obtain proxy-labels for the unlabelled data, followed by the use of any offline RL algorithm on the true and proxy-labelled trajectories.

D4RL Offline RL +2

Theseus: A Library for Differentiable Nonlinear Optimization

1 code implementation • 19 Jul 2022 Luis Pineda, Taosha Fan, Maurizio Monge, Shobha Venkataraman, Paloma Sodhi, Ricky T. Q. Chen, Joseph Ortiz, Daniel DeTone, Austin Wang, Stuart Anderson, Jing Dong, Brandon Amos, Mustafa Mukadam

We present Theseus, an efficient application-agnostic open source library for differentiable nonlinear least squares (DNLS) optimization built on PyTorch, providing a common framework for end-to-end structured learning in robotics and vision.

Matching Normalizing Flows and Probability Paths on Manifolds

no code implementations • 11 Jul 2022 Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).

Nocturne: a scalable driving benchmark for bringing multi-agent learning one step closer to the real world

1 code implementation • 20 Jun 2022 Eugene Vinitsky, Nathan Lichtlé, Xiaomeng Yang, Brandon Amos, Jakob Foerster

We introduce Nocturne, a new 2D driving simulator for investigating multi-agent coordination under partial observability.

Imitation Learning

Meta Optimal Transport

1 code implementation • 10 Jun 2022 Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko

We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT.

Semi-Discrete Normalizing Flows through Differentiable Tessellation

1 code implementation • 14 Mar 2022 Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

Mapping between discrete and continuous distributions is a difficult task and many have had to resort to heuristical approaches.

Tutorial on amortized optimization

1 code implementation • 1 Feb 2022 Brandon Amos

Optimization is a ubiquitous modeling tool and is often deployed in settings which repeatedly solve similar instances of the same problem.

Meta-Learning reinforcement-learning +2
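A toy illustration of amortized optimization (hypothetical code, not from the tutorial): fit a cheap model that maps a problem's context directly to its solution, using previously solved instances as training data.

```python
# Family of problems min_x (x - c)^2 indexed by context c; the solution map is
# x*(c) = c. Amortized optimization fits a model x_hat(c) ≈ x*(c) from solved
# instances and predicts solutions for new instances without re-running a solver.
contexts = [-2.0, -1.0, 0.5, 2.0]
solutions = [c for c in contexts]   # ground-truth minimizers from an exact solver

# Least-squares fit of a linear amortization model x_hat(c) = w*c + b.
n = len(contexts)
w = (n * sum(c * s for c, s in zip(contexts, solutions))
     - sum(contexts) * sum(solutions)) / \
    (n * sum(c * c for c in contexts) - sum(contexts) ** 2)
b = (sum(solutions) - w * sum(contexts)) / n

x_hat = w * 7.0 + b   # amortized solution for a new, unseen context c = 7.0
print(x_hat)          # ≈ 7.0, the true minimizer of (x - 7)^2
```

In practice, the amortization model is a neural network and the problem family is far richer; the point is only that repeated instances make the solution map learnable.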

Input Convex Gradient Networks

no code implementations • 23 Nov 2021 Jack Richter-Powell, Jonathan Lorraine, Brandon Amos

The gradients of convex functions are expressive models of non-trivial vector fields.

Cross-Domain Imitation Learning via Optimal Transport

1 code implementation ICLR 2022 Arnaud Fickinger, Samuel Cohen, Stuart Russell, Brandon Amos

Cross-domain imitation learning studies how to leverage expert demonstrations of one agent to train an imitation agent with a different embodiment or morphology.

Continuous Control Imitation Learning

Learning Complex Geometric Structures from Data with Deep Riemannian Manifolds

no code implementations • 29 Sep 2021 Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos

We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.

Neural Fixed-Point Acceleration for Convex Optimization

1 code implementation ICML Workshop AutoML 2021 Shobha Venkataraman, Brandon Amos

Fixed-point iterations are at the heart of numerical computing and are often a computational bottleneck in real-time applications that typically need a fast solution of moderate accuracy.

Riemannian Convex Potential Maps

1 code implementation • 18 Jun 2021 Samuel Cohen, Brandon Amos, Yaron Lipman

Modeling distributions on Riemannian manifolds is a crucial component in understanding non-Euclidean data that arises, e.g., in physics and geology.

CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints

1 code implementation • 5 May 2021 Anselm Paulus, Michal Rolínek, Vít Musil, Brandon Amos, Georg Martius

Bridging logical and algorithmic reasoning with modern machine learning techniques is a fundamental challenge with potentially transformative impact.

MBRL-Lib: A Modular Library for Model-based Reinforcement Learning

3 code implementations • 20 Apr 2021 Luis Pineda, Brandon Amos, Amy Zhang, Nathan O. Lambert, Roberto Calandra

MBRL-Lib is designed as a platform both for researchers, to easily develop, debug, and compare new algorithms, and for non-expert users, to lower the barrier to deploying state-of-the-art algorithms.

Model-based Reinforcement Learning reinforcement-learning +1

Sliced Multi-Marginal Optimal Transport

no code implementations • 14 Feb 2021 Samuel Cohen, Alexander Terenin, Yannik Pitcan, Brandon Amos, Marc Peter Deisenroth, K S Sesh Kumar

To construct this distance, we introduce a characterization of the one-dimensional multi-marginal Kantorovich problem and use it to highlight a number of properties of the sliced multi-marginal Wasserstein distance.

Density Estimation Multi-Task Learning
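The one-dimensional building block used by sliced distances has a closed form: sort both samples and match them in order. A minimal sketch (a toy, not the paper's multi-marginal construction):

```python
# For one-dimensional empirical measures with uniform weights, the Kantorovich
# problem is solved by sorting both samples and matching them in order.
def wasserstein_1d(xs, ys, p=2):
    """p-Wasserstein distance between two equal-size 1-D samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)) ** (1 / p)

# Sliced variants average such 1-D distances over random projection directions;
# here we only evaluate the 1-D building block.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], p=1))  # → 1.0
```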

Neural Potts Model

no code implementations • 1 Jan 2021 Tom Sercu, Robert Verkuil, Joshua Meier, Brandon Amos, Zeming Lin, Caroline Chen, Jason Liu, Yann LeCun, Alexander Rives

We propose the Neural Potts Model objective as an amortized optimization problem.

Neural Spatio-Temporal Point Processes

1 code implementation ICLR 2021 Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space.

Epidemiology Point Processes

Fit The Right NP-Hard Problem: End-to-end Learning of Integer Programming Constraints

no code implementations NeurIPS Workshop LMCA 2020 Anselm Paulus, Michal Rolinek, Vít Musil, Brandon Amos, Georg Martius

Bridging logical and algorithmic reasoning with modern machine learning techniques is a fundamental challenge with potentially transformative impact.

On the model-based stochastic value gradient for continuous reinforcement learning

1 code implementation • 28 Aug 2020 Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson

For over a decade, model-based reinforcement learning has been seen as a way to leverage control-based domain knowledge to improve the sample-efficiency of reinforcement learning agents.

Continuous Control Humanoid Control +4

Aligning Time Series on Incomparable Spaces

1 code implementation • 22 Jun 2020 Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth

Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.

Dynamic Time Warping Imitation Learning +2
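The paper extends alignment to incomparable spaces; as background, here is the classic dynamic-programming DTW it builds on, for sequences that do live in the same space (a standard textbook sketch, not the paper's method):

```python
# Classic DTW: align two sequences by minimizing cumulative pairwise cost
# over all monotone alignments, via dynamic programming.
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + \
                min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2], [0, 0, 1, 2]))  # → 0.0: the warped alignment absorbs the repeat
```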

Objective Mismatch in Model-based Reinforcement Learning

2 code implementations ICLR 2020 Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra

In our experiments, we study this objective mismatch issue and demonstrate that the likelihood of one-step ahead predictions is not always correlated with control performance.

Model-based Reinforcement Learning reinforcement-learning +1

Differentiable Convex Optimization Layers

1 code implementation NeurIPS 2019 Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, Zico Kolter

In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization.

Inductive Bias

Generalized Inner Loop Meta-Learning

3 code implementations • 3 Oct 2019 Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala

Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem.

Meta-Learning reinforcement-learning +1

The Differentiable Cross-Entropy Method

1 code implementation ICML 2020 Brandon Amos, Denis Yarats

We study the cross-entropy method (CEM) for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant that enables us to differentiate the output of CEM with respect to the objective function's parameters.

BIG-bench Machine Learning Continuous Control +1
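For context, here is the vanilla (non-differentiable) CEM that the paper starts from, on a toy quadratic (a minimal sketch, not the paper's differentiable variant): sample from a Gaussian, keep the best samples, and refit the Gaussian to them.

```python
import random

random.seed(0)

def cem(objective, mu=0.0, sigma=5.0, n_samples=100, n_elites=10, iters=30):
    """Cross-entropy method: sample, keep the elites, refit the Gaussian."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elites = sorted(xs, key=objective)[:n_elites]
        mu = sum(elites) / n_elites
        sigma = (sum((x - mu) ** 2 for x in elites) / n_elites) ** 0.5 + 1e-6
    return mu

x_star = cem(lambda x: (x - 3.0) ** 2)
print(x_star)   # ≈ 3.0, the minimizer of (x - 3)^2
```

The paper's contribution is making the elite-selection step differentiable so gradients can flow through the optimizer's output.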

The Limited Multi-Label Projection Layer

1 code implementation • 20 Jun 2019 Brandon Amos, Vladlen Koltun, J. Zico Kolter

We propose the Limited Multi-Label (LML) projection layer as a new primitive operation for end-to-end learning systems.

General Classification Graph Generation +1

Differentiable MPC for End-to-end Planning and Control

2 code implementations NeurIPS 2018 Brandon Amos, Ivan Dario Jimenez Rodriguez, Jacob Sacks, Byron Boots, J. Zico Kolter

We present foundations for using Model Predictive Control (MPC) as a differentiable policy class for reinforcement learning in continuous state and action spaces.

Imitation Learning Model Predictive Control

Depth-Limited Solving for Imperfect-Information Games

no code implementations NeurIPS 2018 Noam Brown, Tuomas Sandholm, Brandon Amos

This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit.

Learning Awareness Models

no code implementations ICLR 2018 Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Sergio Gómez Colmenarejo, Alistair Muldal, Tom Erez, Yuval Tassa, Nando de Freitas, Misha Denil

We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world.

Task-based End-to-end Model Learning in Stochastic Optimization

1 code implementation NeurIPS 2017 Priya L. Donti, Brandon Amos, J. Zico Kolter

With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process.

BIG-bench Machine Learning Scheduling +1

OptNet: Differentiable Optimization as a Layer in Neural Networks

6 code implementations ICML 2017 Brandon Amos, J. Zico Kolter

This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks.

Bilevel Optimization
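A pared-down, hypothetical illustration of an optimization layer (not OptNet's solver): for an unconstrained quadratic program the solution and its derivative are available in closed form, so the "backward pass" is analytic.

```python
# Toy "optimization layer": for the unconstrained quadratic program
#   x*(p) = argmin_x 0.5 x^T Q x + p^T x = -Q^{-1} p,
# the backward pass is analytic: dx*/dp = -Q^{-1}. OptNet generalizes this to
# constrained QPs by differentiating the KKT conditions inside the layer.
Q = [[3.0, 1.0], [1.0, 2.0]]    # symmetric positive definite

def inv2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

Q_inv = inv2x2(Q)

def layer(p):                   # forward pass: solve the QP in closed form
    return [-z for z in matvec(Q_inv, p)]

# Check the analytic derivative against a finite difference on one entry.
p = [1.0, -2.0]
eps = 1e-6
fd = (layer([p[0] + eps, p[1]])[0] - layer([p[0] - eps, p[1]])[0]) / (2 * eps)
analytic = -Q_inv[0][0]
print(analytic, fd)             # the two derivatives agree
```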

Input Convex Neural Networks

3 code implementations ICML 2017 Brandon Amos, Lei Xu, J. Zico Kolter

We show that many existing neural network architectures can be made input-convex with a minor modification, and develop specialized optimization algorithms tailored to this setting.

Imputation Inference Optimization +3
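A minimal sketch of the input-convexity construction (a hypothetical one-layer toy, not the paper's full architecture): keep the weights that multiply hidden convex quantities non-negative, and use convex, non-decreasing activations.

```python
# Minimal input-convex network: f(x) = sum_i w2_i * relu(w1_i * x + b_i),
# which is convex in x when the second-layer weights w2_i are non-negative
# (ReLU is convex, and non-negative sums of convex functions are convex).
relu = lambda z: max(z, 0.0)
w1 = [1.0, -2.0, 0.5]   # first-layer weights may have any sign
b  = [0.0, 1.0, -0.5]
w2 = [0.3, 0.7, 1.5]    # constrained non-negative to keep f convex in x

def f(x):
    return sum(w * relu(a * x + bi) for w, a, bi in zip(w2, w1, b))

# Numerical check of convexity: f((x + y)/2) <= (f(x) + f(y))/2 on sample points.
pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
ok = all(f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-12 for x in pts for y in pts)
print(ok)   # → True
```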
