Search Results for author: James Harrison

Found 28 papers, 16 papers with code

Learning Sampling Distributions for Robot Motion Planning

2 code implementations 16 Sep 2017 Brian Ichter, James Harrison, Marco Pavone

This paper proposes a methodology for non-uniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling.

Collision Avoidance Motion Planning
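
Below is a minimal sketch of the biased-sampling idea on a toy 2-D configuration space. The learned distribution is a Gaussian placeholder (the paper learns it from demonstrations with a conditional variational autoencoder), and the bias fraction is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_sampler(n):
    # Stand-in for a distribution fit to successful demonstrations;
    # the paper learns this with a conditional VAE.
    return rng.normal(loc=[0.5, 0.5], scale=0.1, size=(n, 2))

def uniform_sampler(n):
    return rng.uniform(0.0, 1.0, size=(n, 2))

def biased_samples(n, bias=0.7):
    """Draw a fraction `bias` of configurations from the learned distribution
    and the rest uniformly, which preserves probabilistic completeness."""
    n_learned = int(bias * n)
    return np.vstack([learned_sampler(n_learned), uniform_sampler(n - n_learned)])

print(biased_samples(10))
```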

BaRC: Backward Reachability Curriculum for Robotic Reinforcement Learning

1 code implementation 16 Jun 2018 Boris Ivanovic, James Harrison, Apoorva Sharma, Mo Chen, Marco Pavone

Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance.

Continuous Control reinforcement-learning +1
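
A toy sketch of the curriculum loop, assuming a 1-D double integrator with a noisy PD law standing in for policy optimization. BaRC expands the start set using approximate backward reachable sets and retrains the policy at each stage; the geometric widening below is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL, DT, HORIZON = 1.0, 0.1, 60

def rollout_success(x0, noise=0.1):
    """Roll out a fixed stand-in 'policy' (a noisy PD law) and check goal reach."""
    x, v = x0, 0.0
    for _ in range(HORIZON):
        u = -4.0 * (x - GOAL) - 3.0 * v + noise * rng.normal()
        v += DT * u
        x += DT * v
    return abs(x - GOAL) < 0.05

# Curriculum: start from states close to the goal and expand the
# initial-state set once the success rate is high enough.
radius = 0.05
for it in range(8):
    starts = rng.uniform(GOAL - radius, GOAL + radius, size=50)
    success_rate = np.mean([rollout_success(x0) for x0 in starts])
    if success_rate > 0.8:
        radius *= 1.5   # stand-in for a dynamically consistent backward expansion
    print(f"iter {it}: success={success_rate:.2f}, radius={radius:.3f}")
```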

Meta-Learning Priors for Efficient Online Bayesian Regression

3 code implementations 24 Jul 2018 James Harrison, Apoorva Sharma, Marco Pavone

However, this approach suffers from two main drawbacks: (1) it is computationally inefficient, as computation scales poorly with the number of samples; and (2) it can be data inefficient, as encoding prior knowledge that can aid the model through the choice of kernel and associated hyperparameters is often challenging and unintuitive.

Meta-Learning regression
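
A minimal sketch of the alternative the paper pursues: Bayesian linear regression on a feature basis, with a recursive posterior update whose per-step cost depends only on the feature dimension, not on the number of past samples. The features and prior below are hand-picked placeholders, whereas the paper meta-learns both.

```python
import numpy as np

def features(x):
    # Placeholder basis; the paper meta-learns these features (and the prior).
    return np.array([1.0, x, x**2])

d, noise_var = 3, 0.1
Lambda = np.eye(d)       # prior precision (meta-learned in the paper)
eta = np.zeros(d)        # precision-weighted mean (zero prior mean)

def update(eta, Lambda, x, y):
    """Recursive posterior update: O(d^2) per sample, independent of history length."""
    phi = features(x)
    return eta + phi * y / noise_var, Lambda + np.outer(phi, phi) / noise_var

for x, y in [(0.0, 1.0), (1.0, 2.1), (2.0, 5.2)]:
    eta, Lambda = update(eta, Lambda, x, y)

phi = features(1.5)
mean = phi @ np.linalg.solve(Lambda, eta)             # posterior predictive mean
var = noise_var + phi @ np.linalg.solve(Lambda, phi)  # posterior predictive variance
print(mean, var)
```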

Robust and Adaptive Planning under Model Uncertainty

no code implementations 9 Jan 2019 Apoorva Sharma, James Harrison, Matthew Tsao, Marco Pavone

The first, RAMCP-F, converges to an optimal risk-sensitive policy without having to rebuild the search tree as the underlying belief over models is perturbed.

Computational Efficiency Decision Making

Network Offloading Policies for Cloud Robotics: a Learning-based Approach

no code implementations 15 Feb 2019 Sandeep Chinchali, Apoorva Sharma, James Harrison, Amine Elhafsi, Daniel Kang, Evgenya Pergament, Eyal Cidon, Sachin Katti, Marco Pavone

In this paper, we formulate a novel Robot Offloading Problem: how and when should robots offload sensing tasks, especially if they are uncertain, to improve accuracy while minimizing the cost of cloud communication?

Decision Making object-detection +1
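
The paper formulates offloading as a sequential decision problem and learns an offloading policy with deep reinforcement learning; the snippet below is only a naive confidence-threshold heuristic to make the decision variables concrete. All names, thresholds, and costs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def should_offload(confidence, offload_cost, threshold=0.7, budget=1.0):
    """Offload a sensing task to the cloud when the on-robot model is not
    confident enough and the communication cost fits the remaining budget."""
    return confidence < threshold and offload_cost <= budget

# Example: per-frame detector confidences and a fixed per-query network cost.
confidences = rng.uniform(0.4, 1.0, size=5)
for c in confidences:
    print(f"confidence={c:.2f} -> offload={should_offload(c, offload_cost=0.2)}")
```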

Continuous Meta-Learning without Tasks

2 code implementations NeurIPS 2020 James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone

In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.

Image Classification Meta-Learning +2

Deep Reinforcement Learning amidst Lifelong Non-Stationarity

no code implementations ICML Workshop LifelongML 2020 Annie Xie, James Harrison, Chelsea Finn

As humans, our goals and our environment are persistently changing throughout our lifetime based on our experiences, actions, and internal and external drives.

reinforcement-learning Reinforcement Learning (RL)

Safe Active Dynamics Learning and Control: A Sequential Exploration-Exploitation Framework

no code implementations 26 Aug 2020 Thomas Lew, Apoorva Sharma, James Harrison, Andrew Bylard, Marco Pavone

In this work, we propose a practical and theoretically-justified approach to maintaining safety in the presence of dynamics uncertainty.

Meta-Learning Meta Reinforcement Learning

Sparse Longitudinal Representations of Electronic Health Record Data for the Early Detection of Chronic Kidney Disease in Diabetic Patients

no code implementations 9 Nov 2020 Jinghe Zhang, Kamran Kowsari, Mehdi Boukhechba, James Harrison, Jennifer Lobo, Laura Barnes

Chronic kidney disease (CKD) is a gradual loss of renal function over time, and it increases the risk of mortality, reduced quality of life, and serious complications.

Particle MPC for Uncertain and Learning-Based Control

no code implementations 6 Apr 2021 Robert Dyro, James Harrison, Apoorva Sharma, Marco Pavone

As robotic systems move from highly structured environments to open worlds, incorporating uncertainty from dynamics learning or state estimation into the control pipeline is essential for robust performance.

Model-based Reinforcement Learning Model Predictive Control
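
A sketch of the particle idea: score each candidate control sequence by its average cost over sampled dynamics parameters ("particles") and apply the first action of the best sequence. Random shooting on a toy point mass stands in for the paper's optimization-based MPC; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, N_SEQ, N_PART, DT = 10, 64, 20, 0.1

def rollout_cost(x0, u_seq, mass):
    x, v, cost = x0, 0.0, 0.0
    for u in u_seq:
        v += DT * u / mass
        x += DT * v
        cost += x**2 + 0.01 * u**2
    return cost

def particle_mpc_action(x0):
    masses = rng.normal(1.0, 0.2, size=N_PART)        # uncertain-parameter particles
    u_candidates = rng.uniform(-1, 1, size=(N_SEQ, H))
    avg_costs = [np.mean([rollout_cost(x0, u, m) for m in masses])
                 for u in u_candidates]
    return u_candidates[int(np.argmin(avg_costs))][0]  # receding-horizon first action

print(particle_mpc_action(1.0))
```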

Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty

no code implementations 16 Apr 2021 Rohan Sinha, James Harrison, Spencer M. Richards, Marco Pavone

We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component.

Model Predictive Control
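
A sketch of the system class in the matched-uncertainty case: nominally linear dynamics with an additive nonlinearity that enters through the input channel, so a learned estimate of it can be cancelled directly in the control law. The matrices, gains, and models below are illustrative placeholders, not the paper's predictive controller.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-8.0, -4.0]])            # nominal stabilizing feedback (placeholder)

def g_true(x):                          # unknown matched nonlinearity
    return np.array([0.5 * np.sin(x[0])])

def g_hat(x):                           # learned estimate of g_true
    return np.array([0.45 * np.sin(x[0])])

x = np.array([1.0, 0.0])
for _ in range(50):
    u = K @ x - g_hat(x)                # feedback plus uncertainty cancellation
    x = A @ x + B @ (u + g_true(x))     # x_{k+1} = A x_k + B u_k + B g(x_k)
print(x)
```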

Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems

1 code implementation 23 Apr 2021 Daniele Gammelli, Kaidi Yang, James Harrison, Filipe Rodrigues, Francisco C. Pereira, Marco Pavone

Autonomous mobility-on-demand (AMoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of robotic, self-driving vehicles.

Decision Making reinforcement-learning +1

Bayesian Embeddings for Few-Shot Open World Recognition

no code implementations 29 Jul 2021 John Willes, James Harrison, Ali Harakeh, Chelsea Finn, Marco Pavone, Steven Waslander

As autonomous decision-making agents move from narrow operating environments to unstructured worlds, learning systems must move from a closed-world formulation to an open-world and few-shot setting in which agents continuously learn new classes from small amounts of information.

Decision Making Few-Shot Learning

On the Problem of Reformulating Systems with Uncertain Dynamics as a Stochastic Differential Equation

no code implementations 11 Nov 2021 Thomas Lew, Apoorva Sharma, James Harrison, Edward Schmerling, Marco Pavone

We identify an issue in recent approaches to learning-based control that reformulate systems with uncertain dynamics using a stochastic differential equation.

Graph Meta-Reinforcement Learning for Transferable Autonomous Mobility-on-Demand

1 code implementation 15 Feb 2022 Daniele Gammelli, Kaidi Yang, James Harrison, Filipe Rodrigues, Francisco C. Pereira, Marco Pavone

Autonomous Mobility-on-Demand (AMoD) systems represent an attractive alternative to existing transportation paradigms, currently challenged by urbanization and increasing travel needs.

Meta Reinforcement Learning reinforcement-learning +1

Practical tradeoffs between memory, compute, and performance in learned optimizers

1 code implementation 22 Mar 2022 Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein

We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work.
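
A structural sketch of a small per-parameter learned optimizer: a tiny shared MLP maps per-parameter features (gradient and momentum) to an update, so memory grows with the number of accumulators per parameter and compute with the MLP size. The MLP weights would be meta-trained; here they are random, purely to show the structure, and the toy loop makes no claim about optimization quality.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 4
W1, b1 = 0.1 * rng.normal(size=(HIDDEN, 2)), np.zeros((HIDDEN, 1))
W2, b2 = 0.1 * rng.normal(size=(1, HIDDEN)), np.zeros((1, 1))

def learned_update(grad, momentum):
    feats = np.stack([grad, momentum])   # (2, n_params) per-parameter features
    h = np.tanh(W1 @ feats + b1)
    return 0.01 * (W2 @ h + b2)[0]       # small per-parameter update

# Toy use on a quadratic loss f(theta) = 0.5 * ||theta||^2 (gradient = theta).
theta = rng.normal(size=5)
m = np.zeros_like(theta)                 # one extra accumulator per parameter
for _ in range(100):
    g = theta
    m = 0.9 * m + 0.1 * g
    theta = theta - learned_update(g, m)
print(theta)
```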

A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases

1 code implementation 22 Sep 2022 James Harrison, Luke Metz, Jascha Sohl-Dickstein

We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state-of-the-art learned optimizer, at matched optimizer computational overhead, in both optimization performance and meta-training speed, and generalizes to tasks far different from those it was meta-trained on.

Inductive Bias

Expanding the Deployment Envelope of Behavior Prediction via Adaptive Meta-Learning

2 code implementations 23 Sep 2022 Boris Ivanovic, James Harrison, Marco Pavone

Learning-based behavior prediction methods are increasingly being deployed in real-world autonomous systems, e.g., in fleets of self-driving vehicles, which are beginning to commercially operate in major cities across the world.

Meta-Learning regression

VeLO: Training Versatile Learned Optimizers by Scaling Up

1 code implementation 17 Nov 2022 Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Sohl-Dickstein

While deep learning models have replaced hand-designed features across many domains, these models are still trained with hand-designed optimizers.

Adaptive Robust Model Predictive Control via Uncertainty Cancellation

no code implementations 2 Dec 2022 Rohan Sinha, James Harrison, Spencer M. Richards, Marco Pavone

We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component.

Meta-Learning Model Predictive Control

General-Purpose In-Context Learning by Meta-Learning Transformers

no code implementations 8 Dec 2022 Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz

We further show that the capabilities of meta-trained algorithms are bottlenecked by the accessible state size (memory) determining the next prediction, unlike standard models which are thought to be bottlenecked by parameter count.

In-Context Learning Inductive Bias +1

Hybrid Multi-agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems

1 code implementation 14 Dec 2022 Tobias Enders, James Harrison, Marco Pavone, Maximilian Schiffer

We consider the sequential decision-making problem of making proactive request assignment and rejection decisions for a profit-maximizing operator of an autonomous mobility on demand system.

Decision Making reinforcement-learning +1

Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies

1 code implementation NeurIPS 2023 Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz

Unrolled computation graphs are prevalent throughout machine learning but present challenges to automatic differentiation (AD) gradient estimation methods when their loss functions exhibit extreme local sensitivity, discontinuity, or black-box characteristics.
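
For reference, a minimal antithetic evolution-strategies gradient estimator for a black-box loss, of the kind the online and noise-reuse variants build on. This is the vanilla estimator only, not the paper's noise-reuse scheme; the example loss is a hypothetical non-smooth function where AD would struggle.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_grad(loss, theta, sigma=0.1, n_pairs=32):
    """Antithetic ES estimate of the gradient of a Gaussian-smoothed loss."""
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.normal(size=theta.shape)
        grad += (loss(theta + sigma * eps) - loss(theta - sigma * eps)) / (2 * sigma) * eps
    return grad / n_pairs

# Example: a non-smooth, partly discontinuous loss.
loss = lambda th: np.abs(th).sum() + (th[0] > 0.5)
theta = np.ones(3)
for _ in range(200):
    theta -= 0.05 * es_grad(loss, theta)
print(theta)
```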

Graph Reinforcement Learning for Network Control via Bi-Level Optimization

1 code implementation 16 May 2023 Daniele Gammelli, James Harrison, Kaidi Yang, Marco Pavone, Filipe Rodrigues, Francisco C. Pereira

Optimization problems over dynamic networks have been extensively studied and widely used in the past decades to formulate numerous real-world problems.

reinforcement-learning

Universal Neural Functionals

1 code implementation 7 Feb 2024 Allan Zhou, Chelsea Finn, James Harrison

A challenging problem in many modern machine learning tasks is to process weight-space features, i.e., to transform or extract information from the weights and gradients of a neural network.

Risk-Sensitive Soft Actor-Critic for Robust Deep Reinforcement Learning under Distribution Shifts

1 code implementation 15 Feb 2024 Tobias Enders, James Harrison, Maximilian Schiffer

We study the robustness of deep reinforcement learning algorithms against distribution shifts within contextual multi-stage stochastic combinatorial optimization problems from the operations research domain.

Combinatorial Optimization reinforcement-learning

Variational Bayesian Last Layers

1 code implementation 17 Apr 2024 James Harrison, John Willes, Jasper Snoek

We introduce a deterministic variational formulation for training Bayesian last layer neural networks.

Out-of-Distribution Detection Variational Inference
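
A sketch of prediction with a Bayesian last layer for regression: given penultimate-layer features and a Gaussian posterior over the last-layer weights, the predictive distribution is Gaussian in closed form. This illustrates the model class only, not the paper's deterministic variational training objective; the feature map and posterior below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                    # penultimate feature dimension
W_feat = rng.normal(size=D)              # stand-in for a trained feature network

def phi(x):
    return np.tanh(W_feat * x)

mu = rng.normal(size=D)                  # posterior mean of last-layer weights
L = 0.1 * rng.normal(size=(D, D))
Sigma = L @ L.T + 1e-3 * np.eye(D)       # posterior covariance (PSD by construction)
noise_var = 0.05                         # observation (aleatoric) noise

f = phi(0.3)
pred_mean = f @ mu
pred_var = noise_var + f @ Sigma @ f     # aleatoric + epistemic variance
print(pred_mean, pred_var)
```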
