Search Results for author: James Harrison

Found 16 papers, 7 papers with code

Practical tradeoffs between memory, compute, and performance in learned optimizers

1 code implementation · 22 Mar 2022 · Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein

We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work.

Graph Meta-Reinforcement Learning for Transferable Autonomous Mobility-on-Demand

1 code implementation · 15 Feb 2022 · Daniele Gammelli, Kaidi Yang, James Harrison, Filipe Rodrigues, Francisco C. Pereira, Marco Pavone

Autonomous Mobility-on-Demand (AMoD) systems represent an attractive alternative to existing transportation paradigms, currently challenged by urbanization and increasing travel needs.

Meta Reinforcement Learning · reinforcement-learning

On the Problem of Reformulating Systems with Uncertain Dynamics as a Stochastic Differential Equation

no code implementations · 11 Nov 2021 · Thomas Lew, Apoorva Sharma, James Harrison, Edward Schmerling, Marco Pavone

We identify an issue in recent approaches to learning-based control that reformulate systems with uncertain dynamics using a stochastic differential equation.

Bayesian Embeddings for Few-Shot Open World Recognition

no code implementations · 29 Jul 2021 · John Willes, James Harrison, Ali Harakeh, Chelsea Finn, Marco Pavone, Steven Waslander

As autonomous decision-making agents move from narrow operating environments to unstructured worlds, learning systems must move from a closed-world formulation to an open-world and few-shot setting in which agents continuously learn new classes from small amounts of information.

Decision Making · Few-Shot Learning

Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems

1 code implementation · 23 Apr 2021 · Daniele Gammelli, Kaidi Yang, James Harrison, Filipe Rodrigues, Francisco C. Pereira, Marco Pavone

Autonomous mobility-on-demand (AMoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of robotic, self-driving vehicles.

Decision Making · reinforcement-learning

Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty

no code implementations · 16 Apr 2021 · Rohan Sinha, James Harrison, Spencer M. Richards, Marco Pavone

We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component.

Particle MPC for Uncertain and Learning-Based Control

no code implementations · 6 Apr 2021 · Robert Dyro, James Harrison, Apoorva Sharma, Marco Pavone

As robotic systems move from highly structured environments to open worlds, incorporating uncertainty from dynamics learning or state estimation into the control pipeline is essential for robust performance.

Model-based Reinforcement Learning

Sparse Longitudinal Representations of Electronic Health Record Data for the Early Detection of Chronic Kidney Disease in Diabetic Patients

no code implementations · 9 Nov 2020 · Jinghe Zhang, Kamran Kowsari, Mehdi Boukhechba, James Harrison, Jennifer Lobo, Laura Barnes

Chronic kidney disease (CKD) is a gradual loss of renal function over time, and it increases the risk of mortality and decreased quality of life, as well as serious complications.

Safe Active Dynamics Learning and Control: A Sequential Exploration-Exploitation Framework

no code implementations · 26 Aug 2020 · Thomas Lew, Apoorva Sharma, James Harrison, Andrew Bylard, Marco Pavone

In this work, we propose a practical and theoretically-justified approach to maintaining safety in the presence of dynamics uncertainty.

Meta-Learning · Meta Reinforcement Learning

Deep Reinforcement Learning amidst Lifelong Non-Stationarity

no code implementations · ICML Workshop LifelongML 2020 · Annie Xie, James Harrison, Chelsea Finn

As humans, our goals and our environment are persistently changing throughout our lifetime based on our experiences, actions, and internal and external drives.

online learning · reinforcement-learning

Continuous Meta-Learning without Tasks

1 code implementation · NeurIPS 2020 · James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone

In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task.

Image Classification · Meta-Learning · +2

Network Offloading Policies for Cloud Robotics: a Learning-based Approach

no code implementations · 15 Feb 2019 · Sandeep Chinchali, Apoorva Sharma, James Harrison, Amine Elhafsi, Daniel Kang, Evgenya Pergament, Eyal Cidon, Sachin Katti, Marco Pavone

In this paper, we formulate a novel Robot Offloading Problem: how and when should robots offload sensing tasks, especially if they are uncertain, to improve accuracy while minimizing the cost of cloud communication?

Decision Making · object-detection · +1

Robust and Adaptive Planning under Model Uncertainty

no code implementations · 9 Jan 2019 · Apoorva Sharma, James Harrison, Matthew Tsao, Marco Pavone

The first, RAMCP-F, converges to an optimal risk-sensitive policy without having to rebuild the search tree as the underlying belief over models is perturbed.

Decision Making

Meta-Learning Priors for Efficient Online Bayesian Regression

3 code implementations · 24 Jul 2018 · James Harrison, Apoorva Sharma, Marco Pavone

However, this approach suffers from two main drawbacks: (1) it is computationally inefficient, as computation scales poorly with the number of samples; and (2) it can be data inefficient, as encoding prior knowledge that can aid the model through the choice of kernel and associated hyperparameters is often challenging and unintuitive.
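In contrast to kernel methods, Bayesian linear regression on a feature map admits recursive updates with per-sample cost independent of the dataset size. The sketch below illustrates that building block only; the fixed polynomial features and unit-Gaussian prior here are illustrative stand-ins for the meta-learned feature network and prior of the paper.

```python
import numpy as np

def phi(x):
    # Illustrative fixed feature map; the paper meta-learns this mapping.
    return np.array([1.0, x, x**2])

d = 3
Lam = np.eye(d)      # prior precision over last-layer weights
q = np.zeros(d)      # precision-weighted mean (prior mean is zero)
sigma2 = 0.1         # observation-noise variance

def update(x, y):
    # Recursive Bayesian update: O(d^2) per sample, independent of the
    # number of samples seen so far (unlike GP regression).
    global Lam, q
    f = phi(x)
    Lam = Lam + np.outer(f, f) / sigma2
    q = q + f * y / sigma2

def predict(x):
    mean_w = np.linalg.solve(Lam, q)   # posterior mean weights
    return phi(x) @ mean_w

# Stream observations of y = 2x and query the posterior predictive mean.
for x in np.linspace(-1.0, 1.0, 50):
    update(x, 2.0 * x)
print(float(predict(0.5)))
```

After 50 samples the posterior mean prediction at x = 0.5 is close to the true value 1.0, with slight shrinkage from the prior.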

Meta-Learning

BaRC: Backward Reachability Curriculum for Robotic Reinforcement Learning

1 code implementation · 16 Jun 2018 · Boris Ivanovic, James Harrison, Apoorva Sharma, Mo Chen, Marco Pavone

Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance.
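The backward-expansion loop can be sketched on a toy 1-D chain: train from starts near the goal, verify the policy succeeds, then push the start set one dynamics step backward. Everything here is a hand-rolled illustration; BaRC itself expands via approximate backward reachable sets of the true dynamics, not random predecessors.

```python
import random

random.seed(0)
GOAL = 0  # toy 1-D chain: integer positions, goal at 0, actions move +/-1

def backward_step(s):
    # A dynamically consistent predecessor: some state reaching s in one action.
    return s + random.choice([-1, 1])

def rollout_success(start, policy, max_steps=20):
    s = start
    for _ in range(max_steps):
        s += policy(s)
        if s == GOAL:
            return True
    return False

# Stand-in for a policy produced by an RL algorithm at each curriculum stage.
policy = lambda s: -1 if s > 0 else 1

starts = {GOAL}
for _ in range(5):  # curriculum iterations
    # "Train" until the policy succeeds from the current start set
    # (trivially true for this hand-coded policy), then expand backward.
    assert all(rollout_success(s, policy) for s in starts)
    starts = {backward_step(s) for s in starts}

# Each expansion moves starts at most one step, so the set stays within radius 5.
print(max(abs(s) for s in starts) <= 5)
```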

Continuous Control · reinforcement-learning

Learning Sampling Distributions for Robot Motion Planning

2 code implementations · 16 Sep 2017 · Brian Ichter, James Harrison, Marco Pavone

This paper proposes a methodology for non-uniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling.
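The key idea is to draw a fraction of a sampling-based planner's samples from a distribution fit to demonstrations, while keeping a uniform component so the planner can still cover the whole space. In the sketch below a 1-D Gaussian fit stands in for the learned model (the paper uses a conditional variational autoencoder), and all names and numbers are illustrative.

```python
import random

random.seed(0)

def fit_gaussian(demos):
    # Fit a 1-D Gaussian to demonstration states (stand-in for a learned model).
    mu = sum(demos) / len(demos)
    var = sum((x - mu) ** 2 for x in demos) / len(demos)
    return mu, var ** 0.5

def sample(mu, sigma, low, high, bias=0.5):
    # Mix learned and uniform sampling; the uniform component preserves
    # coverage of the full configuration space.
    if random.random() < bias:
        return min(max(random.gauss(mu, sigma), low), high)  # learned
    return random.uniform(low, high)                          # uniform

demos = [2.0, 2.2, 1.8, 2.1]   # demonstration states, illustrative
mu, sigma = fit_gaussian(demos)
samples = [sample(mu, sigma, 0.0, 10.0) for _ in range(1000)]
near = sum(1 for s in samples if abs(s - mu) < 1.0) / len(samples)
print(near)
```

With a 50/50 mix, well over half the samples land near the demonstrated states while the rest still explore the full range.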

Motion Planning
