no code implementations • 6 Jun 2021 • Arvind U. Raghunathan, Devesh K. Jha, Diego Romeres

PYROBOCOP is a lightweight Python-based package for control and optimization of robotic systems described by nonlinear Differential Algebraic Equations (DAEs).

no code implementations • 20 Mar 2021 • Devesh K. Jha

On the other hand, estimating the memory of the symbolic sequence helps to extract predictive patterns in the discretized data.

no code implementations • 16 Feb 2021 • Kei Ota, Devesh K. Jha, Asako Kanezaki

Previous work has shown that this is mostly due to instability during training of deep RL agents when using larger networks.

no code implementations • 14 Nov 2020 • Kei Ota, Devesh K. Jha, Diego Romeres, Jeroen van Baar, Kevin A. Smith, Takayuki Semitsu, Tomoaki Oiki, Alan Sullivan, Daniel Nikovski, Joshua B. Tenenbaum

The physics engine augmented with the residual model is then used to control the marble in the maze environment using model-predictive feedback over a receding horizon.
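The receding-horizon control loop described above can be sketched generically: a nominal physics step plus a learned residual form the predictive model, and a sampling-based planner applies only the first action of the best candidate sequence before re-planning. All functions and dynamics below are hypothetical stand-ins for illustration, not the paper's actual models.

```python
import numpy as np

def nominal_dynamics(state, action):
    # Stand-in for a physics-engine step (hypothetical linear model).
    return state + 0.1 * action

def residual_model(state, action):
    # Stand-in for a learned residual correcting the physics engine.
    return 0.01 * np.tanh(state)

def predict(state, action):
    # Augmented model: physics engine plus learned residual.
    return nominal_dynamics(state, action) + residual_model(state, action)

def mpc_action(state, goal, horizon=10, n_samples=256, rng=None):
    """Sample action sequences, roll out the augmented model, and
    return the first action of the lowest-cost sequence."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_cost, best_first = np.inf, 0.0
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        s, cost = state, 0.0
        for a in seq:
            s = predict(s, a)
            cost += (s - goal) ** 2
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Receding horizon: apply only the first action, then re-plan.
state, goal = 0.0, 1.0
for t in range(40):
    a = mpc_action(state, goal)
    state = predict(state, a)  # in practice, step the real system
```

The key design choice is that the model is only trusted for one step at execution time; re-planning at every step lets the learned residual absorb model mismatch.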

no code implementations • 31 Oct 2020 • Kei Ota, Devesh K. Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko Sasaki, Toshisada Mariyama, Daniel Nikovski

The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution.

no code implementations • 22 Jul 2020 • Yifang Liu, Diego Romeres, Devesh K. Jha, Daniel Nikovski

One of the main challenges in peg-in-a-hole (PiH) insertion tasks is in handling the uncertainty in the location of the target hole.

no code implementations • 26 Mar 2020 • Wenyu Zhang, Skyler Seto, Devesh K. Jha

The purpose of these agents is to quickly adapt and/or generalize their notion of physics of interaction in the real world based on certain features about the interacting objects that provide different contexts to the predictive models.

no code implementations • ICML 2020 • Kei Ota, Tomoaki Oiki, Devesh K. Jha, Toshisada Mariyama, Daniel Nikovski

We believe that stronger feature propagation together with larger networks (and thus larger search space) allows RL agents to learn more complex functions of states and thus improves the sample efficiency.

no code implementations • 3 Mar 2020 • Kei Ota, Yoko Sasaki, Devesh K. Jha, Yusuke Yoshiyasu, Asako Kanezaki

Specifically, we train a deep convolutional network that can predict collision-free paths based on a map of the environment; this is then used by a reinforcement learning algorithm to learn to closely follow the path.

no code implementations • 25 Feb 2020 • Alberto Dalla Libera, Diego Romeres, Devesh K. Jha, Bill Yerazunis, Daniel Nikovski

In this paper, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR).
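As a generic illustration of GPR-based model learning for RL (not the paper's derivative-free framework itself), one can fit a Gaussian process to observed (state, action) → next-state transitions and obtain predictions with uncertainty estimates; the system and data here are toy assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Collect transitions from a toy system: s' = s + 0.1*a + noise.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(200, 1))
actions = rng.uniform(-1, 1, size=(200, 1))
next_states = states + 0.1 * actions + 0.01 * rng.normal(size=(200, 1))

# Inputs are (state, action) pairs; target is the next state.
X = np.hstack([states, actions])
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X, next_states.ravel())

# Predictions come with uncertainty, useful for cautious planning.
mean, std = gpr.predict(np.array([[0.5, 0.2]]), return_std=True)
```

The predictive standard deviation is what makes GPR attractive for model-based RL: a planner can penalize actions that lead into regions where the model is uncertain.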

no code implementations • 27 Jan 2020 • Wenyu Zhang, Devesh K. Jha, Emil Laftchiev, Daniel Nikovski

In the most general setting of these types of problems, one or more samples of data across multiple time series can be assigned several concurrent fault labels from a finite, known set, and the task is to predict the likelihood of fault occurrence over a desired time horizon.
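The multi-label setting above (several concurrent fault labels per sample) can be sketched with a generic one-classifier-per-label scheme; the features, fault rules, and model choice here are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy features extracted from multiple time series; each sample may
# carry several concurrent fault labels (multi-label setting).
X = rng.normal(size=(300, 4))
Y = np.column_stack([
    (X[:, 0] + X[:, 1] > 0).astype(int),   # hypothetical fault A
    (X[:, 2] > 0.5).astype(int),           # hypothetical fault B
])

# One binary classifier per fault label; labels may co-occur.
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
probs = clf.predict_proba(X[:5])  # per-label fault probabilities
```

Each label gets its own probability, so the "possibility of fault occurrence" can be thresholded independently per fault type.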

no code implementations • 22 Jan 2020 • Patrik Kolaric, Devesh K. Jha, Arvind U. Raghunathan, Frank L. Lewis, Mouhacine Benosman, Diego Romeres, Daniel Nikovski

Motivated by these problems, we try to formulate the problem of trajectory optimization and local policy synthesis as a single optimization problem.

no code implementations • 3 Jul 2019 • Ankush Chakrabarty, Devesh K. Jha, Gregery T. Buzzard, Yebin Wang, Kyriakos Vamvoudakis

We develop a method for obtaining safe initial policies for reinforcement learning via approximate dynamic programming (ADP) techniques for uncertain systems evolving with discrete-time dynamics.

no code implementations • 15 May 2019 • Arvind U. Raghunathan, Anoop Cherian, Devesh K. Jha

To this end, we introduce the Gradient-based Nikaido-Isoda (GNI) function, which (i) serves as a merit function, vanishing only at the first-order stationary points of each player's optimization problem, and (ii) provides error bounds to a stationary Nash point.
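One common form of such a merit function measures how much each player's cost drops after a single gradient step on its own variable; it is zero exactly when every player's own gradient vanishes. The two-player quadratic game and step size below are hypothetical illustrations, not taken from the paper:

```python
eta = 0.1  # illustrative gradient step size

def f1(x1, x2):  # player 1's cost (hypothetical quadratic game)
    return x1**2 - x1 * x2

def f2(x1, x2):  # player 2's cost
    return x2**2 + x1 * x2

def gni(x1, x2):
    """Merit function: total improvement each player gains from one
    gradient step on its own variable, holding the other fixed."""
    g1 = 2 * x1 - x2          # d f1 / d x1
    g2 = 2 * x2 + x1          # d f2 / d x2
    return ((f1(x1, x2) - f1(x1 - eta * g1, x2))
            + (f2(x1, x2) - f2(x1, x2 - eta * g2)))

# Vanishes at the stationary Nash point (0, 0), positive elsewhere,
# so it can be driven to zero by descent methods.
```

Because the merit is built from each player's own gradient, minimizing it steers all players toward their first-order stationary points simultaneously.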

no code implementations • 13 Mar 2019 • Kei Ota, Devesh K. Jha, Tomoaki Oiki, Mamoru Miura, Takashi Nammoto, Daniel Nikovski, Toshisada Mariyama

Our experiments show that our RL agent trained with a reference path outperformed a model-free PID controller of the type commonly used on many robotic platforms for trajectory tracking.
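For context, the PID baseline referenced above is the standard textbook controller; a minimal discrete sketch on a hypothetical first-order plant (not the robotic platform from the paper) looks like this:

```python
class PID:
    """Minimal discrete PID controller (generic textbook form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Track a constant reference on a toy first-order plant x' = u.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
x = 0.0
for _ in range(200):
    u = pid.step(1.0, x)
    x += u * pid.dt
```

A PID controller is model-free in the sense used above: it reacts only to the instantaneous tracking error, with no knowledge of the plant dynamics, which is why a learned policy with a reference path can outperform it on harder trajectories.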

no code implementations • 26 Sep 2017 • Devesh K. Jha, Nurali Virani, Jan Reimann, Abhishek Srivastav, Asok Ray

In the second example, the data set is taken from NASA's data repository for prognostics of bearings on rotating shafts.
