Search Results for author: Dinesh Jayaraman

Found 40 papers, 15 papers with code

Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming

no code implementations 22 Jun 2022 Chuan Wen, Jianing Qian, Jierui Lin, Jiaye Teng, Dinesh Jayaraman, Yang Gao

Across applications spanning supervised classification and sequential control, deep learning has been reported to find "shortcut" solutions that fail catastrophically under minor changes in the data distribution.

Autonomous Driving Classification +5

How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression

1 code implementation 7 Jun 2022 Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, Osbert Bastani

Offline goal-conditioned reinforcement learning (GCRL) promises general-purpose skill learning in the form of reaching diverse goals from purely offline datasets.

reinforcement-learning

Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching

1 code implementation 4 Feb 2022 Yecheng Jason Ma, Andrew Shen, Dinesh Jayaraman, Osbert Bastani

We propose State Matching Offline DIstribution Correction Estimation (SMODICE), a novel and versatile regression-based offline imitation learning (IL) algorithm derived via state-occupancy matching.

Imitation Learning reinforcement-learning

Transferable Visual Control Policies Through Robot-Awareness

no code implementations ICLR 2022 Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data.

Fight fire with fire: countering bad shortcuts in imitation learning with good shortcuts

no code implementations 29 Sep 2021 Chuan Wen, Jianing Qian, Jierui Lin, Dinesh Jayaraman, Yang Gao

When operating in partially observed settings, it is important for a control policy to fuse information from a history of observations.

Autonomous Driving Continuous Control +2

Know Thyself: Transferable Visuomotor Control Through Robot-Awareness

no code implementations 19 Jul 2021 Edward S. Hu, Kun Huang, Oleh Rybkin, Dinesh Jayaraman

Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers, even permitting zero-shot transfer onto new robots for the very first time.

Camera Calibration

Conservative Offline Distributional Reinforcement Learning

1 code implementation NeurIPS 2021 Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani

We prove that CODAC learns a conservative return distribution; in particular, for finite MDPs, CODAC converges to a uniform lower bound on the quantiles of the return distribution. Our proof relies on a novel analysis of the distributional Bellman operator.

Distributional Reinforcement Learning Offline RL +2
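The flavor of that quantile lower-bound result can be sketched in a toy setting (this is an illustration, not the paper's algorithm): in a single-state MDP, regress quantile estimates toward Bellman targets while subtracting a small conservatism penalty each step. The constants below (N_QUANTILES, LR, PENALTY) are made-up illustrative values.

```python
import numpy as np

# Toy sketch: with reward 1 and discount 0.99, the true return is
# 1 / (1 - 0.99) = 100. Subtracting a penalty from every Bellman-style
# quantile update drives the estimates to a uniform lower bound on the
# true return instead of the true value itself.
N_QUANTILES = 5
GAMMA = 0.99
LR = 0.5
PENALTY = 0.1  # conservatism term, pushes estimates down

def conservative_update(q, reward):
    target = reward + GAMMA * q          # per-quantile Bellman target
    return q + LR * (target - q) - PENALTY

q = np.zeros(N_QUANTILES)
for _ in range(5000):
    q = conservative_update(q, reward=1.0)

# Fixed point: (1 - PENALTY / LR) / (1 - GAMMA) = 80, below the true 100.
```

All quantiles converge to 80, a uniform pessimistic estimate of the true return of 100.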

Keyframe-Focused Visual Imitation Learning

no code implementations 11 Jun 2021 Chuan Wen, Jierui Lin, Jianing Qian, Yang Gao, Dinesh Jayaraman

Imitation learning trains control policies by mimicking pre-recorded expert demonstrations.

Continuous Control Graph Learning +1

How Are Learned Perception-Based Controllers Impacted by the Limits of Robust Control?

1 code implementation 2 Apr 2021 Jingxi Xu, Bruce Lee, Nikolai Matni, Dinesh Jayaraman

The difficulty of optimal control problems has classically been characterized in terms of system properties such as minimum eigenvalues of controllability/observability gramians.

Likelihood-Based Diverse Sampling for Trajectory Forecasting

1 code implementation ICCV 2021 Yecheng Jason Ma, Jeevana Priya Inala, Dinesh Jayaraman, Osbert Bastani

We propose Likelihood-Based Diverse Sampling (LDS), a method for improving the quality and the diversity of trajectory samples from a pre-trained flow model.

Trajectory Forecasting

Model-Based Inverse Reinforcement Learning from Visual Demonstrations

no code implementations 18 Oct 2020 Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier

Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem.

reinforcement-learning

Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings

1 code implementation ICML 2020 Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman

Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment.

reinforcement-learning

Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors

1 code implementation NeurIPS 2020 Karl Pertsch, Oleh Rybkin, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

In this work we propose a framework for visual prediction and planning that is able to overcome both of these limitations.

An Exploration of Embodied Visual Exploration

1 code implementation 7 Jan 2020 Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman

Embodied computer vision considers perception for robots in novel, unstructured environments.

Computer Vision

Morphology-Agnostic Visual Robotic Control

no code implementations 31 Dec 2019 Brian Yang, Dinesh Jayaraman, Glen Berseth, Alexei Efros, Sergey Levine

Existing approaches for visuomotor robotic control typically require characterizing the robot in advance by calibrating the camera or performing system identification.

Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents

no code implementations 25 Sep 2019 Jesse Zhang, Brian Cheung, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?

Domain Adaptation Meta Reinforcement Learning +1

Goal-Conditioned Video Prediction

no code implementations 25 Sep 2019 Oleh Rybkin, Karl Pertsch, Frederik Ebert, Dinesh Jayaraman, Chelsea Finn, Sergey Levine

Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video.

Imitation Learning Video Generation +1

Emergence of Exploratory Look-Around Behaviors through Active Observation Completion

1 code implementation Science Robotics 2019 Santhosh K. Ramakrishnan, Dinesh Jayaraman, Kristen Grauman

Standard computer vision systems assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is a major challenge in itself.

Active Observation Completion Computer Vision

Causal Confusion in Imitation Learning

1 code implementation NeurIPS 2019 Pim de Haan, Dinesh Jayaraman, Sergey Levine

Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment.

Imitation Learning

REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning

no code implementations 17 May 2019 Brian Yang, Jesse Zhang, Vitchyr Pong, Sergey Levine, Dinesh Jayaraman

We envision REPLAB as a framework for reproducible research across manipulation tasks, and as a step in this direction, we define a template for a grasping benchmark consisting of a task definition, evaluation protocol, performance measures, and a dataset of 92k grasp attempts.

Computer Vision Machine Translation +1
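The benchmark template the entry describes (task definition, evaluation protocol, performance measures) can be pictured as a small structured object. This sketch is purely hypothetical; the field names are illustrative and are not REPLAB's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a grasping-benchmark template bundling a task
# definition, an evaluation protocol, and performance measures.
@dataclass
class GraspBenchmark:
    task: str                      # task definition
    n_eval_trials: int             # evaluation protocol: trials per run
    metrics: list = field(default_factory=lambda: ["success_rate"])

    def score(self, successes: int) -> float:
        """Primary performance measure: fraction of successful grasps."""
        return successes / self.n_eval_trials

bench = GraspBenchmark(task="tabletop-grasping", n_eval_trials=100)
rate = bench.score(successes=62)
```

A shared template like this is what makes results reproducible across labs: every run reports the same measure under the same protocol.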

Manipulation by Feel: Touch-Based Control with Deep Predictive Models

no code implementations 11 Mar 2019 Stephen Tian, Frederik Ebert, Dinesh Jayaraman, Mayur Mudigonda, Chelsea Finn, Roberto Calandra, Sergey Levine

Touch sensing is widely acknowledged to be important for dexterous robotic manipulation, but exploiting tactile sensing for continuous, non-prehensile manipulation is challenging.

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

no code implementations 28 May 2018 Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

This model, a deep multimodal convolutional network, predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions.

Robotic Grasping
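The predict-then-select loop described above can be sketched minimally: score a batch of candidate grasp adjustments with an outcome predictor and execute the most promising one. The predictor below is a stand-in function with a made-up TARGET; in the paper it is a learned multimodal (vision + touch) network.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.2, -0.1])  # made-up "good" adjustment for the stand-in

def predict_outcome(adjustments):
    """Stand-in for the learned model: closer to TARGET scores higher."""
    return -np.linalg.norm(adjustments - TARGET, axis=1)

def choose_adjustment(n_candidates=64, scale=0.3):
    """Sample candidate adjustments, score them, pick the best."""
    candidates = rng.uniform(-scale, scale, size=(n_candidates, 2))
    return candidates[predict_outcome(candidates).argmax()]

best = choose_adjustment()
```

In a real regrasping loop this selection step would repeat after each execution, with fresh sensory feedback updating the predictor's inputs.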

ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids

no code implementations ECCV 2018 Dinesh Jayaraman, Ruohan Gao, Kristen Grauman

We introduce an unsupervised feature learning approach that embeds 3D shape information into a single-view image representation.

Object Recognition

Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks

2 code implementations CVPR 2018 Dinesh Jayaraman, Kristen Grauman

It is common to implicitly assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is itself a major challenge.

Pano2Vid: Automatic Cinematography for Watching 360° Videos

no code implementations 7 Dec 2016 Yu-Chuan Su, Dinesh Jayaraman, Kristen Grauman

AutoCam leverages NFOV web video to discriminatively identify space-time "glimpses" of interest at each time instant, and then uses dynamic programming to select optimal human-like camera trajectories.
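The dynamic-programming step described, choosing one glimpse per time instant so that total interest is high while the virtual camera moves smoothly, is a Viterbi-style recursion. The sketch below is a generic illustration of that idea, with made-up interest scores and an absolute-difference smoothness cost standing in for AutoCam's actual trajectory model.

```python
import numpy as np

def select_trajectory(scores, smooth_weight=1.0):
    """scores: (T, K) interest of K candidate glimpses at T timesteps.
    Returns the glimpse-index sequence maximizing total interest minus
    the smoothness cost |k_t - k_{t-1}| between consecutive choices."""
    T, K = scores.shape
    best = scores[0].copy()               # best value ending at each glimpse
    back = np.zeros((T, K), dtype=int)    # backpointers for path recovery
    for t in range(1, T):
        # trans[j, k]: value of arriving at glimpse k from glimpse j
        trans = best[:, None] - smooth_weight * np.abs(
            np.arange(K)[:, None] - np.arange(K)[None, :])
        back[t] = trans.argmax(axis=0)
        best = trans.max(axis=0) + scores[t]
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy interest scores for 3 timesteps and 3 candidate glimpses.
scores = np.array([[1.0, 0.0, 0.0],
                   [0.9, 0.0, 1.0],
                   [0.0, 0.0, 2.0]])
```

With the smoothness penalty on, the DP stays on glimpse 2 throughout rather than chasing the early high score at glimpse 0; with the penalty off, it greedily jumps between glimpses.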

Object-Centric Representation Learning from Unlabeled Videos

no code implementations 1 Dec 2016 Ruohan Gao, Dinesh Jayaraman, Kristen Grauman

Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances.

Image Classification Representation Learning

Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion

no code implementations 30 Apr 2016 Dinesh Jayaraman, Kristen Grauman

To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent's motions on its internal representation of the environment conditional on all past views.

Slow and steady feature analysis: higher order temporal coherence in video

no code implementations CVPR 2016 Dinesh Jayaraman, Kristen Grauman

While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes.

Action Recognition
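The contrast the abstract draws, between features that merely change slowly and features whose changes are themselves steady, can be illustrated with first- versus second-order temporal difference penalties. This is a simplified illustration of the general idea, not the paper's training objective.

```python
import numpy as np

def slowness(z):
    """First-order temporal coherence: sum of squared first differences.
    Penalizes any change in the features over time."""
    return float(np.sum(np.diff(z, axis=0) ** 2))

def steadiness(z):
    """Second-order coherence: sum of squared second differences.
    Penalizes only changes in *how* the features change."""
    return float(np.sum(np.diff(z, n=2, axis=0) ** 2))

# A feature that drifts at a constant rate across 5 frames.
z_linear = np.arange(5, dtype=float)[:, None]
```

A constant-velocity feature trajectory is charged by the slowness penalty but costs nothing under the second-order penalty, which is the sense in which a higher-order criterion captures how content changes rather than just that it changes.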

Learning image representations tied to ego-motion

1 code implementation ICCV 2015 Dinesh Jayaraman, Kristen Grauman

Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images.

Autonomous Driving Scene Recognition

Zero-shot recognition with unreliable attributes

no code implementations NeurIPS 2014 Dinesh Jayaraman, Kristen Grauman

In principle, zero-shot learning makes it possible to train an object recognition model simply by specifying the category's attributes.

Object Recognition Zero-Shot Learning

Zero Shot Recognition with Unreliable Attributes

no code implementations 15 Sep 2014 Dinesh Jayaraman, Kristen Grauman

In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes.

Zero-Shot Learning

Decorrelating Semantic Visual Attributes by Resisting the Urge to Share

no code implementations CVPR 2014 Dinesh Jayaraman, Fei Sha, Kristen Grauman

Existing methods to learn visual attributes are prone to learning the wrong thing, namely properties that are correlated with the attribute of interest among training samples.

Multi-Task Learning
