Search Results for author: Animesh Garg

Found 100 papers, 36 papers with code

Weakly supervised 3D Reconstruction with Adversarial Constraint

2 code implementations • 31 May 2017 • JunYoung Gwak, Christopher B. Choy, Animesh Garg, Manmohan Chandraker, Silvio Savarese

Supervised 3D reconstruction has witnessed significant progress through the use of deep neural networks.

3D Reconstruction

Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger

1 code implementation • 22 Aug 2021 • Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, Animesh Garg

We present a system for learning a challenging dexterous manipulation task involving moving a cube to an arbitrary 6-DoF pose with only three fingers, trained with NVIDIA's IsaacGym simulator.

Position

Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation

1 code implementation • 18 Mar 2021 • Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, Animesh Garg

A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles.

Object

Solving Physics Puzzles by Reasoning about Paths

2 code implementations • 14 Nov 2020 • Augustin Harter, Andrew Melnik, Gaurav Kumar, Dhruv Agarwal, Animesh Garg, Helge Ritter

We propose a new deep learning model for goal-driven tasks that require intuitive physical reasoning and intervention in the scene to achieve a desired end goal.

Object

D2RL: Deep Dense Architectures in Reinforcement Learning

4 code implementations • 19 Oct 2020 • Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, Animesh Garg

While improvements in deep learning architectures have played a crucial role in improving the state of supervised and unsupervised learning in computer vision and natural language processing, neural network architecture choices for reinforcement learning remain relatively under-explored.

reinforcement-learning Reinforcement Learning (RL)
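The D2RL entry above hinges on one concrete architectural change: the raw observation is re-concatenated onto the input of every hidden layer of the policy and value MLPs. A minimal pure-Python sketch of that dense-connection idea (the class name, layer sizes, and initialization are illustrative, not the authors' code):

```python
import random

def matvec(w, x):
    """w: list of rows (out_dim x in_dim); x: input vector."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(v):
    return [max(0.0, u) for u in v]

def make_layer(in_dim, out_dim, rng):
    return [[rng.uniform(-0.1, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]

class D2RLPolicy:
    """MLP that concatenates the raw observation to every hidden layer's
    input -- the dense-connection idea behind D2RL (illustrative sketch)."""

    def __init__(self, obs_dim, hidden_dim, act_dim, n_layers=4, seed=0):
        rng = random.Random(seed)
        self.layers = []
        d = obs_dim
        for _ in range(n_layers):
            self.layers.append(make_layer(d, hidden_dim, rng))
            d = hidden_dim + obs_dim  # next layer also sees the observation
        self.out = make_layer(hidden_dim, act_dim, rng)

    def __call__(self, obs):
        h = obs
        for i, layer in enumerate(self.layers):
            h = relu(matvec(layer, h))
            if i < len(self.layers) - 1:
                h = h + obs  # list concatenation: dense skip from the input
        return matvec(self.out, h)

policy = D2RLPolicy(obs_dim=8, hidden_dim=64, act_dim=2)
action = policy([1.0] * 8)
print(len(action))  # 2
```

The design point is that each hidden layer retains direct access to the observation, so deeper networks do not lose the input signal.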

A Programmable Approach to Neural Network Compression

1 code implementation • 6 Nov 2019 • Vinu Joseph, Saurav Muralidharan, Animesh Garg, Michael Garland, Ganesh Gopalakrishnan

Deep neural networks (DNNs) frequently contain far more weights, represented at a higher precision, than are required for the specific task which they are trained to perform.

Bayesian Optimization Image Classification +3

X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval

1 code implementation • CVPR 2022 • Satya Krishna Gorti, Noel Vouitsis, Junwei Ma, Keyvan Golestan, Maksims Volkovs, Animesh Garg, Guangwei Yu

Instead, texts often capture sub-regions of entire videos and are most semantically similar to certain frames within videos.

Ranked #17 on Video Retrieval on LSMDC (using extra training data)

Retrieval Text to Video Retrieval +1

Counterfactual Data Augmentation using Locally Factored Dynamics

1 code implementation • NeurIPS 2020 • Silviu Pitis, Elliot Creager, Animesh Garg

Many dynamic processes, including common scenarios in robotic control and reinforcement learning (RL), involve a set of interacting subprocesses.

counterfactual Data Augmentation +5

SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models

1 code implementation • 12 Oct 2022 • Ziyi Wu, Nikita Dvornik, Klaus Greff, Thomas Kipf, Animesh Garg

While recent object-centric models can successfully decompose a scene into objects, modeling their dynamics effectively still remains a challenge.

Object Question Answering +2

Causal Discovery in Physical Systems from Videos

1 code implementation • NeurIPS 2020 • Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, Animesh Garg

We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions.

Causal Discovery counterfactual

Curriculum By Smoothing

2 code implementations • NeurIPS 2020 • Samarth Sinha, Animesh Garg, Hugo Larochelle

We propose to augment the training of CNNs by controlling the amount of high-frequency information propagated within the CNNs as training progresses, by convolving the feature-map output of each layer with a Gaussian kernel.

Image Classification Transfer Learning
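The Curriculum By Smoothing recipe is concrete enough to sketch: blur each feature map with a Gaussian kernel whose standard deviation is annealed toward zero as training progresses. A hedged 1-D sketch (the paper operates on 2-D CNN feature maps; the kernel radius and annealing schedule here are illustrative):

```python
import math

def gaussian_kernel(sigma, radius=2):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    if sigma <= 0:  # sigma -> 0 recovers the identity (no smoothing)
        return [1.0]
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(feature_map, sigma):
    """Convolve a 1-D feature map with the Gaussian kernel (zero padding)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = [0.0] * r + list(feature_map) + [0.0] * r
    return [sum(k[j] * padded[i + j] for j in range(len(k)))
            for i in range(len(feature_map))]

# Anneal sigma over "training": strong smoothing early, none late.
fmap = [0.0, 0.0, 1.0, 0.0, 0.0]
for step, sigma in enumerate([2.0, 1.0, 0.5, 0.0]):
    print(step, [round(v, 3) for v in smooth(fmap, sigma)])
```

Early in the schedule the spike is spread across its neighbors; by the final step the feature map passes through unchanged.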

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution

1 code implementation • 12 Jul 2021 • Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, Yoav Artzi

Natural language provides an accessible and expressive interface to specify long-term tasks for robotic agents.

Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning

1 code implementation • ICLR 2022 • Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang

We show that such OOD sampling and pessimistic bootstrapping yield a provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL.

D4RL Offline RL +3

Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition

1 code implementation • 18 May 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Animashree Anandkumar

Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.

Multi-agent Reinforcement Learning reinforcement-learning +3

Experience Replay with Likelihood-free Importance Weights

1 code implementation • 23 Jun 2020 • Samarth Sinha, Jiaming Song, Animesh Garg, Stefano Ermon

The use of past experiences to accelerate temporal difference (TD) learning of value functions, or experience replay, is a key component in deep reinforcement learning.

OpenAI Gym reinforcement-learning +1
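The experience-replay idea above can be sketched as a buffer that samples transitions non-uniformly. The paper estimates the sampling weights with a likelihood-free density-ratio model; in this illustrative sketch the weights are simply supplied by the caller:

```python
import random

class WeightedReplayBuffer:
    """Replay buffer that samples stored transitions in proportion to a
    per-transition weight (illustrative sketch; the paper learns these
    importance weights rather than taking them as given)."""

    def __init__(self):
        self.transitions = []
        self.weights = []

    def add(self, transition, weight=1.0):
        self.transitions.append(transition)
        self.weights.append(weight)

    def sample(self, batch_size, rng=random):
        # random.choices draws with replacement, proportional to weights.
        return rng.choices(self.transitions, weights=self.weights, k=batch_size)

buf = WeightedReplayBuffer()
buf.add(("s0", "a0", 0.0, "s1"), weight=0.1)
buf.add(("s1", "a1", 1.0, "s2"), weight=5.0)  # high weight: sampled more often
batch = buf.sample(4)
print(batch)
```

Transitions with higher estimated importance dominate the sampled batches, which is the mechanism the paper uses to accelerate TD learning.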

Value Iteration in Continuous Actions, States and Time

1 code implementation • 10 May 2021 • Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

This algorithm enables dynamic programming for continuous states and actions with a known dynamics model.

Robust Value Iteration for Continuous Control Tasks

1 code implementation • 25 May 2021 • Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

The adversarial perturbations encourage an optimal policy that is robust to changes in the dynamics.

Continuous Control reinforcement-learning +1

Continuous-Time Fitted Value Iteration for Robust Policies

1 code implementation • 5 Oct 2021 • Michael Lutter, Boris Belousov, Shie Mannor, Dieter Fox, Animesh Garg, Jan Peters

Especially for continuous control, solving this differential equation and its extension, the Hamilton-Jacobi-Isaacs equation, is important, as it yields the optimal policy that achieves the maximum reward on a given task.

Continuous Control

Principled Exploration via Optimistic Bootstrapping and Backward Induction

1 code implementation • 13 May 2021 • Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng Liu, Zhaoran Wang

In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I).

Efficient Exploration Reinforcement Learning (RL)

MoCoDA: Model-based Counterfactual Data Augmentation

1 code implementation • 20 Oct 2022 • Silviu Pitis, Elliot Creager, Ajay Mandlekar, Animesh Garg

To this end, we show that (1) known local structure in the environment transitions is sufficient for an exponential reduction in the sample complexity of training a dynamics model, and (2) a locally factored dynamics model provably generalizes out-of-distribution to unseen states and actions.

counterfactual Data Augmentation +2

Neural Task Programming: Learning to Generalize Across Hierarchical Tasks

1 code implementation • 4 Oct 2017 • Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, Silvio Savarese

In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction.

Few-Shot Learning Program induction +1

Diversity inducing Information Bottleneck in Model Ensembles

1 code implementation • 10 Mar 2020 • Samarth Sinha, Homanga Bharadhwaj, Anirudh Goyal, Hugo Larochelle, Animesh Garg, Florian Shkurti

Although deep learning models have achieved state-of-the-art performance on a number of vision tasks, generalization over high dimensional multi-modal data, and reliable predictive uncertainty estimation are still active areas of research.

Out-of-Distribution Detection

Dynamic Bottleneck for Robust Self-Supervised Exploration

1 code implementation • NeurIPS 2021 • Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang

Exploration methods based on pseudo-count of transitions or curiosity of dynamics have achieved promising results in solving reinforcement learning with sparse rewards.

Benchmarks for Physical Reasoning AI

1 code implementation • 17 Dec 2023 • Andrew Melnik, Robin Schiewer, Moritz Lange, Andrei Muresanu, Mozhgan Saeidi, Animesh Garg, Helge Ritter

Therefore, we aim to offer an overview of existing benchmarks and their solution approaches and propose a unified perspective for measuring the physical reasoning capacity of AI systems.

OCEAN: Online Task Inference for Compositional Tasks with Context Adaptation

1 code implementation • 17 Aug 2020 • Hongyu Ren, Yuke Zhu, Jure Leskovec, Anima Anandkumar, Animesh Garg

We propose a variational inference framework OCEAN to perform online task inference for compositional tasks.

Variational Inference

Centralized Model and Exploration Policy for Multi-Agent RL

1 code implementation • 14 Jul 2021 • Qizhen Zhang, Chris Lu, Animesh Garg, Jakob Foerster

We also learn a centralized exploration policy within our model that learns to collect additional data in state-action regions with high model uncertainty.

Reinforcement Learning (RL)

Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward

1 code implementation • 30 Apr 2023 • Zihan Zhou, Animesh Garg

We propose Structured Exploration with Achievements (SEA), a multi-stage reinforcement learning algorithm designed for achievement-based environments, a particular type of environment with an internal achievement set.

reinforcement-learning

C-Learning: Horizon-Aware Cumulative Accessibility Estimation

1 code implementation • ICLR 2021 • Panteha Naderian, Gabriel Loaiza-Ganem, Harry J. Braviner, Anthony L. Caterini, Jesse C. Cresswell, Tong Li, Animesh Garg

In order to address these limitations, we introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.

Continuous Control Motion Planning
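The cumulative accessibility function described above is easy to make concrete in a small deterministic MDP: A(s, g, h) is 1 when the goal g is reachable from state s within h steps, and it is non-decreasing in the horizon h. A toy sketch (the paper learns this function with neural networks; the chain MDP here is illustrative):

```python
def cumulative_accessibility(transitions, goal, max_horizon):
    """A[h][s] = 1.0 iff the goal is reachable from s within h steps.
    transitions: dict mapping each state to its list of successor states."""
    states = list(transitions)
    A = [{s: 1.0 if s == goal else 0.0 for s in states}]  # horizon 0
    for _ in range(max_horizon):
        prev = A[-1]
        A.append({
            s: 1.0 if s == goal
            else max((prev[t] for t in transitions[s]), default=0.0)
            for s in states
        })
    return A

# 4-state chain: 0 -> 1 -> 2 -> 3, where staying put is also allowed.
chain = {s: [s, min(s + 1, 3)] for s in range(4)}
A = cumulative_accessibility(chain, goal=3, max_horizon=3)
print([A[h][0] for h in range(4)])  # [0.0, 0.0, 0.0, 1.0]
```

State 0 only becomes "accessible to the goal" at horizon 3, and once a state is accessible it stays accessible at all larger horizons, which is the cumulative (monotone) property the paper exploits.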

Composing Meta-Policies for Autonomous Driving Using Hierarchical Deep Reinforcement Learning

no code implementations • 4 Nov 2017 • Richard Liaw, Sanjay Krishnan, Animesh Garg, Daniel Crankshaw, Joseph E. Gonzalez, Ken Goldberg

We explore how Deep Neural Networks can represent meta-policies that switch among a set of previously learned policies, specifically in settings where the dynamics of a new scenario are composed of a mixture of previously learned dynamics and where the state observation is possibly corrupted by sensing noise.

Autonomous Driving reinforcement-learning +1

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

no code implementations • 11 Aug 2017 • Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Choy, Silvio Savarese

We evaluate our approach on the ShapeNet dataset and show that: (a) the Free-Form Deformation layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet uses this FFD layer combined with shape retrieval for smooth and detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins.

3D Reconstruction 3D Shape Reconstruction +1

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

no code implementations • 25 Jun 2018 • Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese

We perform both simulated and real-world experiments on two tool-based manipulation tasks: sweeping and hammering.

Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

no code implementations • CVPR 2019 • De-An Huang, Suraj Nair, Danfei Xu, Yuke Zhu, Animesh Garg, Li Fei-Fei, Silvio Savarese, Juan Carlos Niebles

We hypothesize that to successfully generalize to unseen complex tasks from a single video demonstration, it is necessary to explicitly incorporate the compositional structure of the tasks into the model.

RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation

no code implementations • 7 Nov 2018 • Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei

Imitation Learning has empowered recent advances in learning robotic manipulation tasks by addressing shortcomings of Reinforcement Learning such as exploration and reward specification.

Imitation Learning

Finding "It": Weakly-Supervised Reference-Aware Visual Grounding in Instructional Videos

no code implementations • CVPR 2018 • De-An Huang, Shyamal Buch, Lucio Dery, Animesh Garg, Li Fei-Fei, Juan Carlos Niebles

In this work, we propose to tackle this new task with a weakly-supervised framework for reference-aware visual grounding in instructional videos, where only the temporal alignment between the transcription and the video segment are available for supervision.

Multiple Instance Learning Sentence +1

Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks

no code implementations • 20 Jun 2019 • Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, Animesh Garg

This paper studies the effect of different action spaces in deep RL and advocates for Variable Impedance Control in End-effector Space (VICES) as an advantageous action space for constrained and contact-rich tasks.

Reinforcement Learning (RL)

Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning

no code implementations • 16 Aug 2019 • De-An Huang, Danfei Xu, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei, Juan Carlos Niebles

The key technical challenge is that the symbol grounding is prone to error with limited training data and leads to subsequent symbolic planning failures.

Imitation Learning

Video Interpolation and Prediction with Unsupervised Landmarks

no code implementations • 6 Sep 2019 • Kevin J. Shih, Aysegul Dundar, Animesh Garg, Robert Pottorf, Andrew Tao, Bryan Catanzaro

Prediction and interpolation for long-range video data involves the complex task of modeling motion trajectories for each visible object, occlusions and dis-occlusions, as well as appearance changes due to viewpoint and lighting.

Motion Interpolation Optical Flow Estimation +1

Mechanical Search: Multi-Step Retrieval of a Target Object Occluded by Clutter

no code implementations • 4 Mar 2019 • Michael Danielczuk, Andrey Kurenkov, Ashwin Balakrishna, Matthew Matl, David Wang, Roberto Martín-Martín, Animesh Garg, Silvio Savarese, Ken Goldberg

In this paper, we formalize Mechanical Search and study a version where distractor objects are heaped over the target object in a bin.

Robotics

Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation

no code implementations • 29 Oct 2019 • Kuan Fang, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei

The fundamental challenge of planning for multi-step manipulation is to find effective and plausible action sequences that lead to the task goal.

Variational Inference

Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity

no code implementations • 11 Nov 2019 • Ajay Mandlekar, Jonathan Booher, Max Spero, Albert Tung, Anchit Gupta, Yuke Zhu, Animesh Garg, Silvio Savarese, Li Fei-Fei

We evaluate the quality of our platform, the diversity of demonstrations in our dataset, and the utility of our dataset via quantitative and qualitative analysis.

Robot Manipulation

IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data

no code implementations • 13 Nov 2019 • Ajay Mandlekar, Fabio Ramos, Byron Boots, Silvio Savarese, Li Fei-Fei, Animesh Garg, Dieter Fox

For simple short-horizon manipulation tasks with modest variation in task instances, offline learning from a small set of demonstrations can produce controllers that successfully solve the task.

Robot Manipulation

Motion Reasoning for Goal-Based Imitation Learning

no code implementations • 13 Nov 2019 • De-An Huang, Yu-Wei Chao, Chris Paxton, Xinke Deng, Li Fei-Fei, Juan Carlos Niebles, Animesh Garg, Dieter Fox

We further show that by using the automatically inferred goal from the video demonstration, our robot is able to reproduce the same task in a real kitchen environment.

Imitation Learning Motion Planning +1

InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers

no code implementations • 9 Dec 2019 • Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.

Conditional Image Generation Time Series +1

LEAF: Latent Exploration Along the Frontier

no code implementations • 21 May 2020 • Homanga Bharadhwaj, Animesh Garg, Florian Shkurti

We target the challenging problem of policy learning from initial and goal states specified as images, and do not assume any access to the underlying ground-truth states of the robot and the environment.

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning

no code implementations • 21 May 2020 • Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox

Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.

Uniform Priors for Data-Efficient Transfer

no code implementations • 30 Jun 2020 • Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg

Deep Neural Networks have shown great promise on a variety of downstream applications; but their ability to adapt and generalize to new data and tasks remains a challenge.

Domain Adaptation Meta-Learning +1

De-anonymization of authors through arXiv submissions during double-blind review

no code implementations • 1 Jul 2020 • Homanga Bharadhwaj, Dylan Turpin, Animesh Garg, Ashton Anderson

Under two conditions: papers that are released on arXiv before the review phase and papers that are not, we examine the correlation between the reputation of their authors with the review scores and acceptance decisions.

Visuomotor Mechanical Search: Learning to Retrieve Target Objects in Clutter

no code implementations • 13 Aug 2020 • Andrey Kurenkov, Joseph Taglic, Rohun Kulkarni, Marcus Dominguez-Kuhne, Animesh Garg, Roberto Martín-Martín, Silvio Savarese

When searching for objects in cluttered environments, it is often necessary to perform complex interactions to move occluding objects out of the way, fully reveal the object of interest, and make it graspable.

Object Reinforcement Learning (RL) +1

Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion

no code implementations • 21 Sep 2020 • Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Animashree Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg

We present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped (the Unitree Laikago).

reinforcement-learning Reinforcement Learning (RL)

Offline Policy Optimization with Variance Regularization

no code implementations • 1 Jan 2021 • Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Zhaoran Wang, Animesh Garg, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control Offline RL +1

A Coach-Player Framework for Dynamic Team Composition

no code implementations • 1 Jan 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Anima Anandkumar

The performance of our method is comparable or even better than the setting where all players have a full view of the environment, but no coach.

Zero-shot Generalization

Controlling Assistive Robots with Learned Latent Actions

no code implementations • 20 Sep 2019 • Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh

Our insight is that we can make assistive robots easier for humans to control by leveraging latent actions.

Robotics

Conservative Safety Critics for Exploration

no code implementations • ICLR 2021 • Homanga Bharadhwaj, Aviral Kumar, Nicholas Rhinehart, Sergey Levine, Florian Shkurti, Animesh Garg

Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial and error learning.

Reinforcement Learning (RL) Safe Exploration

Modular Action Concept Grounding in Semantic Video Prediction

no code implementations • CVPR 2022 • Wei Yu, Wenxin Chen, Songhenh Yin, Steve Easterbrook, Animesh Garg

Recent works in video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, which sidesteps the learning of interaction between agents and objects.

Action Recognition object-detection +4

Latent Skill Planning for Exploration and Transfer

no code implementations • ICLR 2021 • Kevin Xie, Homanga Bharadhwaj, Danijar Hafner, Animesh Garg, Florian Shkurti

To quickly solve new tasks in complex environments, intelligent agents need to build up reusable knowledge.

Emergent Hand Morphology and Control from Optimizing Robust Grasps of Diverse Objects

no code implementations • 22 Dec 2020 • Xinlei Pan, Animesh Garg, Animashree Anandkumar, Yuke Zhu

Through experimentation and comparative study, we demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.

Bayesian Optimization MORPH

Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos

no code implementations • 18 Jan 2021 • Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth Sinha, Animesh Garg

Learning from visual data opens the potential to accrue a large range of manipulation behaviors by leveraging human demonstrations without specifying each of them mathematically, but rather through natural task specification.

Keypoint Detection Robot Manipulation +1

S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning

no code implementations • 10 Mar 2021 • Samarth Sinha, Ajay Mandlekar, Animesh Garg

Offline reinforcement learning proposes to learn policies from large collected datasets without interacting with the physical environment.

Autonomous Driving D4RL +6

LASER: Learning a Latent Action Space for Efficient Reinforcement Learning

no code implementations • 29 Mar 2021 • Arthur Allshire, Roberto Martín-Martín, Charles Lin, Shawn Manuel, Silvio Savarese, Animesh Garg

Additionally, similar tasks or instances of the same task family impose latent manifold constraints on the most effective action space: the task family can be best solved with actions in a manifold of the entire action space of the robot.

reinforcement-learning Reinforcement Learning (RL)

GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model

no code implementations • 20 Apr 2021 • Zhaoming Xie, Xingye Da, Buck Babich, Animesh Garg, Michiel Van de Panne

Model-free reinforcement learning (RL) for legged locomotion commonly relies on a physics simulator that can accurately predict the behaviors of every degree of freedom of the robot.

Model Predictive Control Reinforcement Learning (RL)

Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning

no code implementations • 31 May 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

Algorithms derived from Tesseract decompose the Q-tensor across agents and utilise low-rank tensor approximations to model agent interactions relevant to the task.

Learning Theory Multi-agent Reinforcement Learning +3

Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions

no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg

Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.

Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers

no code implementations • NeurIPS 2021 • Nikita Dvornik, Isma Hadji, Konstantinos G. Derpanis, Animesh Garg, Allan D. Jepson

In our experiments, we show that Drop-DTW is a robust similarity measure for sequence retrieval and demonstrate its effectiveness as a training loss on diverse applications.

Dynamic Time Warping Representation Learning +1
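For context, Drop-DTW builds on classic dynamic time warping, which aligns two sequences by a minimum-cost monotone matching; Drop-DTW additionally allows paying a drop cost to skip outlier elements (not implemented here). A minimal sketch of the plain DTW baseline:

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic-time-warping distance between two sequences."""
    INF = float("inf")
    n, m = len(x), len(y)
    # D[i][j]: min alignment cost of x[:i] against y[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # x[i-1] matched again
                              D[i][j - 1],      # y[j-1] matched again
                              D[i - 1][j - 1])  # advance both sequences
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the repeated 2 aligns at no cost
```

Plain DTW must match every element of both sequences, which is why a single outlier frame can dominate the cost; Drop-DTW's per-element drop option is exactly what relaxes this constraint.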

Auditing AI models for Verified Deployment under Semantic Specifications

no code implementations • 25 Sep 2021 • Homanga Bharadhwaj, De-An Huang, Chaowei Xiao, Anima Anandkumar, Animesh Garg

We enable such unit tests through variations in a semantically-interpretable latent space of a generative model.

Face Recognition

Seeing Glass: Joint Point Cloud and Depth Completion for Transparent Objects

no code implementations • 30 Sep 2021 • Haoping Xu, Yi Ru Wang, Sagi Eppel, Alàn Aspuru-Guzik, Florian Shkurti, Animesh Garg

To address the shortcomings of existing transparent-object data collection schemes in the literature, we also propose an automated dataset creation workflow that consists of robot-controlled image collection and vision-based automatic annotation.

Depth Completion Transparent objects

Generalizing Successor Features to continuous domains for Multi-task Learning

no code implementations • 29 Sep 2021 • Melissa Mozifian, Dieter Fox, David Meger, Fabio Ramos, Animesh Garg

In this paper, we consider the problem of continuous control for various robot manipulation tasks with an explicit representation that promotes skill reuse while learning multiple tasks, related through the reward function.

Continuous Control Decision Making +3

Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

no code implementations • 27 Oct 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions.

Multi-agent Reinforcement Learning reinforcement-learning +1

Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings

no code implementations • 30 Oct 2021 • Matthew S. Zhang, Murat A. Erdogdu, Animesh Garg

Policy gradient methods have been frequently applied to problems in control and reinforcement learning with great success, yet existing convergence analysis still relies on non-intuitive, impractical and often opaque conditions.

Policy Gradient Methods reinforcement-learning +1

InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers

no code implementations • 25 Sep 2019 • Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.

Conditional Image Generation Time Series +1

Action Concept Grounding Network for Semantically-Consistent Video Generation

no code implementations • 28 Sep 2020 • Wei Yu, Wenxin Chen, Animesh Garg

Recent works in self-supervised video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, which sidesteps the problem of semantic learning.

Action Recognition object-detection +3

Accelerated Policy Learning with Parallel Differentiable Simulation

no code implementations • ICLR 2022 • Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, Miles Macklin

In this work we present a high-performance differentiable simulator and a new policy learning algorithm (SHAC) that can effectively leverage simulation gradients, even in the presence of non-smoothness.

Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors

no code implementations • CVPR 2022 • Yun-Chun Chen, Haoda Li, Dylan Turpin, Alec Jacobson, Animesh Garg

While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit.

Object Point Cloud Registration

Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions

no code implementations • 29 Jun 2022 • Yun-Chun Chen, Adithyavairavan Murali, Balakumar Sundaralingam, Wei Yang, Animesh Garg, Dieter Fox

The pipeline of current robotic pick-and-place methods typically consists of several stages: grasp pose detection, finding inverse kinematic solutions for the detected poses, planning a collision-free trajectory, and then executing the open-loop trajectory to the grasp pose with a low-level tracking controller.

Object

ProgPrompt: Generating Situated Robot Task Plans using Large Language Models

no code implementations • 22 Sep 2022 • Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg

To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information.

nerf2nerf: Pairwise Registration of Neural Radiance Fields

no code implementations • 3 Nov 2022 • Lily Goli, Daniel Rebain, Sara Sabour, Animesh Garg, Andrea Tagliasacchi

We introduce a technique for pairwise registration of neural fields that extends classical optimization-based local registration (i.e., ICP) to operate on Neural Radiance Fields (NeRF) -- neural 3D scene representations trained from collections of calibrated images.

NeurIPS 2022 Competition: Driving SMARTS

no code implementations • 14 Nov 2022 • Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba, Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam Paull, Weinan Zhang, Xinyu Wang, Xi Chen

The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods, trained on a combination of naturalistic AD data and open-source simulation platform SMARTS.

Autonomous Driving Reinforcement Learning (RL)

Offline Policy Optimization in RL with Variance Regularization

no code implementations • 29 Dec 2022 • Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control Offline RL +1

MVTrans: Multi-View Perception of Transparent Objects

no code implementations • 22 Feb 2023 • Yi Ru Wang, Yuchi Zhao, Haoping Xu, Saggi Eppel, Alan Aspuru-Guzik, Florian Shkurti, Animesh Garg

Transparent object perception is a crucial skill for applications such as robot manipulation in household and laboratory settings.

Depth Estimation Object +5

Self-Supervised Learning of Action Affordances as Interaction Modes

no code implementations • 27 May 2023 • Liquan Wang, Nikita Dvornik, Rafael Dubeau, Mayank Mittal, Animesh Garg

We show in experiments that such affordance learning predicts interactions that cover most interaction modes for the queried articulated object, and that the model can be fine-tuned to a goal-conditioned model.

Object Self-Supervised Learning

HandyPriors: Physically Consistent Perception of Hand-Object Interactions with Differentiable Priors

no code implementations • 28 Nov 2023 • Shutong Zhang, Yi-Ling Qiao, Guanglei Zhu, Eric Heiden, Dylan Turpin, Jingzhou Liu, Ming Lin, Miles Macklin, Animesh Garg

We demonstrate that HandyPriors attains comparable or superior results in the pose estimation task, and that the differentiable physics module can predict contact information for pose refinement.

Human-Object Interaction Detection Object +1

ORGANA: A Robotic Assistant for Automated Chemistry Experimentation and Characterization

no code implementations • 13 Jan 2024 • Kourosh Darvish, Marta Skreta, Yuchi Zhao, Naruki Yoshikawa, Sagnik Som, Miroslav Bogdanovic, Yang Cao, Han Hao, Haoping Xu, Alán Aspuru-Guzik, Animesh Garg, Florian Shkurti

Despite the many benefits brought by the integration of advanced and special-purpose lab equipment, many aspects of experimentation are still conducted manually by chemists, for example, polishing an electrode in electrochemistry experiments.

Scheduling

SlotDiffusion: Object-Centric Generative Modeling with Diffusion Models

no code implementations • NeurIPS 2023 • Ziyi Wu, Jingyu Hu, Wuyue Lu, Igor Gilitschenski, Animesh Garg

Finally, we demonstrate the scalability of SlotDiffusion to unconstrained real-world datasets such as PASCAL VOC and COCO, when integrated with self-supervised pre-trained image encoders.

Image Generation Object +5

AdaDemo: Data-Efficient Demonstration Expansion for Generalist Robotic Agent

no code implementations • 11 Apr 2024 • Tongzhou Mu, Yijie Guo, Jie Xu, Ankit Goyal, Hao Su, Dieter Fox, Animesh Garg

Encouraged by the remarkable achievements of language and vision foundation models, developing generalist robotic agents through imitation learning, using large demonstration datasets, has become a prominent area of interest in robot learning.

Imitation Learning
