no code implementations • 1 Jul 2024 • Jingyun Yang, Zi-ang Cao, Congyue Deng, Rika Antonova, Shuran Song, Jeannette Bohg
Building effective imitation learning methods that enable robots to learn from limited data and still generalize across diverse real-world environments is a long-standing problem in robot learning.
no code implementations • 13 May 2024 • Aaditya Prasad, Kevin Lin, Jimmy Wu, Linqi Zhou, Jeannette Bohg
Many robotic systems, such as mobile manipulators or quadrotors, cannot be equipped with high-end GPUs due to space, weight, and power constraints.
no code implementations • 23 Oct 2023 • Jingyun Yang, Max Sobol Mark, Brandon Vu, Archit Sharma, Jeannette Bohg, Chelsea Finn
We aim to enable this paradigm in robotic reinforcement learning, allowing a robot to learn a new task with little human effort by leveraging data and models from the Internet.
no code implementations • 4 Oct 2023 • Tara Sadjadpour, Rares Ambrus, Jeannette Bohg
Our main contributions include a novel fusion approach for combining camera and LiDAR sensory signals to learn affinities, and a first-of-its-kind multimodal sequential track confidence refinement technique that fuses 2D and 3D detections.
no code implementations • 29 Jun 2023 • Priya Sundaresan, Suneel Belkhale, Dorsa Sadigh, Jeannette Bohg
While natural language offers a convenient shared interface for humans and robots, enabling robots to interpret and follow language commands remains a longstanding challenge in manipulation.
no code implementations • CVPR 2023 • Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu
We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation with sight, sound, and touch.
1 code implementation • 9 May 2023 • Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser
For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios.
1 code implementation • CVPR 2023 • Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Rares Andrei Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar
We present CARTO, a novel approach for reconstructing multiple articulated objects from a single stereo RGB observation.
no code implementations • 27 Dec 2022 • Negin Heravi, Heather Culbertson, Allison M. Okamura, Jeannette Bohg
Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface.
no code implementations • 11 Nov 2022 • Kuan Fang, Toki Migimatsu, Ajay Mandlekar, Li Fei-Fei, Jeannette Bohg
ATR selects suitable tasks, each consisting of an initial environment state and a manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks.
no code implementations • 8 Nov 2022 • Tara Sadjadpour, Jie Li, Rares Ambrus, Jeannette Bohg
To address these issues in a unified framework, we propose to learn shape and spatio-temporal affinities between tracks and detections in consecutive frames.
no code implementations • 4 Nov 2022 • Mengxi Li, Rika Antonova, Dorsa Sadigh, Jeannette Bohg
We demonstrate the effectiveness of our method for designing new tools in several scenarios, such as winding ropes, flipping a box, and pushing peas onto a scoop in simulation.
1 code implementation • 21 Oct 2022 • Christopher Agia, Toki Migimatsu, Jiajun Wu, Jeannette Bohg
We further demonstrate how STAP can be used for task and motion planning by estimating the geometric feasibility of skill sequences provided by a task planner.
no code implementations • 22 Aug 2022 • JunYoung Gwak, Silvio Savarese, Jeannette Bohg
In this work, we present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves object detection and tracking.
no code implementations • 28 Jun 2022 • Rika Antonova, Jingyun Yang, Krishna Murthy Jatavallabhula, Jeannette Bohg
In this work, we study the challenges that differentiable simulation presents when a single gradient descent run cannot be expected to reach a global optimum, as is often the case in contact-rich scenarios.
no code implementations • 12 May 2022 • Negin Heravi, Ayzaan Wahid, Corey Lynch, Pete Florence, Travis Armstrong, Jonathan Tompson, Pierre Sermanet, Jeannette Bohg, Debidatta Dwibedi
Our self-supervised representations are learned by observing the agent freely interacting with different parts of the environment and are queried in two different settings: (i) policy learning and (ii) object location prediction.
no code implementations • 7 May 2022 • Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg
Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets.
no code implementations • 7 Apr 2022 • Priya Sundaresan, Rika Antonova, Jeannette Bohg
However, for highly deformable objects it is challenging to align the output of a simulator with the behavior of real objects.
1 code implementation • CVPR 2022 • Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu
We present ObjectFolder 2.0, a large-scale, multisensory dataset of common household objects in the form of implicit neural representations that significantly enhances ObjectFolder 1.0 in three aspects.
no code implementations • 9 Dec 2021 • Rika Antonova, Jingyun Yang, Priya Sundaresan, Dieter Fox, Fabio Ramos, Jeannette Bohg
Deformable object manipulation remains a challenging task in robotics research.
no code implementations • 28 Oct 2021 • Nicholas Roy, Ingmar Posner, Tim Barfoot, Philippe Beaudoin, Yoshua Bengio, Jeannette Bohg, Oliver Brock, Isabelle Depatie, Dieter Fox, Dan Koditschek, Tomas Lozano-Perez, Vikash Mansinghka, Christopher Pal, Blake Richards, Dorsa Sadigh, Stefan Schaal, Gaurav Sukhatme, Denis Therien, Marc Toussaint, Michiel Van de Panne
Machine learning has long since become a keystone technology, accelerating science and applications in a broad range of domains.
2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, aditi raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.
1 code implementation • 7 Jun 2021 • Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg, Debidatta Dwibedi
We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments -- shape, actions, end-effector dynamics, etc.
1 code implementation • 26 Mar 2021 • Yifan You, Lin Shao, Toki Migimatsu, Jeannette Bohg
In this paper, we propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.
1 code implementation • 28 Dec 2020 • Alina Kloss, Georg Martius, Jeannette Bohg
In many robotic applications, it is crucial to maintain a belief about the state of a system, which serves as input for planning and decision making and provides feedback during task execution.
1 code implementation • 26 Dec 2020 • Hsu-kuang Chiu, Jie Li, Rares Ambrus, Jeannette Bohg
Second, we propose to learn a metric that combines the Mahalanobis and feature distances when comparing a track and a new detection in data association.
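A hedged sketch of such a combined distance is below; the function names, the cosine feature distance, and the fixed mixing weight `alpha` are illustrative assumptions, whereas the paper learns the combination from data.

```python
import numpy as np

def mahalanobis_distance(track_state, detection, cov):
    """Geometric distance between a predicted track state and a new
    detection, scaled by the state covariance."""
    diff = detection - track_state
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def feature_distance(track_feat, det_feat):
    """Cosine distance between appearance feature embeddings."""
    cos = track_feat @ det_feat / (
        np.linalg.norm(track_feat) * np.linalg.norm(det_feat))
    return 1.0 - float(cos)

def combined_distance(track_state, detection, cov,
                      track_feat, det_feat, alpha=0.5):
    """Weighted mix of geometric and appearance terms; alpha is a
    hand-set illustrative weight, not a learned one."""
    return (alpha * mahalanobis_distance(track_state, detection, cov)
            + (1 - alpha) * feature_distance(track_feat, det_feat))

# Usage: score one track against one detection for data association.
track_state = np.array([0.0, 0.0, 0.0])   # predicted 3D position of a track
detection = np.array([0.5, 0.2, 0.0])     # position of a new detection
track_feat = np.array([0.9, 0.1])         # appearance embeddings
det_feat = np.array([0.8, 0.2])
print(combined_distance(track_state, detection, np.eye(3),
                        track_feat, det_feat))
```

In a full tracker, this score would fill the cost matrix of a matching step (e.g., greedy or Hungarian assignment) between all tracks and detections in a frame.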
no code implementations • 1 Dec 2020 • Michelle A. Lee, Matthew Tan, Yuke Zhu, Jeannette Bohg
Using sensor data from multiple modalities presents an opportunity to encode redundant and complementary features that can be useful when one modality is corrupted or noisy.
1 code implementation • 18 Sep 2020 • Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg
One dominant component of recent deep reinforcement learning algorithms is the target network, which mitigates divergence when learning the Q-function.
no code implementations • 22 Jul 2020 • Mengxi Li, Dylan P. Losey, Jeannette Bohg, Dorsa Sadigh
Existing approaches to teleoperation typically assume a one-size-fits-all approach, where the designers pre-define a mapping between human inputs and robot actions, and every user must adapt to this mapping over repeated interactions.
1 code implementation • 27 May 2020 • Shushman Choudhury, Jayesh K. Gupta, Mykel J. Kochenderfer, Dorsa Sadigh, Jeannette Bohg
We consider the problem of dynamically allocating tasks to multiple agents under time window constraints and task completion uncertainty.
3 code implementations • 16 Jan 2020 • Hsu-kuang Chiu, Antonio Prioletti, Jie Li, Jeannette Bohg
Our method estimates the object states by adopting a Kalman Filter.
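As a minimal illustration of Kalman-filter-based state estimation, the sketch below tracks a single object moving at constant velocity in 1D; the state layout, matrices, and noise magnitudes are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

# State: [position, velocity]; we observe position only (dt = 1).
F = np.array([[1.0, 1.0],   # position += velocity * dt
              [0.0, 1.0]])  # velocity stays constant
H = np.array([[1.0, 0.0]])  # measurement picks out position
Q = np.eye(2) * 0.01        # process noise covariance (assumed)
R = np.array([[0.1]])       # measurement noise covariance (assumed)

def predict(x, P):
    """Propagate the state mean and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a position measurement z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Track an object moving at speed 1.0 from position 0.
x = np.array([[0.0], [0.0]])           # initial guess: at rest
P = np.eye(2)
for t in range(1, 6):
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[float(t)]]))

print(float(x[0, 0]), float(x[1, 0]))  # position near 5, velocity near 1
```

In the multi-object setting, one such filter runs per track, and the predicted states are matched against new detections each frame.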
no code implementations • 14 Nov 2019 • Mengyuan Yan, Yilin Zhu, Ning Jin, Jeannette Bohg
Challenges in taking the state-space approach are the estimation of the high-dimensional state of a deformable object from raw images, where annotations are very expensive on real data, and finding a dynamics model that is accurate, generalizable, and efficient to compute.
no code implementations • 12 Nov 2019 • Toki Migimatsu, Jeannette Bohg
We address the problem of applying Task and Motion Planning (TAMP) in real world environments.
no code implementations • 8 Nov 2019 • Alina Kloss, Maria Bauza, Jiajun Wu, Joshua B. Tenenbaum, Alberto Rodriguez, Jeannette Bohg
Planning contact interactions is one of the core challenges of many robotic tasks.
no code implementations • 3 Nov 2019 • Lin Shao, Toki Migimatsu, Jeannette Bohg
To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment.
no code implementations • 25 Oct 2019 • Mia Kokic, Danica Kragic, Jeannette Bohg
We develop a model that takes an RGB image as input and outputs a hand pose and configuration as well as an object pose and shape.
1 code implementation • 24 Oct 2019 • Tingguang Li, Krishnan Srinivasan, Max Qing-Hu Meng, Wenzhen Yuan, Jeannette Bohg
Finally, we show how our approach generalizes to objects of other shapes.
1 code implementation • 24 Oct 2019 • Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg
The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.
2 code implementations • ICCV 2019 • Xingyu Liu, Mengyuan Yan, Jeannette Bohg
Understanding dynamic 3D environments is crucial for robotic agents and many other applications.
no code implementations • 16 Oct 2019 • Dylan P. Losey, Mengxi Li, Jeannette Bohg, Dorsa Sadigh
When teams of robots collaborate to complete a task, communication is often necessary.
no code implementations • 28 Sep 2019 • Negin Heravi, Wenzhen Yuan, Allison M. Okamura, Jeannette Bohg
Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input.
1 code implementation • 9 Sep 2019 • Fabio Ferreira, Lin Shao, Tamim Asfour, Jeannette Bohg
The first, a Graph Networks (GN) based approach, considers explicitly defined edge attributes. Not only does it consistently underperform an auto-encoder baseline that we modified to predict future states, but our results also indicate that different edge attributes can significantly influence the predictions.
1 code implementation • 28 Jul 2019 • Michelle A. Lee, Yuke Zhu, Peter Zachares, Matthew Tan, Krishnan Srinivasan, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg
Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback.
no code implementations • 20 Jun 2019 • Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, Animesh Garg
This paper studies the effect of different action spaces in deep RL and advocates for Variable Impedance Control in End-effector Space (VICES) as an advantageous action space for constrained and contact-rich tasks.
no code implementations • ICLR 2019 • Alina Kloss, Jeannette Bohg
Recursive Bayesian Filtering algorithms address the state estimation problem, but they require a model of the process dynamics and the sensory observations as well as noise estimates that quantify the accuracy of these models.
no code implementations • 8 Mar 2019 • Mia Kokic, Danica Kragic, Jeannette Bohg
The qualitative experiments show results of pose and shape estimation of objects held by a hand "in the wild".
2 code implementations • 24 Oct 2018 • Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg
Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback.
1 code implementation • 19 Sep 2018 • Hamza Merzic, Miroslav Bogdanovic, Daniel Kappler, Ludovic Righetti, Jeannette Bohg
While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.
no code implementations • 24 Jul 2018 • Lin Shao, Ye Tian, Jeannette Bohg
We show that our method generalizes well to real-world data, achieving visually better segmentation results.
1 code implementation • 14 Apr 2018 • Lin Shao, Parth Shah, Vikranth Dwaracherla, Jeannette Bohg
Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow.
no code implementations • ICLR 2018 • Wenbin Li, Jeannette Bohg, Mario Fritz
We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error.
1 code implementation • 11 Oct 2017 • Alina Kloss, Stefan Schaal, Jeannette Bohg
In this work, we investigate the advantages and limitations of neural network based learning approaches for predicting the effects of actions based on sensory input and show how analytical and learned models can be combined to leverage the best of both worlds.
no code implementations • 8 Sep 2016 • Matteo Bianchi, Jeannette Bohg, Yu Sun
This paper reports the activities and outcomes of the Workshop on Grasping and Manipulation Datasets, organized at the International Conference on Robotics and Automation (ICRA) 2016.
no code implementations • 6 May 2016 • Alonso Marco, Philipp Hennig, Jeannette Bohg, Stefan Schaal, Sebastian Trimpe
With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data.
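The evaluate-and-improve cycle described here can be sketched as follows; the random-search proposal rule and the synthetic quadratic cost are illustrative stand-ins (the actual framework uses a more sophisticated optimizer and evaluates the objective on real experimental rollouts).

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(gain):
    """Stand-in for the performance objective evaluated from
    experimental data: a synthetic quadratic cost with a little
    evaluation noise, minimized at gain = 3.0 (assumed)."""
    return (gain - 3.0) ** 2 + rng.normal(0.0, 0.01)

# Start from an initial controller gain and iteratively improve it.
best_gain = 1.0
best_cost = rollout_cost(best_gain)
for _ in range(200):
    candidate = best_gain + rng.normal(0.0, 0.5)  # propose a nearby gain
    cost = rollout_cost(candidate)                # "run the experiment"
    if cost < best_cost:                          # keep improvements only
        best_gain, best_cost = candidate, cost

print(best_gain)  # settles near the optimum gain 3.0
```

The point of the sketch is the loop structure, not the optimizer: each iteration spends one experiment, so a real system replaces the random proposal with a model-based choice of the next gains to try.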
no code implementations • 13 Apr 2016 • Jeannette Bohg, Karol Hausman, Bharath Sankaran, Oliver Brock, Danica Kragic, Stefan Schaal, Gaurav Sukhatme
Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.
1 code implementation • 19 Feb 2016 • Jan Issac, Manuel Wüthrich, Cristina Garcia Cifuentes, Jeannette Bohg, Sebastian Trimpe, Stefan Schaal
To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand.
no code implementations • 14 Sep 2015 • Manuel Wüthrich, Cristina Garcia Cifuentes, Sebastian Trimpe, Franziska Meier, Jeannette Bohg, Jan Issac, Stefan Schaal
The contribution of this paper is to show that any Gaussian filter can be made compatible with fat-tailed sensor models by applying one simple change: Instead of filtering with the physical measurement, we propose to filter with a pseudo measurement obtained by applying a feature function to the physical measurement.
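One simple robustifying feature function, shown below as an illustrative choice rather than the one derived in the paper, clips large innovations toward the prediction before running an otherwise unchanged Gaussian update:

```python
import numpy as np

def pseudo_measurement(z, z_pred, threshold=2.0):
    """Map the physical measurement z to a pseudo measurement by
    clipping large innovations toward the prediction z_pred.
    This Huber-style clipping is one illustrative feature function."""
    innovation = np.clip(z - z_pred, -threshold, threshold)
    return z_pred + innovation

def gaussian_update(mean, var, z, obs_var):
    """Standard scalar Gaussian (Kalman-style) measurement update."""
    k = var / (var + obs_var)          # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

mean, var = 0.0, 1.0
z_outlier = 50.0                       # a fat-tailed outlier measurement
z_robust = pseudo_measurement(z_outlier, mean)

m_naive, _ = gaussian_update(mean, var, z_outlier, 1.0)
m_robust, _ = gaussian_update(mean, var, z_robust, 1.0)
print(m_naive, m_robust)  # 25.0 vs. 1.0: the outlier no longer drags the mean
```

Because the change happens entirely in what is fed to the filter, the same trick applies to any Gaussian filter without modifying its update equations.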
no code implementations • 10 Sep 2013 • Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragic
In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation.