Search Results for author: Jeannette Bohg

Found 56 papers, 21 papers with code

Data-Driven Grasp Synthesis - A Survey

no code implementations · 10 Sep 2013 · Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragic

In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation.

Robotics

Robust Gaussian Filtering using a Pseudo Measurement

no code implementations · 14 Sep 2015 · Manuel Wüthrich, Cristina Garcia Cifuentes, Sebastian Trimpe, Franziska Meier, Jeannette Bohg, Jan Issac, Stefan Schaal

The contribution of this paper is to show that any Gaussian filter can be made compatible with fat-tailed sensor models by applying one simple change: Instead of filtering with the physical measurement, we propose to filter with a pseudo measurement obtained by applying a feature function to the physical measurement.
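
The pseudo-measurement idea can be sketched in a few lines. Below is a minimal linear Kalman-style measurement update in Python, where the feature function simply saturates the innovation at a few standard deviations; the paper's actual feature function differs, so treat the saturation choice and the function name as illustrative assumptions.

```python
import numpy as np

def robust_kalman_update(mu, P, y, C, R, clip=3.0):
    """One Gaussian-filter measurement update, made robust to fat-tailed
    outliers by filtering with a pseudo measurement: the physical
    measurement enters only through a feature function (here, a clipped
    innovation), so a single huge outlier has bounded influence."""
    y_pred = C @ mu                      # predicted measurement
    S = C @ P @ C.T + R                  # innovation covariance
    innov = y - y_pred
    std = np.sqrt(np.diag(S))
    # Feature function: saturate the innovation at `clip` standard deviations
    innov_pseudo = np.clip(innov, -clip * std, clip * std)
    K = P @ C.T @ np.linalg.inv(S)       # standard Kalman gain
    mu_new = mu + K @ innov_pseudo
    P_new = (np.eye(len(mu)) - K @ C) @ P
    return mu_new, P_new
```

With an ordinary Kalman update, a measurement of 200 would drag the state estimate by 100; here its effect is capped at the same level as a measurement a few standard deviations out.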

Depth-Based Object Tracking Using a Robust Gaussian Filter

1 code implementation · 19 Feb 2016 · Jan Issac, Manuel Wüthrich, Cristina Garcia Cifuentes, Jeannette Bohg, Sebastian Trimpe, Stefan Schaal

To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand.

Computational Efficiency, Object

Interactive Perception: Leveraging Action in Perception and Perception in Action

no code implementations · 13 Apr 2016 · Jeannette Bohg, Karol Hausman, Bharath Sankaran, Oliver Brock, Danica Kragic, Stefan Schaal, Gaurav Sukhatme

Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.

Robotics

Automatic LQR Tuning Based on Gaussian Process Global Optimization

no code implementations · 6 May 2016 · Alonso Marco, Philipp Hennig, Jeannette Bohg, Stefan Schaal, Sebastian Trimpe

With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data.
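
The outer loop of such a tuner can be sketched with a toy 1-D example: a Gaussian-process surrogate is fit to noisy-free evaluations of a cost, and a lower-confidence-bound acquisition proposes the next gain to try. The paper tunes multi-dimensional LQR gains on real hardware with an Entropy Search acquisition; the GP-with-LCB loop, the function names, and the quadratic surrogate cost below are all simplifying assumptions.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between row-vector sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(cost, bounds, n_init=5, n_iter=15, seed=0):
    """Minimize `cost` over a 1-D gain interval with GP-based BO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([cost(x[0]) for x in X])
    cand = np.linspace(lo, hi, 200)[:, None]
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, cand)
        # Lower confidence bound: exploit low mean, explore high variance
        lcb = mu - 2.0 * np.sqrt(var)
        x_next = cand[np.argmin(lcb)]
        X = np.vstack([X, x_next])
        y = np.append(y, cost(x_next[0]))
    return X[np.argmin(y), 0], y.min()
```

In the paper, each `cost` evaluation corresponds to running the controller on the physical system and measuring the pre-defined performance objective, which is why sample efficiency of the optimizer matters.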

Bayesian Optimization

Latest Datasets and Technologies Presented in the Workshop on Grasping and Manipulation Datasets

no code implementations · 8 Sep 2016 · Matteo Bianchi, Jeannette Bohg, Yu Sun

This paper reports the activities and outcomes of the Workshop on Grasping and Manipulation Datasets, organized at the 2016 International Conference on Robotics and Automation (ICRA).

Robotic Grasping

Combining Learned and Analytical Models for Predicting Action Effects from Sensory Data

1 code implementation · 11 Oct 2017 · Alina Kloss, Stefan Schaal, Jeannette Bohg

In this work, we investigate the advantages and limitations of neural network based learning approaches for predicting the effects of actions based on sensory input and show how analytical and learned models can be combined to leverage the best of both worlds.
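
A common way to combine the two model classes is to let the learned component predict only the residual error of the analytical model. The sketch below illustrates that structure with a hypothetical one-step pushing model and a linear least-squares fit standing in for the neural network used in the paper; all names and the slip factor are assumptions for illustration.

```python
import numpy as np

def analytical_model(state, action):
    """Hypothetical analytical pushing model: assumes the object moves
    exactly with the commanded action (ignores slip)."""
    return state + action

class ResidualDynamics:
    """Analytical model plus a learned correction: the learned part only
    has to predict the analytical model's error, which is typically an
    easier target than the full dynamics."""
    def __init__(self):
        self.w = 0.0  # residual modeled as w * action

    def fit(self, states, actions, next_states):
        # Fit the residual of the physics model by least squares
        residuals = next_states - analytical_model(states, actions)
        self.w = float(actions @ residuals / (actions @ actions))

    def predict(self, state, action):
        return analytical_model(state, action) + self.w * action
```

If the true system slips so that only 80% of the commanded motion is realized, the residual learner recovers the missing correction from data while the analytical model supplies the bulk of the prediction.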

Open-Ended Question Answering

Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

no code implementations · ICLR 2018 · Wenbin Li, Jeannette Bohg, Mario Fritz

We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error.

Reinforcement Learning (RL)

Motion-based Object Segmentation based on Dense RGB-D Scene Flow

1 code implementation · 14 Apr 2018 · Lin Shao, Parth Shah, Vikranth Dwaracherla, Jeannette Bohg

Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow.

Motion Segmentation, Object

ClusterNet: 3D Instance Segmentation in RGB-D Images

no code implementations · 24 Jul 2018 · Lin Shao, Ye Tian, Jeannette Bohg

We show that our method generalizes well on real-world data achieving visually better segmentation results.

3D Instance Segmentation, Clustering

Leveraging Contact Forces for Learning to Grasp

1 code implementation · 19 Sep 2018 · Hamza Merzic, Miroslav Bogdanovic, Daniel Kappler, Ludovic Righetti, Jeannette Bohg

While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

Object

Learning to Estimate Pose and Shape of Hand-Held Objects from RGB Images

no code implementations · 8 Mar 2019 · Mia Kokic, Danica Kragic, Jeannette Bohg

The qualitative experiments show results of pose and shape estimation of objects held by a hand "in the wild".

Image-to-Image Translation, Object

On Learning Heteroscedastic Noise Models within Differentiable Bayes Filters

no code implementations · ICLR 2019 · Alina Kloss, Jeannette Bohg

Recursive Bayesian Filtering algorithms address the state estimation problem, but they require a model of the process dynamics and the sensory observations as well as noise estimates that quantify the accuracy of these models.

Decision Making

Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks

no code implementations · 20 Jun 2019 · Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, Animesh Garg

This paper studies the effect of different action spaces in deep RL and advocates for Variable Impedance Control in End-effector Space (VICES) as an advantageous action space for constrained and contact-rich tasks.

Reinforcement Learning (RL)

Learning Visual Dynamics Models of Rigid Objects using Relational Inductive Biases

1 code implementation · 9 Sep 2019 · Fabio Ferreira, Lin Shao, Tamim Asfour, Jeannette Bohg

The first approach, based on Graph Networks (GN), considers explicitly defined edge attributes. Not only does it consistently underperform an auto-encoder baseline that we modified to predict future states, but our results also indicate that different edge attributes can significantly influence the predictions.

Inductive Bias

Learning an Action-Conditional Model for Haptic Texture Generation

no code implementations · 28 Sep 2019 · Negin Heravi, Wenzhen Yuan, Allison M. Okamura, Jeannette Bohg

Therefore, it is challenging to model the mapping from material and user interactions to haptic feedback in a way that generalizes over many variations of the user's input.

Texture Synthesis

Learning from My Partner's Actions: Roles in Decentralized Robot Teams

no code implementations · 16 Oct 2019 · Dylan P. Losey, Mengxi Li, Jeannette Bohg, Dorsa Sadigh

When teams of robots collaborate to complete a task, communication is often necessary.

UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands

1 code implementation · 24 Oct 2019 · Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg

The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.

Object

Learning Task-Oriented Grasping from Human Activity Datasets

no code implementations · 25 Oct 2019 · Mia Kokic, Danica Kragic, Jeannette Bohg

We develop a model that takes as input an RGB image and outputs a hand pose and configuration as well as an object pose and shape.

Object

Learning to Scaffold the Development of Robotic Manipulation Skills

no code implementations · 3 Nov 2019 · Lin Shao, Toki Migimatsu, Jeannette Bohg

To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment.

Object-Centric Task and Motion Planning in Dynamic Environments

no code implementations · 12 Nov 2019 · Toki Migimatsu, Jeannette Bohg

We address the problem of applying Task and Motion Planning (TAMP) in real world environments.

Motion Planning, Object

Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects

no code implementations · 14 Nov 2019 · Mengyuan Yan, Yilin Zhu, Ning Jin, Jeannette Bohg

Challenges in taking the state-space approach are estimating the high-dimensional state of a deformable object from raw images, where annotations are very expensive on real data, and finding a dynamics model that is accurate, generalizable, and efficient to compute.

Robot Manipulation, Self-Supervised Learning

Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints

1 code implementation · 27 May 2020 · Shushman Choudhury, Jayesh K. Gupta, Mykel J. Kochenderfer, Dorsa Sadigh, Jeannette Bohg

We consider the problem of dynamically allocating tasks to multiple agents under time window constraints and task completion uncertainty.

Decision Making, Decision Making Under Uncertainty

Learning User-Preferred Mappings for Intuitive Robot Control

no code implementations · 22 Jul 2020 · Mengxi Li, Dylan P. Losey, Jeannette Bohg, Dorsa Sadigh

Existing approaches to teleoperation typically assume a one-size-fits-all approach, where the designers pre-define a mapping between human inputs and robot actions, and every user must adapt to this mapping over repeated interactions.

Robot Manipulation

GRAC: Self-Guided and Self-Regularized Actor-Critic

1 code implementation · 18 Sep 2020 · Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg

One dominant component of recent deep reinforcement learning algorithms is the target network which mitigates the divergence when learning the Q function.

Decision Making, OpenAI Gym

Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors

no code implementations · 1 Dec 2020 · Michelle A. Lee, Matthew Tan, Yuke Zhu, Jeannette Bohg

Using sensor data from multiple modalities presents an opportunity to encode redundant and complementary features that can be useful when one modality is corrupted or noisy.


Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving

1 code implementation · 26 Dec 2020 · Hsu-kuang Chiu, Jie Li, Rares Ambrus, Jeannette Bohg

Second, we propose to learn a metric that combines the Mahalanobis and feature distances when comparing a track and a new detection in data association.
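
The combined-distance idea can be illustrated with a small data-association sketch: the cost between a track and a detection mixes a Mahalanobis distance (geometry) with a cosine feature distance (appearance). Note the fixed mixing weight `alpha` and the greedy matcher are simplifications — the paper learns the combination, and production trackers typically use the Hungarian algorithm; all names here are hypothetical.

```python
import numpy as np

def mahalanobis(track_mu, track_cov, det):
    """Mahalanobis distance between a track's predicted state and a detection."""
    d = det - track_mu
    return float(np.sqrt(d @ np.linalg.inv(track_cov) @ d))

def associate(tracks, dets, track_feats, det_feats, alpha=0.5, gate=5.0):
    """Greedy track-detection association with a combined cost:
    alpha * Mahalanobis distance + (1 - alpha) * cosine feature distance.
    Pairs whose geometric distance exceeds `gate` are never matched."""
    n, m = len(tracks), len(dets)
    C = np.full((n, m), np.inf)
    for i, (mu, cov) in enumerate(tracks):
        for j, z in enumerate(dets):
            geo = mahalanobis(mu, cov, z)
            f1, f2 = track_feats[i], det_feats[j]
            app = 1.0 - f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
            if geo < gate:
                C[i, j] = alpha * geo + (1 - alpha) * app
    matches, used_t, used_d = [], set(), set()
    # Greedily take the cheapest remaining (track, detection) pair
    for i, j in sorted(np.ndindex(n, m), key=lambda ij: C[ij]):
        if np.isfinite(C[i, j]) and i not in used_t and j not in used_d:
            matches.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return matches
```

Appearance features help exactly where geometry is ambiguous, e.g. when two tracks pass close to each other and Mahalanobis distances alone cannot disambiguate the detections.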

Autonomous Driving, Management

How to Train Your Differentiable Filter

1 code implementation · 28 Dec 2020 · Alina Kloss, Georg Martius, Jeannette Bohg

In many robotic applications, it is crucial to maintain a belief about the state of a system, which serves as input for planning and decision making and provides feedback during task execution.

Decision Making

OmniHang: Learning to Hang Arbitrary Objects using Contact Point Correspondences and Neural Collision Estimation

1 code implementation · 26 Mar 2021 · Yifan You, Lin Shao, Toki Migimatsu, Jeannette Bohg

In this paper, we propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.

Object

XIRL: Cross-embodiment Inverse Reinforcement Learning

1 code implementation · 7 Jun 2021 · Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg, Debidatta Dwibedi

We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments -- shape, actions, end-effector dynamics, etc.

Reinforcement Learning (RL)

On the Opportunities and Risks of Foundation Models

2 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, aditi raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer

1 code implementation · CVPR 2022 · Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu

We present ObjectFolder 2.0, a large-scale, multisensory dataset of common household objects in the form of implicit neural representations that significantly enhances ObjectFolder 1.0 in three aspects.

Object

DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation and Rendering of Deformable Objects

no code implementations · 7 Apr 2022 · Priya Sundaresan, Rika Antonova, Jeannette Bohg

However, for highly deformable objects it is challenging to align the output of a simulator with the behavior of real objects.

Category-Independent Articulated Object Tracking with Factor Graphs

no code implementations · 7 May 2022 · Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg

Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets.

Object, Object Tracking

Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations

no code implementations · 12 May 2022 · Negin Heravi, Ayzaan Wahid, Corey Lynch, Pete Florence, Travis Armstrong, Jonathan Tompson, Pierre Sermanet, Jeannette Bohg, Debidatta Dwibedi

Our self-supervised representations are learned by observing the agent freely interacting with different parts of the environment and are queried in two different settings: (i) policy learning and (ii) object location prediction.

Object, Object Localization

Rethinking Optimization with Differentiable Simulation from a Global Perspective

no code implementations · 28 Jun 2022 · Rika Antonova, Jingyun Yang, Krishna Murthy Jatavallabhula, Jeannette Bohg

In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios.

Bayesian Optimization

Minkowski Tracker: A Sparse Spatio-Temporal R-CNN for Joint Object Detection and Tracking

no code implementations · 22 Aug 2022 · JunYoung Gwak, Silvio Savarese, Jeannette Bohg

In this work, we present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves object detection and tracking.

3D Object Detection, Multi-Object Tracking

STAP: Sequencing Task-Agnostic Policies

no code implementations · 21 Oct 2022 · Christopher Agia, Toki Migimatsu, Jiajun Wu, Jeannette Bohg

We further demonstrate how STAP can be used for task and motion planning by estimating the geometric feasibility of skill sequences provided by a task planner.

Motion Planning, Task and Motion Planning

Learning Tool Morphology for Contact-Rich Manipulation Tasks with Differentiable Simulation

no code implementations · 4 Nov 2022 · Mengxi Li, Rika Antonova, Dorsa Sadigh, Jeannette Bohg

We demonstrate the effectiveness of our method for designing new tools in several scenarios, such as winding ropes, flipping a box and pushing peas onto a scoop in simulation.

Continual Learning

ShaSTA: Modeling Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking

no code implementations · 8 Nov 2022 · Tara Sadjadpour, Jie Li, Rares Ambrus, Jeannette Bohg

To address these issues in a unified framework, we propose to learn shape and spatio-temporal affinities between tracks and detections in consecutive frames.

3D Multi-Object Tracking, Autonomous Vehicles

Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks

no code implementations · 11 Nov 2022 · Kuan Fang, Toki Migimatsu, Ajay Mandlekar, Li Fei-Fei, Jeannette Bohg

ATR selects suitable tasks, which consist of an initial environment state and manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks.

Development and Evaluation of a Learning-based Model for Real-time Haptic Texture Rendering

no code implementations · 27 Dec 2022 · Negin Heravi, Heather Culbertson, Allison M. Okamura, Jeannette Bohg

Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface.

TidyBot: Personalized Robot Assistance with Large Language Models

1 code implementation · 9 May 2023 · Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser

For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios.

The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects

no code implementations · CVPR 2023 · Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu

We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation with sight, sound, and touch.

Benchmarking, Object

KITE: Keypoint-Conditioned Policies for Semantic Manipulation

no code implementations · 29 Jun 2023 · Priya Sundaresan, Suneel Belkhale, Dorsa Sadigh, Jeannette Bohg

While natural language offers a convenient shared interface for humans and robots, enabling robots to interpret and follow language commands remains a longstanding challenge in manipulation.

Instruction Following, Object

ShaSTA-Fuse: Camera-LiDAR Sensor Fusion to Model Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking

no code implementations · 4 Oct 2023 · Tara Sadjadpour, Rares Ambrus, Jeannette Bohg

Our main contributions include a novel fusion approach for combining camera and LiDAR sensory signals to learn affinities, and a first-of-its-kind multimodal sequential track confidence refinement technique that fuses 2D and 3D detections.

3D Multi-Object Tracking, Navigate

Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning

no code implementations · 23 Oct 2023 · Jingyun Yang, Max Sobol Mark, Brandon Vu, Archit Sharma, Jeannette Bohg, Chelsea Finn

We aim to enable this paradigm in robotic reinforcement learning, allowing a robot to learn a new task with little human effort by leveraging data and models from the Internet.

Reinforcement Learning (RL), Robot Manipulation
