no code implementations • 27 Nov 2024 • Neel Jawale, Byron Boots, Balakumar Sundaralingam, Mohak Bhardwaj
We investigate the problem of teaching a robot manipulator to perform dynamic non-prehensile object transport, also known as the "robot waiter" task, from a limited set of real-world demonstrations.
no code implementations • 25 Nov 2024 • Yanwei Wang, Lirui Wang, Yilun Du, Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D'Arpino, Dieter Fox, Julie Shah
Generative policies trained with human demonstrations can autonomously accomplish multimodal, long-horizon tasks.
no code implementations • 30 Sep 2023 • Jonathan Tremblay, Bowen Wen, Valts Blukis, Balakumar Sundaralingam, Stephen Tyree, Stan Birchfield
We introduce Diff-DOPE, a 6-DoF pose refiner that takes as input an image, a 3D textured model of an object, and an initial pose of the object.
2 code implementations • 25 Oct 2022 • Ankur Handa, Arthur Allshire, Viktor Makoviychuk, Aleksei Petrenko, Ritvik Singh, Jingzhou Liu, Denys Makoviichuk, Karl Van Wyk, Alexander Zhurkevich, Balakumar Sundaralingam, Yashraj Narang, Jean-Francois Lafleche, Dieter Fox, Gavriel State
Our policies are trained to adapt to a wide range of conditions in simulation.
no code implementations • 21 Oct 2022 • Zhenggang Tang, Balakumar Sundaralingam, Jonathan Tremblay, Bowen Wen, Ye Yuan, Stephen Tyree, Charles Loop, Alexander Schwing, Stan Birchfield
We present a system for collision-free control of a robot manipulator that uses only RGB views of the world.
no code implementations • 29 Jun 2022 • Yun-Chun Chen, Adithyavairavan Murali, Balakumar Sundaralingam, Wei Yang, Animesh Garg, Dieter Fox
The pipeline of current robotic pick-and-place methods typically consists of several stages: grasp pose detection, finding inverse kinematic solutions for the detected poses, planning a collision-free trajectory, and then executing the open-loop trajectory to the grasp pose with a low-level tracking controller.
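The staged pipeline described above can be sketched as a chain of function calls; every function here is a hypothetical stub standing in for a real perception, planning, or control module, not an API from the paper.

```python
# Hedged sketch of the multi-stage pick-and-place pipeline; all stage
# functions are illustrative stubs with made-up return values.

def detect_grasp_pose(rgb_image):
    # Stage 1: grasp pose detection (stubbed 6-DoF pose).
    return {"position": (0.4, 0.0, 0.2), "orientation": (0.0, 0.0, 0.0, 1.0)}

def solve_ik(pose):
    # Stage 2: inverse kinematics for the detected pose (stubbed joints, rad).
    return [0.1, -0.5, 0.3, 0.0, 0.2, 0.0]

def plan_trajectory(joint_goal):
    # Stage 3: collision-free trajectory planning (single-waypoint stub).
    return [joint_goal]

def execute_open_loop(trajectory):
    # Stage 4: open-loop execution with a low-level tracking controller.
    return "done"

def pick(rgb_image):
    pose = detect_grasp_pose(rgb_image)
    joints = solve_ik(pose)
    trajectory = plan_trajectory(joints)
    return execute_open_loop(trajectory)

result = pick(rgb_image=None)
```

Because each stage is computed once and then executed open-loop, a failure in any early stage propagates uncorrected, which is the weakness closed-loop approaches target.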
no code implementations • 19 May 2022 • Yu-Wei Chao, Chris Paxton, Yu Xiang, Wei Yang, Balakumar Sundaralingam, Tao Chen, Adithyavairavan Murali, Maya Cakmak, Dieter Fox
We analyze the performance of a set of baselines and show a correlation with a real-world evaluation.
no code implementations • 11 Apr 2022 • Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, Dieter Fox
In this paper, we explore natural language as an expressive and flexible tool for robot correction.
no code implementations • 31 Mar 2022 • Wei Yang, Balakumar Sundaralingam, Chris Paxton, Iretiayo Akinola, Yu-Wei Chao, Maya Cakmak, Dieter Fox
However, how to responsively generate smooth motions to take an object from a human is still an open question.
no code implementations • 9 Nov 2021 • Andreea Bobu, Chris Paxton, Wei Yang, Balakumar Sundaralingam, Yu-Wei Chao, Maya Cakmak, Dieter Fox
Second, we treat this low-dimensional concept as an automatic labeler to synthesize a large-scale high-dimensional data set with the simulator.
1 code implementation • 30 Mar 2020 • Balakumar Sundaralingam, Tucker Hermans
We show that tactile fingertips enable in-hand dynamics estimation of low mass objects.
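The core idea behind this kind of estimation can be illustrated with a minimal static-equilibrium sketch: while the object is held still, the summed vertical fingertip contact forces must balance gravity, so mass follows directly. The force values and helper below are made up for illustration and are not the paper's method.

```python
# Hedged sketch: static in-hand mass estimation from tactile forces.
# Assumes the object is held motionless, so net fingertip force = weight.
G = 9.81  # gravitational acceleration, m/s^2

def estimate_mass(fingertip_forces_z):
    """Estimate object mass (kg) from per-fingertip vertical forces (N)."""
    return sum(fingertip_forces_z) / G

# Three tactile fingertips, each reporting a small vertical force (N):
mass = estimate_mass([0.15, 0.12, 0.02])  # ~0.03 kg, a low-mass object
```

Low object masses are exactly where this matters: the forces involved are small, so high-sensitivity tactile fingertips are needed to resolve them at all.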
no code implementations • 25 Jan 2020 • Qingkai Lu, Mark Van der Merwe, Balakumar Sundaralingam, Tucker Hermans
We can then formulate grasp planning as inferring the grasp configuration which maximizes the probability of grasp success.
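The inference formulation above can be sketched with a toy differentiable success model: gradient ascent on the log-probability of grasp success over the grasp configuration. The quadratic log-probability below is a made-up stand-in for the learned classifier, not the paper's model.

```python
# Hedged sketch: grasp planning as maximizing p(success | config).
# A toy quadratic log-probability stands in for a learned classifier;
# the peak location `TARGET` is arbitrary.
TARGET = (0.3, 0.1, 0.5)  # hypothetical best grasp configuration

def log_p_success(config):
    # Toy stand-in: log-probability peaks when config matches TARGET.
    return -sum((c - t) ** 2 for c, t in zip(config, TARGET))

def grad_log_p(config):
    # Analytic gradient of the toy log-probability.
    return [-2.0 * (c - t) for c, t in zip(config, TARGET)]

def plan_grasp(init_config, steps=200, lr=0.1):
    """Gradient ascent on log p(success | config)."""
    config = list(init_config)
    for _ in range(steps):
        g = grad_log_p(config)
        config = [c + lr * gi for c, gi in zip(config, g)]
    return config

best = plan_grasp([0.0, 0.0, 0.0])
```

With a neural classifier in place of the toy model, the same loop runs over backpropagated gradients, which is what makes the "planning as inference" view practical.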
no code implementations • 9 Jan 2020 • Silvia Cruciani, Balakumar Sundaralingam, Kaiyu Hang, Vikash Kumar, Tucker Hermans, Danica Kragic
The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems.
no code implementations • 2 Oct 2019 • Mark Van der Merwe, Qingkai Lu, Balakumar Sundaralingam, Martin Matak, Tucker Hermans
We leverage the structure of the reconstruction network to learn a grasp success classifier which serves as the objective function for a continuous grasp optimization.
8 code implementations • 27 Sep 2018 • Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield
Using synthetic data generated in this manner, we introduce a one-shot deep neural network that is able to perform competitively against a state-of-the-art network trained on a combination of real and synthetic data.