We propose simPLE (simulation to Pick Localize and PLacE) as a solution to precise pick-and-place.
We propose a system for rearranging objects in a scene to achieve a desired object-scene placing relationship, such as a book inserted in an open slot of a bookshelf.
We propose a system that leverages visual and tactile perception to unfold cloth by grasping and sliding along its edges.
This formalism is implemented in three steps: assigning a consistent local coordinate frame to the task-relevant object parts, determining the location and orientation of this coordinate frame on unseen object instances, and executing an action that brings these frames into the desired alignment.
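The final step of this formalism, bringing the object's task-relevant frame into the desired alignment, reduces to composing rigid transforms. A minimal sketch (illustrative only, not the paper's implementation; the function name and 4x4 pose convention are assumptions):

```python
import numpy as np

def align_frames(T_object, T_goal):
    """Rigid transform that maps the object's task frame onto the goal frame.

    T_object, T_goal: 4x4 homogeneous poses of the local coordinate frame,
    both expressed in the world frame. Applying the returned transform to
    the object brings the two frames into alignment.
    """
    return T_goal @ np.linalg.inv(T_object)

# Toy check: object frame at x=1, goal frame at x=3 -> translate by +2 in x.
T_obj = np.eye(4); T_obj[0, 3] = 1.0
T_goal = np.eye(4); T_goal[0, 3] = 3.0
T_action = align_frames(T_obj, T_goal)
```

Any rotation in the poses is handled by the same composition, since the matrices are full rigid transforms.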
In particular, we demonstrate that a NeRF representation of a scene can be used to train dense object descriptors.
Our approach generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors.
Specifying tasks with videos is a powerful technique for acquiring novel and general robot skills.
The robot must choose a sequence of discrete actions, or strategy, such as whether to pick up an object, and the continuous parameters of each of those actions, such as how to grasp the object.
We present the design, implementation, and evaluation of RF-Grasp, a robotic system that can grasp fully-occluded objects in unknown and unstructured environments.
We then show that for complex real-world scenes from the LLFF dataset, iNeRF can improve NeRF by estimating the camera poses of novel images and using these images as additional training data for NeRF.
In this paper, we present an approach to tactile pose estimation from the first touch for known objects.
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation, i.e. without prior object models.
This paper develops closed-loop tactile controllers for dexterous manipulation with dual-arm robotic palms.
Such models, however, are approximate, which limits their applicability.
This work studies the problem of shape reconstruction and object localization using a vision-based tactile sensor, GelSlim.
We explore the use of graph neural networks (GNNs) to model spatial processes in which there is no a priori graphical structure.
Physics engines play an important role in robot planning and control; however, many real-world control problems involve complex contact dynamics that cannot be characterized analytically.
In this work, we propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (images of arbitrary objects in a bin) through trial and error.
Modular meta-learning is a new framework that generalizes to unseen datasets by combining a small set of neural modules in different ways.
The output is a dense slip field which we use to detect when small areas of the contact patch start to slip (incipient slip).
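Detecting incipient slip from such a dense field amounts to thresholding per-pixel displacement magnitudes and watching what fraction of the contact patch has started to move. A toy numpy illustration (the function name, units, and threshold are assumptions, not the sensor's actual pipeline):

```python
import numpy as np

def incipient_slip_mask(slip_field, threshold=0.5):
    """Flag contact-patch pixels whose slip displacement exceeds a threshold.

    slip_field: (H, W, 2) per-pixel displacement of the contact surface
    relative to the sensor membrane. Incipient slip is signalled when only
    a small fraction of the patch is slipping while the rest still sticks.
    """
    magnitude = np.linalg.norm(slip_field, axis=-1)
    mask = magnitude > threshold
    slipping_fraction = mask.mean()
    return mask, slipping_fraction

# Synthetic example: one corner of a 10x10 patch slips by 1 unit in x.
field = np.zeros((10, 10, 2))
field[:3, :3, 0] = 1.0
mask, frac = incipient_slip_mask(field)
```

A controller could use the slipping fraction as an early warning to increase grip force before gross slip occurs.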
Caging is a promising tool which allows a robot to manipulate an object without directly reasoning about the contact dynamics involved.
An efficient, generalizable physical simulator with universal uncertainty estimates has wide applications in robot state estimation, planning, and control.
Decades of research in control theory have shown that simple controllers, when provided with timely feedback, can control complex systems.
Skilled robotic manipulation benefits from complex synergies between non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing can help rearrange cluttered objects to make space for arms and fingers; likewise, grasping can help displace objects to make pushing movements more precise and collision-free.
3 code implementations • 3 Oct 2017 • Andy Zeng, Shuran Song, Kuan-Ting Yu, Elliott Donlon, Francois R. Hogan, Maria Bauza, Daolin Ma, Orion Taylor, Melody Liu, Eudald Romo, Nima Fazeli, Ferran Alet, Nikhil Chavan Dafle, Rachel Holladay, Isabella Morona, Prem Qu Nair, Druck Green, Ian Taylor, Weber Liu, Thomas Funkhouser, Alberto Rodriguez
Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data.
On the other hand, it achieves effective sampling and accurate probabilistic propagation by relying on the GP form of the system, and the sum-of-Gaussian form of the belief.
This paper presents a data-driven approach to model planar pushing interaction to predict both the most likely outcome of a push and its expected variability.
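One generic way a data-driven model can return both a most likely push outcome and its expected variability is to report the empirical mean and spread of similar logged pushes. A k-nearest-neighbor sketch (purely illustrative; not necessarily the paper's model, and all names here are assumptions):

```python
import numpy as np

def knn_push_model(params, outcomes, query, k=10):
    """Predict push outcome mean and spread from logged interaction data.

    params:   (N, D) push parameters (e.g. contact point, push direction).
    outcomes: (N, M) measured object displacements for those pushes.
    Returns the empirical mean and standard deviation over the k logged
    pushes nearest to the query -- the most likely outcome and its
    variability.
    """
    dists = np.linalg.norm(params - query, axis=1)
    idx = np.argsort(dists)[:k]
    return outcomes[idx].mean(axis=0), outcomes[idx].std(axis=0)

# Toy data: outcomes are a deterministic function of the first two params.
rng = np.random.default_rng(0)
params = rng.normal(size=(200, 3))
outcomes = params[:, :2] * 2.0
mean, std = knn_push_model(params, outcomes, params[0])
```

Richer models (e.g. Gaussian processes) can produce the same mean-plus-variability output with smoother interpolation between logged pushes.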
The approach was part of the MIT-Princeton Team system that took 3rd and 4th place in the stowing and picking tasks, respectively, at APC 2016.
This paper presents an overview of the inaugural Amazon Picking Challenge, along with a summary of a survey conducted among the 26 participating teams.