Search Results for author: Atabak Dehban

Found 6 papers, 3 papers with code

3DSGrasp: 3D Shape-Completion for Robotic Grasp

no code implementations • 2 Jan 2023 • Seyed S. Mohammadi, Nuno F. Duarte, Dimitris Dimou, Yiming Wang, Matteo Taiana, Pietro Morerio, Atabak Dehban, Plinio Moreno, Alexandre Bernardino, Alessio Del Bue, Jose Santos-Victor

However, in practice, PCDs are often incomplete when objects are viewed from a few sparse viewpoints before the grasping action, leading to wrong or inaccurate grasp poses.

Robotic Grasping

Learning the Sequence of Packing Irregular Objects from Human Demonstrations

1 code implementation • 4 Oct 2022 • André Santos, Nuno Ferreira Duarte, Atabak Dehban, José Santos-Victor

The human demonstrations were collected using our proposed VR platform, BoxED, which is a box packaging environment for simulating real-world objects and scenarios for fast and streamlined data collection with the purpose of teaching robots.

Object

Action-conditioned Benchmarking of Robotic Video Prediction Models: a Comparative Study

1 code implementation • 7 Oct 2019 • Manuel Serra Nunes, Atabak Dehban, Plinio Moreno, José Santos-Victor

In contrast, we argue that if these systems are to be used to guide action, necessarily, the actions the robot performs should be encoded in the predicted frames.

Benchmarking • Video Prediction

Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

no code implementations • 9 Apr 2018 • Giovanni Saponaro, Pedro Vicente, Atabak Dehban, Lorenzo Jamone, Alexandre Bernardino, José Santos-Victor

One of the open challenges in designing robots that operate successfully in the unpredictable human environment is enabling them to predict what actions they can perform on objects and what the effects will be, i.e., the ability to perceive object affordances.

Decision Making
