Search Results for author: Wisdom C. Agboh

Found 5 papers, 1 paper with code

Learning to Efficiently Plan Robust Frictional Multi-Object Grasps

no code implementations13 Oct 2022 Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar, Ken Goldberg

In physical experiments, we find a 13.7% increase in success rate, a 1.6x increase in picks per hour, and a 6.3x decrease in grasp planning time compared to prior work on multi-object grasping.

Friction, Object

Multi-Object Grasping in the Plane

no code implementations1 Jun 2022 Wisdom C. Agboh, Jeffrey Ichnowski, Ken Goldberg, Mehmet R. Dogar

In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6% higher grasp success and is 59.9% faster, from 212 PPH to 340 PPH.

Object

Human-like Planning for Reaching in Cluttered Environments

1 code implementation28 Feb 2020 Mohamed Hasan, Matthew Warburton, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti, He Wang, Faisal Mushtaq, Mark Mon-Williams, Anthony G. Cohn

From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles.

Decision Making
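
The abstract snippet above only hints at what a "qualitative representation of the task space" looks like. As a rough, hypothetical illustration (not the paper's actual representation), the sketch below abstracts a scene with any number of obstacles into a fixed-size qualitative description: whether the straight hand-to-target path is blocked, and which side of that path is less cluttered. All names, predicates, and thresholds are assumptions for illustration.

```python
# Hypothetical sketch of a qualitative task-space abstraction, inspired by the
# idea described in the abstract. All names, predicates, and thresholds are
# illustrative assumptions, not the paper's actual representation.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class Obstacle:
    position: np.ndarray  # 2D position on the table plane
    radius: float


def qualitative_state(hand: np.ndarray,
                      target: np.ndarray,
                      obstacles: List[Obstacle],
                      clearance: float = 0.05) -> Tuple[bool, str]:
    """Abstract a cluttered scene into a fixed-size qualitative description.

    Returns whether the straight hand-to-target path is blocked and which
    side of that path is less cluttered, regardless of how many obstacles
    are present.
    """
    direction = target - hand
    length = np.linalg.norm(direction)
    direction = direction / (length + 1e-9)
    normal = np.array([-direction[1], direction[0]])  # left-pointing normal

    blocked = False
    left_clutter, right_clutter = 0.0, 0.0
    for obs in obstacles:
        rel = obs.position - hand
        # Closest point on the hand-to-target segment to this obstacle.
        along = np.clip(rel @ direction, 0.0, length)
        closest = hand + along * direction
        if np.linalg.norm(obs.position - closest) < obs.radius + clearance:
            blocked = True
        # Accumulate clutter on each side, weighting nearby obstacles more.
        side = rel @ normal
        weight = 1.0 / (abs(side) + 1e-3)
        if side > 0:
            left_clutter += weight
        else:
            right_clutter += weight

    freer_side = "left" if left_clutter < right_clutter else "right"
    return blocked, freer_side
```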

O2A: One-shot Observational learning with Action vectors

no code implementations17 Oct 2018 Leo Pauly, Wisdom C. Agboh, David C. Hogg, Raul Fuentes

The distance between the action vectors from the observed third-person demonstration and trial robot executions is used as a reward for reinforcement learning of the demonstrated task.

One-Shot Learning
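
The reward described in the abstract above is simple to express: the closer the robot trial's action vector is to the demonstration's action vector, the higher the reward. The sketch below is a minimal, hypothetical rendering of that idea; the vector dimensionality and the way the action vectors are extracted are assumptions, not the paper's model.

```python
# Minimal sketch of the reward idea described above: the (negative) distance
# between an action vector from the observed demonstration and one from the
# robot's trial execution serves as the RL reward. The 8-dimensional toy
# vectors are an assumption for illustration only.
import numpy as np


def action_vector_reward(demo_vector: np.ndarray,
                         trial_vector: np.ndarray) -> float:
    """Reward is higher the closer the trial's action vector is to the demo's."""
    return -float(np.linalg.norm(demo_vector - trial_vector))


# Example usage with toy 8-dimensional action vectors.
demo = np.random.rand(8)
trial = np.random.rand(8)
print(action_vector_reward(demo, trial))
```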
