no code implementations • 13 Oct 2022 • Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar, Ken Goldberg
In physical experiments, we find a 13.7% increase in success rate, a 1.6x increase in picks per hour, and a 6.3x decrease in grasp planning time compared to prior work on multi-object grasping.
no code implementations • 1 Jun 2022 • Wisdom C. Agboh, Jeffrey Ichnowski, Ken Goldberg, Mehmet R. Dogar
In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6% higher grasp success and is 59.9% faster, from 212 PPH to 340 PPH.
no code implementations • 6 Nov 2020 • Wissam Bejjani, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti
Solving this task requires reasoning over the likely locations of the target object.
1 code implementation • 28 Feb 2020 • Mohamed Hasan, Matthew Warburton, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti, He Wang, Faisal Mushtaq, Mark Mon-Williams, Anthony G. Cohn
From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles.
no code implementations • 17 Oct 2018 • Leo Pauly, Wisdom C. Agboh, David C. Hogg, Raul Fuentes
The distance between the action vectors from the observed third-person demonstration and trial robot executions is used as a reward for reinforcement learning of the demonstrated task.
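A minimal sketch of this kind of distance-based reward, assuming Euclidean distance over numpy action vectors; the paper's actual distance metric, action representation, and reward shaping may differ:

```python
import numpy as np

def imitation_reward(demo_action: np.ndarray, trial_action: np.ndarray) -> float:
    """Negative Euclidean distance between the action vector extracted
    from the third-person demonstration and the one from the robot's
    trial execution: closer actions yield higher reward (max 0.0)."""
    return -float(np.linalg.norm(demo_action - trial_action))

# Hypothetical 3-D action vectors for illustration.
demo = np.array([0.2, -0.1, 0.5])
trial = np.array([0.3, -0.1, 0.4])
reward = imitation_reward(demo, trial)
```

This reward can then drive a standard RL update, encouraging the policy's executed actions to match those inferred from the demonstration.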