Search Results for author: Oleg Sushkov

Found 6 papers, 1 paper with code

Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation

no code implementations ICLR 2022 Todor Davchev, Oleg Sushkov, Jean-Baptiste Regli, Stefan Schaal, Yusuf Aytar, Markus Wulfmeier, Jon Scholz

In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations.

Continuous Control, Reinforcement Learning (RL)
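
The hindsight relabelling mechanism described in this entry can be illustrated with a minimal sketch. The snippet below shows a generic HER-style "future" relabelling loop with a hypothetical bias towards states seen in successful demonstrations; the function names, the demo_bias parameter, and the goal-selection rule are illustrative assumptions, not the paper's actual algorithm.

    import random
    import numpy as np

    def distance(a, b):
        return np.linalg.norm(np.asarray(a) - np.asarray(b))

    def relabel_episode(episode, demo_states, reward_fn, k=4, demo_bias=0.5):
        # episode: list of (state, action, reward, next_state, goal) tuples
        # demo_states: achieved states from successful demonstrations (assumed
        # here to bias goal selection; the paper's mechanism may differ)
        relabelled = []
        for t, (state, action, _r, next_state, _goal) in enumerate(episode):
            # candidate hindsight goals: states achieved later in the episode
            future_states = [ns for (_, _, _, ns, _) in episode[t:]]
            for _ in range(k):
                new_goal = random.choice(future_states)
                if demo_states and random.random() < demo_bias:
                    # snap the sampled goal to the nearest demonstrated state
                    new_goal = min(demo_states, key=lambda d: distance(d, new_goal))
                relabelled.append(
                    (state, action, reward_fn(next_state, new_goal), next_state, new_goal)
                )
        return relabelled

The relabelled transitions would then be added to the replay buffer alongside the original ones, so the agent receives reward signal along the distribution of goals implied by the demonstrations.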

Robust Multi-Modal Policies for Industrial Assembly via Reinforcement Learning and Demonstrations: A Large-Scale Study

no code implementations 21 Mar 2021 Jianlan Luo, Oleg Sushkov, Rugile Pevceviciute, Wenzhao Lian, Chang Su, Mel Vecerik, Ning Ye, Stefan Schaal, Jon Scholz

In this paper we define criteria for industry-oriented DRL, and perform a thorough comparison according to these criteria of one family of learning approaches, DRL from demonstration, against a professional industrial integrator on the recently established NIST assembly benchmark.

S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency

no code implementations 30 Sep 2020 Mel Vecerik, Jean-Baptiste Regli, Oleg Sushkov, David Barker, Rugile Pevceviciute, Thomas Rothörl, Christopher Schuster, Raia Hadsell, Lourdes Agapito, Jonathan Scholz

In this work we advocate semantic 3D keypoints as a visual representation, and present a semi-supervised training objective that can allow instance or category-level keypoints to be trained to 1-5 millimeter-accuracy with minimal supervision.

Image Reconstruction, Representation Learning
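
The multi-view consistency idea in this entry can be sketched as follows: 2D keypoints predicted in several calibrated views are triangulated to a single 3D point, and the reprojection error measures how consistent the per-view predictions are. The code below is a generic DLT-based sketch that assumes known camera projection matrices; it is not the paper's exact training objective.

    import numpy as np

    def triangulate(points_2d, proj_mats):
        # Linear (DLT) triangulation of one keypoint from several views.
        # points_2d: (V, 2) pixel coordinates; proj_mats: (V, 3, 4) camera matrices.
        rows = []
        for (u, v), P in zip(points_2d, proj_mats):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.stack(rows))
        X = vt[-1]
        return X[:3] / X[3]

    def multiview_consistency_loss(points_2d, proj_mats):
        # Mean reprojection error of the triangulated 3D keypoint across views.
        X = np.append(triangulate(points_2d, proj_mats), 1.0)
        err = 0.0
        for (u, v), P in zip(points_2d, proj_mats):
            proj = P @ X
            err += np.hypot(proj[0] / proj[2] - u, proj[1] / proj[2] - v)
        return err / len(points_2d)

Minimizing such a loss over the keypoint detector's outputs encourages predictions that agree with a single 3D location, which is the self-supervised signal the abstract refers to.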

A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning

no code implementations 2 Oct 2018 Mel Vecerik, Oleg Sushkov, David Barker, Thomas Rothörl, Todd Hester, Jon Scholz

Insertion is a challenging haptic and visual control problem with significant practical value for manufacturing.

Robotics
