Search Results for author: Edward Johns

Found 41 papers, 8 papers with code

Keypoint Action Tokens Enable In-Context Imitation Learning in Robotics

no code implementations • 28 Mar 2024 • Norman Di Palo, Edward Johns

We show that off-the-shelf text-based Transformers, with no additional training, can perform few-shot in-context visual imitation learning, mapping visual observations to action sequences that emulate the demonstrator's behaviour.

Imitation Learning
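The idea above — serialising keypoints and actions as plain text so an off-the-shelf language model can imitate demonstrations in-context — can be sketched roughly as follows. This is a minimal illustration only; the function names and the exact token format are assumptions, not the paper's implementation.

```python
def keypoints_to_tokens(keypoints):
    """Serialise 2D keypoints as a compact text string that a
    text-based Transformer can consume as in-context tokens."""
    return " ".join(f"{int(round(x))},{int(round(y))}" for x, y in keypoints)

def build_prompt(demos, query_obs):
    """Few-shot prompt: each demonstration pairs an observation token
    string with the action token string it should map to; the query
    observation is left open for the model to complete."""
    lines = [f"obs: {keypoints_to_tokens(o)} -> act: {keypoints_to_tokens(a)}"
             for o, a in demos]
    lines.append(f"obs: {keypoints_to_tokens(query_obs)} -> act:")
    return "\n".join(lines)

demos = [([(10, 20), (30, 40)], [(12, 22), (33, 41)])]
prompt = build_prompt(demos, [(11, 21), (31, 39)])
print(prompt)
```

The prompt would then be passed, without any fine-tuning, to a pretrained language model whose completion is decoded back into an action sequence.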

DINOBot: Robot Manipulation via Retrieval and Alignment with Vision Foundation Models

no code implementations • 20 Feb 2024 • Norman Di Palo, Edward Johns

We propose DINOBot, a novel imitation learning framework for robot manipulation, which leverages the image-level and pixel-level capabilities of features extracted from Vision Transformers trained with DINO.

Imitation Learning • Object +2
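The image-level half of this retrieval-and-alignment idea — finding the stored demonstration whose features best match the live view — can be sketched with a cosine-similarity nearest-neighbour lookup. This is an illustrative stand-in, not DINOBot's actual pipeline; real feature vectors would come from a DINO-pretrained Vision Transformer.

```python
import numpy as np

def retrieve(query_feat, memory_feats):
    """Return the index of the most similar stored demonstration
    feature under cosine similarity (image-level retrieval step)."""
    q = query_feat / np.linalg.norm(query_feat)
    M = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)
    return int(np.argmax(M @ q))

# toy memory of three demonstration feature vectors
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(retrieve(np.array([0.9, 0.1]), memory))  # closest to the first entry
```

After retrieval, the pixel-level features would be used to align the gripper with the new object before replaying the demonstrated interaction.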

Dream2Real: Zero-Shot 3D Object Rearrangement with Vision-Language Models

no code implementations • 7 Dec 2023 • Ivan Kapelyukh, Yifei Ren, Ignacio Alzugaray, Edward Johns

We introduce Dream2Real, a robotics framework which integrates vision-language models (VLMs) trained on 2D data into a 3D object rearrangement pipeline.

SceneScore: Learning a Cost Function for Object Arrangement

no code implementations • 14 Nov 2023 • Ivan Kapelyukh, Edward Johns

Arranging objects correctly is a key capability for robots which unlocks a wide range of useful tasks.

Graph Neural Network • Object

One-Shot Imitation Learning: A Pose Estimation Perspective

no code implementations • 18 Oct 2023 • Pietro Vitiello, Kamil Dreczkowski, Edward Johns

In this paper, we study imitation learning under the challenging setting of: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge.

Camera Calibration • Imitation Learning +2

Few-Shot In-Context Imitation Learning via Implicit Graph Alignment

no code implementations • 18 Oct 2023 • Vitalis Vosylius, Edward Johns

Consequently, we show that this conditioning allows for in-context learning, where a robot can perform a task on a set of new objects immediately after the demonstrations, without any prior knowledge about the object class or any further training.

Few-Shot Learning • Imitation Learning +1

Language Models as Zero-Shot Trajectory Generators

no code implementations • 17 Oct 2023 • Teyun Kwon, Norman Di Palo, Edward Johns

Our conclusions raise the assumed limit of LLMs for robotics, and we reveal for the first time that LLMs do indeed possess an understanding of low-level robot control sufficient for a range of common tasks, and that they can additionally detect failures and then re-plan trajectories accordingly.

object-detection • Object Detection

Where To Start? Transferring Simple Skills to Complex Environments

no code implementations • 12 Dec 2022 • Vitalis Vosylius, Edward Johns

Robot learning provides a number of ways to teach robots simple skills, such as grasping.

Real-time Mapping of Physical Scene Properties with an Autonomous Robot Experimenter

no code implementations • 31 Oct 2022 • Iain Haughton, Edgar Sucar, Andre Mouton, Edward Johns, Andrew J. Davison

Neural fields can be trained from scratch to represent the shape and appearance of 3D scenes efficiently.

DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics

no code implementations • 5 Oct 2022 • Ivan Kapelyukh, Vitalis Vosylius, Edward Johns

We introduce the first work to explore web-scale diffusion models for robotics.

Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning

no code implementations • 6 Apr 2022 • Eugene Valassakis, Georgios Papagiannis, Norman Di Palo, Edward Johns

We present DOME, a novel method for one-shot imitation learning, where a task can be learned from just a single demonstration and then be deployed immediately, without any further data collection or training.

Imitation Learning • Object +1

Auto-Lambda: Disentangling Dynamic Task Relationships

1 code implementation • 7 Feb 2022 • Shikun Liu, Stephen James, Andrew J. Davison, Edward Johns

Unlike previous methods where task relationships are assumed to be fixed, Auto-Lambda is a gradient-based meta learning framework which explores continuous, dynamic task relationships via task-specific weightings, and can optimise any choice of combination of tasks through the formulation of a meta-loss; where the validation loss automatically influences task weightings throughout training.

Ranked #3 on Robot Manipulation on RLBench (Succ. Rate (10 tasks, 100 demos/task) metric)

Auxiliary Learning • Meta-Learning +2
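The bilevel idea described above — task-specific weightings updated by how the validation loss responds to them — can be sketched on a toy problem. This is a deliberately simplified stand-in, not Auto-Lambda itself: a 1-D linear model and finite-difference meta-gradients replace the paper's full gradient-based meta-learning machinery.

```python
import numpy as np

def task_losses(w, data):
    # one squared-error loss per task; each task is an (x, y) pair
    return np.array([(w * x - y) ** 2 for x, y in data])

def inner_step(w, lambdas, train, lr=0.05):
    # one SGD step on the lambda-weighted training loss
    grads = np.array([2 * (w * x - y) * x for x, y in train])
    return w - lr * float(lambdas @ grads)

def val_loss(w, val):
    return float(task_losses(w, val).sum())

def meta_step(w, lambdas, train, val, meta_lr=0.1, eps=1e-4):
    # finite-difference estimate of d(val_loss)/d(lambda_i):
    # the validation loss automatically steers the task weightings
    g = np.zeros_like(lambdas)
    for i in range(len(lambdas)):
        lp = lambdas.copy(); lp[i] += eps
        lm = lambdas.copy(); lm[i] -= eps
        g[i] = (val_loss(inner_step(w, lp, train), val)
                - val_loss(inner_step(w, lm, train), val)) / (2 * eps)
    return lambdas - meta_lr * g

train = [(1.0, 2.0), (1.0, -1.0)]   # two conflicting tasks
val = [(1.0, 2.0)]                  # validation favours the first task
w, lambdas = 0.0, np.array([0.5, 0.5])
for _ in range(20):
    lambdas = meta_step(w, lambdas, train, val)
    w = inner_step(w, lambdas, train)
print(lambdas)  # weight on the helpful task should grow
```

Because validation only rewards the first task, its weighting rises while the conflicting task's weighting falls, which is the dynamic task-relationship behaviour the paper exploits.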

Back to Reality for Imitation Learning

no code implementations • 25 Nov 2021 • Edward Johns

Imitation learning, and robot learning in general, emerged due to breakthroughs in machine learning, rather than breakthroughs in robotics.

BIG-bench Machine Learning • Imitation Learning

Learning Multi-Stage Tasks with One Demonstration via Self-Replay

no code implementations • 14 Nov 2021 • Norman Di Palo, Edward Johns

In this work, we introduce a novel method to learn everyday-like multi-stage tasks from a single human demonstration, without requiring any prior object knowledge.

Imitation Learning • Object

My House, My Rules: Learning Tidying Preferences with Graph Neural Networks

no code implementations • 4 Nov 2021 • Ivan Kapelyukh, Edward Johns

Robots that arrange household objects should do so according to the user's preferences, which are inherently subjective and difficult to model.

Graph Neural Network • Word Embeddings

Hybrid ICP

no code implementations • 15 Sep 2021 • Kamil Dreczkowski, Edward Johns

In this paper, we propose Hybrid ICP, a novel and flexible ICP variant which dynamically optimises both the data association method and error metric based on the live image of an object and the current ICP estimate.

Object Pose Estimation
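The ICP loop that Hybrid ICP builds on can be sketched in a minimal translation-only form. This illustration keeps only the core alternation of nearest-neighbour data association and least-squares alignment; the paper's dynamic selection of association method and error metric is omitted.

```python
import numpy as np

def icp_translation(src, dst, iters=20):
    """Estimate the translation aligning src points to dst points by
    iterating nearest-neighbour matching and a least-squares update
    (point-to-point error metric)."""
    t = np.zeros(2)
    for _ in range(iters):
        moved = src + t
        # data association: nearest dst point for each moved src point
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[np.argmin(d, axis=1)]
        # closed-form translation update for the current matches
        t = t + (matches - moved).mean(axis=0)
    return t

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([0.3, -0.2])
t_est = icp_translation(src, dst)
print(t_est)  # ≈ [0.3, -0.2]
```

A full 6-DoF variant would also estimate rotation (e.g. via SVD of the matched point covariance), which is where the choice of error metric starts to matter.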

Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across Wide Task Spaces

no code implementations • 24 May 2021 • Eugene Valassakis, Norman Di Palo, Edward Johns

In this paper, we study the problem of zero-shot sim-to-real when the task requires both highly precise control with sub-millimetre error tolerance, and wide task space generalisation.

Motion Planning • Pose Estimation

Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration

no code implementations • 13 May 2021 • Edward Johns

We introduce a simple new method for visual imitation learning, which allows a novel robot manipulation task to be learned from a single human demonstration, without requiring any prior knowledge of the object being interacted with.

Imitation Learning • Object +1

DROID: Minimizing the Reality Gap using Single-Shot Human Demonstration

no code implementations • 22 Feb 2021 • Ya-Yen Tsai, Hui Xu, Zihan Ding, Chong Zhang, Edward Johns, Bidan Huang

One of the main challenges of transferring the policy learned in a simulated environment to real world, is the discrepancy between the dynamics of the two environments.

Robotics

PERIL: Probabilistic Embeddings for hybrid Meta-Reinforcement and Imitation Learning

no code implementations • 1 Jan 2021 • Alvaro Prat, Edward Johns

Imitation learning is a natural way for a human to describe a task to an agent, and it can be combined with reinforcement learning to enable the agent to solve that task through exploration.

Imitation Learning • Meta Reinforcement Learning +2

SAFARI: Safe and Active Robot Imitation Learning with Imagination

no code implementations • 18 Nov 2020 • Norman Di Palo, Edward Johns

We empirically demonstrate how this method increases the performance on a set of manipulation tasks with respect to passive Imitation Learning, by gathering more informative demonstrations and by minimizing state-distribution shift at test time.

Active Learning • Behavioural cloning

Benchmarking Domain Randomisation for Visual Sim-to-Real Transfer

no code implementations • 13 Nov 2020 • Raghad Alghonaim, Edward Johns

Domain randomisation is a very popular method for visual sim-to-real transfer in robotics, due to its simplicity and ability to achieve transfer without any real-world images at all.

Benchmarking • Pose Estimation

Crossing The Gap: A Deep Dive into Zero-Shot Sim-to-Real Transfer for Dynamics

no code implementations • 15 Aug 2020 • Eugene Valassakis, Zihan Ding, Edward Johns

Zero-shot sim-to-real transfer of tasks with complex dynamics is a highly challenging and unsolved problem.

Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning

no code implementations • 7 Aug 2020 • Guillermo Garcia-Hernando, Edward Johns, Tae-Kyun Kim

Dexterous manipulation of objects in virtual environments with our bare hands, by using only a depth sensor and a state-of-the-art 3D hand pose estimator (HPE), is challenging.

3D Hand Pose Estimation • Imitation Learning +2

Shape Adaptor: A Learnable Resizing Module

1 code implementation • ECCV 2020 • Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi, Edward Johns

We present a novel resizing module for neural networks: shape adaptor, a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided convolution.

Image Classification • Neural Architecture Search +1

Constrained-Space Optimization and Reinforcement Learning for Complex Tasks

no code implementations • 1 Apr 2020 • Ya-Yen Tsai, Bo Xiao, Edward Johns, Guang-Zhong Yang

The effectiveness of the proposed method is verified with a robotic suturing task, demonstrating that the learned policy outperformed the experts' demonstrations in terms of the smoothness of the joint motion and end-effector trajectories, as well as the overall task completion time.

reinforcement-learning • Reinforcement Learning (RL)

Self-Supervised Generalisation with Meta Auxiliary Learning

4 code implementations • NeurIPS 2019 • Shikun Liu, Andrew J. Davison, Edward Johns

The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient.

Auxiliary Learning • Meta-Learning +1

End-to-End Multi-Task Learning with Attention

4 code implementations • CVPR 2019 • Shikun Liu, Edward Johns, Andrew J. Davison

Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task.

Multi-Task Learning
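The MTAN design described above — a shared feature pool gated by a per-task soft-attention mask — can be illustrated with a toy elementwise gating. This is only a sketch of the mechanism: the real attention modules are learned convolutional layers applied at multiple depths of the shared network, and the weights below are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
shared_features = rng.standard_normal((8,))          # global feature pool
task_attn_logits = {t: rng.standard_normal((8,))     # per-task attention
                    for t in ("seg", "depth")}

# each task applies its own soft mask (values in (0, 1)) to the same
# shared features, yielding a task-specific view of the feature pool
task_features = {t: sigmoid(a) * shared_features
                 for t, a in task_attn_logits.items()}

for t, f in task_features.items():
    print(t, f.shape)
```

The appeal of the design is that task-specific capacity grows only linearly with the number of tasks, while most parameters stay shared.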

Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task

1 code implementation • 7 Jul 2017 • Stephen James, Andrew J. Davison, Edward Johns

End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches.

Robotic Grasping • Robot Manipulation

Self-Supervised Siamese Learning on Stereo Image Pairs for Depth Estimation in Robotic Surgery

no code implementations • 17 May 2017 • Menglong Ye, Edward Johns, Ankur Handa, Lin Zhang, Philip Pratt, Guang-Zhong Yang

Robotic surgery has become a powerful tool for performing minimally invasive procedures, providing advantages in dexterity, precision, and 3D vision, over traditional surgery.

Depth Estimation • Depth Prediction

3D Simulation for Robot Arm Control with Deep Q-Learning

no code implementations • 13 Sep 2016 • Stephen James, Edward Johns

Building upon the recent success of deep Q-networks, we present an approach which uses 3D simulations to train a 7-DOF robotic arm in a control task without any prior knowledge.

Q-Learning

Deep Learning a Grasp Function for Grasping under Gripper Pose Uncertainty

no code implementations • 7 Aug 2016 • Edward Johns, Stefan Leutenegger, Andrew J. Davison

With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function.
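The smoothing idea above — convolving a grasp-quality function with the gripper's pose-uncertainty distribution and maximising the expected score — can be sketched in one dimension. The numbers and the Gaussian uncertainty model are illustrative assumptions, not the paper's learned grasp function.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Discretised Gaussian standing in for the pose-uncertainty
    distribution over gripper positions."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

# grasp quality over a 1-D grid of gripper poses: one sharp peak
# (good only if executed exactly) and one broad plateau
grasp_quality = np.array([0.0, 1.0, 0.0, 0.6, 0.55, 0.6, 0.0])

# expected quality = grasp function smoothed by pose uncertainty
expected = np.convolve(grasp_quality, gaussian_kernel(1, 1.0), mode="same")

best_naive = int(np.argmax(grasp_quality))   # ignores pose uncertainty
best_robust = int(np.argmax(expected))       # robust to pose error
print(best_naive, best_robust)
```

Under uncertainty, the broad plateau beats the sharp peak: a slightly misplaced gripper still lands on a good grasp, which is exactly the robustness the smoothing buys.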

Robust Image Descriptors for Real-Time Inter-Examination Retargeting in Gastrointestinal Endoscopy

no code implementations • 18 May 2016 • Menglong Ye, Edward Johns, Benjamin Walter, Alexander Meining, Guang-Zhong Yang

Despite successes with optical biopsy for in vivo and in situ tissue characterisation, biopsy retargeting for serial examinations is challenging because tissue may change in appearance between examinations.

PN-Net: Conjoined Triple Deep Network for Learning Local Image Descriptors

1 code implementation • 19 Jan 2016 • Vassileios Balntas, Edward Johns, Lilian Tang, Krystian Mikolajczyk

We address this problem and propose a CNN based descriptor with improved matching performance, significantly reduced training and execution time, as well as low dimensionality.

Becoming the Expert - Interactive Multi-Class Machine Teaching

no code implementations • CVPR 2015 • Edward Johns, Oisin Mac Aodha, Gabriel J. Brostow

However, image-importance is individual-specific, i.e. a teaching image is important to a student if it changes their overall ability to discriminate between classes.
