Search Results for author: Corey Lynch

Found 16 papers, 5 papers with code

Interactive Language: Talking to Robots in Real Time

no code implementations • 12 Oct 2022 • Corey Lynch, Ayzaan Wahid, Jonathan Tompson, Tianli Ding, James Betker, Robert Baruch, Travis Armstrong, Pete Florence

We present a framework for building interactive, real-time, natural language-instructable robots in the real world, and we open source related assets (dataset, environment, benchmark, and policies).

Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations

no code implementations • 12 May 2022 • Negin Heravi, Ayzaan Wahid, Corey Lynch, Pete Florence, Travis Armstrong, Jonathan Tompson, Pierre Sermanet, Jeannette Bohg, Debidatta Dwibedi

Our self-supervised representations are learned by observing the agent freely interacting with different parts of the environment and are queried in two different settings: (i) policy learning and (ii) object location prediction.

Object Localization • Representation Learning +1

BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning

no code implementations • 4 Feb 2022 • Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn

In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning.

Imitation Learning

Implicit Behavioral Cloning

4 code implementations • 1 Sep 2021 • Pete Florence, Corey Lynch, Andy Zeng, Oscar Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, Jonathan Tompson

We find that across a wide range of robot policy learning scenarios, treating supervised policy learning with an implicit model generally performs better, on average, than commonly used explicit models.
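The implicit-vs-explicit distinction can be made concrete: an explicit policy regresses an action directly, a = f(o), while an implicit policy selects the action that minimizes a learned energy, a* = argmin over a of E(o, a). Below is a minimal sketch of derivative-free inference for such a policy; the function name, the uniform sampling scheme, and the toy energy are illustrative assumptions, not the paper's actual implementation (which also explores gradient-based and autoregressive variants):

```python
import numpy as np

def implicit_policy_act(energy_fn, obs, action_low, action_high,
                        n_samples=1024, rng=None):
    """Inference for an implicit policy: sample candidate actions uniformly
    within the action bounds and return the one with the lowest energy
    E(obs, a). A simple derivative-free optimizer, for illustration only."""
    rng = rng or np.random.default_rng(0)
    # Candidate actions drawn uniformly over the action box.
    candidates = rng.uniform(action_low, action_high,
                             size=(n_samples, len(action_low)))
    # Score every candidate with the learned energy function.
    energies = np.array([energy_fn(obs, a) for a in candidates])
    # The policy's output is the energy minimizer.
    return candidates[np.argmin(energies)]
```

With a well-trained energy function, sharper multimodal action distributions can be represented this way than with a single regressed output, which is one intuition for the result quoted above.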


Broadly-Exploring, Local-Policy Trees for Long-Horizon Task Planning

no code implementations • 13 Oct 2020 • Brian Ichter, Pierre Sermanet, Corey Lynch

This task space can be quite general and abstract; its only requirements are that it be sampleable and that it cover the space of useful tasks well.

Motion Planning

Learning to Play by Imitating Humans

no code implementations • 11 Jun 2020 • Rostam Dinyari, Pierre Sermanet, Corey Lynch

Acquiring multiple skills has commonly involved collecting a large number of expert demonstrations per task or engineering custom reward functions.

Language Conditioned Imitation Learning over Unstructured Data

no code implementations • 15 May 2020 • Corey Lynch, Pierre Sermanet

Prior work in imitation learning typically requires each task be specified with a task id or goal image -- something that is often impractical in open-world environments.

Continuous Control • Imitation Learning +2

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

1 code implementation • 25 Oct 2019 • Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, Karol Hausman

We present relay policy learning, a method for imitation and reinforcement learning that can solve multi-stage, long-horizon robotic tasks.

Imitation Learning • reinforcement-learning +1

Online Object Representations with Contrastive Learning

no code implementations • 10 Jun 2019 • Sören Pirk, Mohi Khansari, Yunfei Bai, Corey Lynch, Pierre Sermanet

We propose a self-supervised approach for learning representations of objects from monocular videos and demonstrate it is particularly useful in situated settings such as robotics.

Contrastive Learning

Wasserstein Dependency Measure for Representation Learning

no code implementations • NeurIPS 2019 • Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron van den Oord, Sergey Levine, Pierre Sermanet

Mutual information maximization has emerged as a powerful learning objective for unsupervised representation learning obtaining state-of-the-art performance in applications such as object recognition, speech recognition, and reinforcement learning.

Object Recognition • reinforcement-learning +5
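The contrast suggested by the title can be stated in one line: ordinary mutual information is the KL divergence between the joint distribution and the product of marginals, while a Wasserstein dependency measure replaces KL with a Wasserstein distance. This is a compact restatement in my own notation, not copied from the paper:

```latex
I(X;Y) = D_{\mathrm{KL}}\big(p_{XY} \,\|\, p_X\, p_Y\big),
\qquad
I_{\mathcal{W}}(X;Y) = \mathcal{W}\big(p_{XY},\; p_X\, p_Y\big)
```

Because the Wasserstein distance is sensitive to the geometry of the underlying space rather than only to density ratios, estimators of the right-hand quantity can behave differently from standard mutual-information lower bounds.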

Learning Latent Plans from Play

1 code implementation • 5 Mar 2019 • Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet

Learning from play (LfP) offers three main advantages: 1) It is cheap.


Learning Actionable Representations from Visual Observations

no code implementations • 2 Aug 2018 • Debidatta Dwibedi, Jonathan Tompson, Corey Lynch, Pierre Sermanet

In this work we explore a new approach for robots to teach themselves about the world simply by observing it.

Continuous Control

Time-Contrastive Networks: Self-Supervised Learning from Video

6 code implementations • 23 Apr 2017 • Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine

While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human.

Metric Learning • reinforcement-learning +3
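The core of time-contrastive learning is a triplet objective over video frames: the anchor and positive are embeddings of the same moment captured from different viewpoints, while the negative is a temporally distant frame from the same video. A minimal NumPy sketch in the spirit of that objective (the function name and margin value are illustrative assumptions, and a real pipeline would apply this to embeddings from a trained encoder):

```python
import numpy as np

def tcn_triplet_loss(anchor, positive, negative, margin=0.2):
    """Time-contrastive triplet loss over batches of embeddings.
    Pulls the anchor toward the same-time, different-view positive and
    pushes it away from a temporally distant negative by at least `margin`."""
    # Squared Euclidean distances per example in the batch.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    # Hinge: zero loss once the negative is margin-farther than the positive.
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

The multi-view positive is what makes the representation viewpoint-invariant while the temporal negative keeps it discriminative across time.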

Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank

no code implementations • 20 Nov 2015 • Corey Lynch, Kamelia Aryafar, Josh Attenberg

As a result, the task of ranking search results automatically (learning to rank) is a multibillion dollar machine learning problem.

