Search Results for author: Joseph J. Lim

Found 47 papers, 14 papers with code

IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks

1 code implementation · 17 Nov 2019 · Youngwoon Lee, Edward S. Hu, Zhengyu Yang, Alex Yin, Joseph J. Lim

The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.

Industrial Robots · reinforcement-learning +2

Accelerating Reinforcement Learning with Learned Skill Priors

2 code implementations · 22 Oct 2020 · Karl Pertsch, Youngwoon Lee, Joseph J. Lim

We validate our approach, SPiRL (Skill-Prior RL), on complex navigation and robotic manipulation tasks and show that learned skill priors are essential for effective skill transfer from rich datasets.

reinforcement-learning · Reinforcement Learning (RL)
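The skill-prior idea above can be sketched numerically: a SAC-style actor objective swaps its usual entropy bonus for a KL penalty pulling the policy toward the learned prior. The diagonal Gaussians and the `alpha` weight below are toy stand-ins for illustration, not SPiRL's implementation.

```python
import numpy as np

def kl_gaussian(mu_q, std_q, mu_p, std_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return np.sum(np.log(std_p / std_q)
                  + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2 * std_p ** 2)
                  - 0.5)

def prior_regularized_objective(q_value, mu_pi, std_pi,
                                mu_prior, std_prior, alpha=0.1):
    """SAC-style actor objective with the entropy bonus replaced by a
    KL penalty toward the skill prior; maximized w.r.t. the policy."""
    return q_value - alpha * kl_gaussian(mu_pi, std_pi, mu_prior, std_prior)

mu, std = np.zeros(4), np.ones(4)
# A policy that matches the prior pays no penalty...
assert np.isclose(prior_regularized_objective(1.0, mu, std, mu, std), 1.0)
# ...while one that deviates from it is penalized.
assert prior_regularized_objective(1.0, mu + 2.0, std, mu, std) < 1.0
```

The penalty steers exploration toward behaviors the prior considers likely, instead of uniformly random actions.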

FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation

1 code implementation · 22 May 2023 · Minho Heo, Youngwoon Lee, Doohyun Lee, Joseph J. Lim

We benchmark the performance of offline RL and IL algorithms on our assembly tasks and demonstrate the need to improve such algorithms to be able to solve our tasks in the real world, providing ample opportunities for future research.

Imitation Learning · Motion Planning +4

Learning to Coordinate Manipulation Skills via Skill Behavior Diversification

1 code implementation · ICLR 2020 · Youngwoon Lee, Jingyun Yang, Joseph J. Lim

When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together.

Learning to Synthesize Programs as Interpretable and Generalizable Policies

1 code implementation · NeurIPS 2021 · Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, Joseph J. Lim

To alleviate the difficulty of learning to compose programs to induce the desired agent behavior from scratch, we propose to first learn a program embedding space that continuously parameterizes diverse behaviors in an unsupervised manner and then search over the learned program embedding space to yield a program that maximizes the return for a given task.

Program Synthesis
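The search stage described in the abstract above can be illustrated with a cross-entropy-method loop over a latent space. The `decode` and `task_return` functions below are toy stand-ins for the learned program decoder and the task reward, not the paper's components.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    # Stand-in for the learned program decoder: here, just the identity.
    return z

def task_return(program):
    # Toy black-box return: peaks when the decoded "program" hits a target.
    target = np.array([0.5, -0.3])
    return -np.sum((program - target) ** 2)

def cem_search(dim=2, pop=64, elite=8, iters=30):
    """Cross-entropy-method search over a program embedding space:
    sample latents, decode, evaluate the return, refit to the elites."""
    mu, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        zs = rng.normal(mu, std, size=(pop, dim))
        scores = np.array([task_return(decode(z)) for z in zs])
        elites = zs[np.argsort(scores)[-elite:]]
        mu, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

best = cem_search()
assert task_return(decode(best)) > -1e-2  # converged near the optimum
```

Searching in a smooth embedding space sidesteps composing discrete program tokens from scratch, which is the paper's motivation for learning the space first.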

Generalization to New Actions in Reinforcement Learning

2 code implementations · ICML 2020 · Ayush Jain, Andrew Szot, Joseph J. Lim

A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances, such as making decisions from new action choices.

reinforcement-learning · Reinforcement Learning (RL) +1

Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation

2 code implementations · NeurIPS 2019 · Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim

Model-agnostic meta-learners aim to acquire meta-learned parameters from similar tasks to adapt to novel tasks from the same distribution with few gradient updates.

Few-Shot Image Classification · Few-Shot Learning +3
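The model-agnostic meta-learning setup above can be sketched with an inner adaptation loop on toy linear regression. The elementwise `modulation` vector is a loose stand-in for MMAML's task-aware modulation of the shared initialization; everything else is generic MAML-style adaptation, not the paper's network.

```python
import numpy as np

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

def adapt(w_meta, modulation, X, y, lr=0.1, steps=5):
    """Inner loop: start from a (task-modulated) meta-learned
    initialization and take a few gradient steps on the task's data."""
    w = w_meta * modulation
    for _ in range(steps):
        w = w - lr * grad(w, X, y)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w_meta = np.zeros(3)
w_adapted = adapt(w_meta, np.ones(3), X, y)
assert loss(w_adapted, X, y) < loss(w_meta, X, y)  # few steps suffice
```

In the multimodal setting, the modulation lets tasks from different modes start adaptation from effectively different initializations.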

Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning

2 code implementations · 16 Sep 2016 · Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, Ali Farhadi

To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine.

3D Reconstruction · Feature Engineering +3

Policy Transfer across Visual and Dynamics Domain Gaps via Iterative Grounding

1 code implementation · 1 Jul 2021 · Grace Zhang, Linghan Zhong, Youngwoon Lee, Joseph J. Lim

In this paper, we propose a novel policy transfer method with iterative "environment grounding", IDAPT, that alternates between (1) directly minimizing both visual and dynamics domain gaps by grounding the source environment in the target environment domains, and (2) training a policy on the grounded source environment.

Scaling simulation-to-real transfer by learning composable robot skills

1 code implementation · 26 Sep 2018 · Ryan Julian, Eric Heiden, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.

Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

1 code implementation · 4 Oct 2018 · Zhanpeng He, Ryan Julian, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph J. Lim, Gaurav Sukhatme, Karol Hausman

We complete unseen tasks by choosing new sequences of skill latents to control the robot using MPC, where our MPC model is composed of the pre-trained skill policy executed in the simulation environment, run in parallel with the real robot.

Model Predictive Control · Zero-shot Generalization
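The MPC-over-skill-latents scheme above can be sketched on a toy 1-D point environment. The fixed per-skill offsets stand in for the pre-trained skill policy run inside the simulator; the exhaustive short-horizon search is a minimal stand-in for the paper's MPC model, under those assumptions.

```python
import itertools

# Toy 1-D environment: each "skill latent" shifts the state by a fixed
# offset. The skill policy + simulator are collapsed into this lookup.
SKILLS = [-1.0, 0.5, 1.0]

def simulate(state, skill):
    return state + skill

def mpc_first_skill(state, goal, horizon=3):
    """Roll out every skill sequence of length `horizon` in the
    simulator, score it by accumulated distance to the goal, and
    return the first skill of the best sequence (standard MPC)."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(SKILLS, repeat=horizon):
        s, cost = state, 0.0
        for skill in seq:
            s = simulate(s, skill)
            cost += abs(s - goal)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

state, goal = 0.0, 2.5
for _ in range(6):  # replan after each executed skill
    state = simulate(state, mpc_first_skill(state, goal))
assert abs(state - goal) < 0.3
```

Replanning after every executed skill is what lets new skill sequences be composed at test time without retraining.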

Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation

1 code implementation · 11 Nov 2021 · I-Chun Arthur Liu, Shagun Uppal, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert, Youngwoon Lee

Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations.

Imitation Learning · Motion Planning +3

3D Interpreter Networks for Viewer-Centered Wireframe Modeling

no code implementations · 3 Apr 2018 · Jiajun Wu, Tianfan Xue, Joseph J. Lim, Yuandong Tian, Joshua B. Tenenbaum, Antonio Torralba, William T. Freeman

3D-INN is trained on real images to estimate 2D keypoint heatmaps from an input image; it then predicts 3D object structure from heatmaps using knowledge learned from synthetic 3D shapes.

Image Retrieval · Keypoint Estimation +2
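The first stage of the pipeline above, turning per-keypoint heatmaps into 2D coordinates, can be sketched by taking each map's argmax. This is only a generic decoding step; the heatmap predictor and the 3D structure stage of 3D-INN are omitted.

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Convert per-keypoint 2D heatmaps of shape (k, h, w) into (x, y)
    coordinates by locating each map's maximum response."""
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    ys, xs = np.divmod(flat, w)          # flat index = y * w + x
    return np.stack([xs, ys], axis=1)

hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 1.0   # keypoint 0 peaks at (x=5, y=3)
hm[1, 6, 1] = 1.0   # keypoint 1 peaks at (x=1, y=6)
kps = heatmaps_to_keypoints(hm)
assert (kps == np.array([[5, 3], [1, 6]])).all()
```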

Unsupervised Visual-Linguistic Reference Resolution in Instructional Videos

no code implementations · CVPR 2017 · De-An Huang, Joseph J. Lim, Li Fei-Fei, Juan Carlos Niebles

We propose an unsupervised method for reference resolution in instructional videos, where the goal is to temporally link an entity (e.g., "dressing") to the action (e.g., "mix yogurt") that produced it.

Referring Expression

Single Image 3D Interpreter Network

1 code implementation · 29 Apr 2016 · Jiajun Wu, Tianfan Xue, Joseph J. Lim, Yuandong Tian, Joshua B. Tenenbaum, Antonio Torralba, William T. Freeman

In this work, we propose 3D INterpreter Network (3D-INN), an end-to-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data.

Image Retrieval · Keypoint Estimation +2

Auto-conditioned Recurrent Mixture Density Networks for Learning Generalizable Robot Skills

no code implementations · 29 Sep 2018 · Hejia Zhang, Eric Heiden, Stefanos Nikolaidis, Joseph J. Lim, Gaurav S. Sukhatme

Personal robots assisting humans must perform complex manipulation tasks that are typically difficult to specify in traditional motion planning pipelines, where multiple objectives must be met and the high-level context must be taken into consideration.

Motion Planning

Toward Multimodal Model-Agnostic Meta-Learning

no code implementations · 18 Dec 2018 · Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim

One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from.

Few-Shot Image Classification · Meta-Learning

Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning

no code implementations · NeurIPS 2015 · Jiajun Wu, Ilker Yildirim, Joseph J. Lim, Bill Freeman, Josh Tenenbaum

Humans demonstrate remarkable abilities to predict physical events in dynamic scenes, and to infer the physical properties of objects from static images.

Friction · Scene Understanding

Sketch Tokens: A Learned Mid-level Representation for Contour and Object Detection

no code implementations · CVPR 2013 · Joseph J. Lim, C. L. Zitnick, Piotr Dollar

Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand drawn contours in images.

Contour Detection · object-detection +1

Looking Beyond the Visible Scene

no code implementations · CVPR 2014 · Aditya Khosla, Byoungkwon An, Joseph J. Lim, Antonio Torralba

In this work, we propose to look beyond the visible elements of a scene; we demonstrate that a scene is not just a collection of objects and their configuration or the labels assigned to its pixels - it is so much more.

Scene Understanding

Discovering States and Transformations in Image Collections

no code implementations · CVPR 2015 · Phillip Isola, Joseph J. Lim, Edward H. Adelson

Our system works by generalizing across object classes: states and transformations learned on one set of objects are used to interpret the image collection for an entirely new object class.

Object

Program Guided Agent

no code implementations · ICLR 2020 · Shao-Hua Sun, Te-Lin Wu, Joseph J. Lim

Developing agents that can learn to follow natural language instructions has been an emerging research direction.

Zero-shot Generalization

Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments

no code implementations · 22 Oct 2020 · Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max Pflueger, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert

In contrast, motion planners use explicit models of the agent and environment to plan collision-free paths to faraway goals, but suffer from inaccurate models in tasks that require contacts with the environment.

reinforcement-learning · Reinforcement Learning (RL) +1

Message Passing Adaptive Resonance Theory for Online Active Semi-supervised Learning

no code implementations · 2 Dec 2020 · Taehyeong Kim, Injune Hwang, Hyundo Lee, Hyunseo Kim, Won-Seok Choi, Joseph J. Lim, Byoung-Tak Zhang

Active learning is widely used to reduce labeling effort and training time by repeatedly querying only the most beneficial samples from unlabeled data.

Active Learning
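The query step described above, selecting only the most beneficial unlabeled samples, can be illustrated with generic entropy-based uncertainty sampling. This is a standard active-learning baseline rule, not the paper's message-passing ART method.

```python
import numpy as np

def uncertainty_query(probs, budget=2):
    """Pick the `budget` unlabeled samples whose predicted class
    distribution has the highest entropy (most uncertain)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-budget:]

probs = np.array([
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # uncertain
    [0.90, 0.10],   # fairly confident
    [0.50, 0.50],   # maximally uncertain
])
picked = set(uncertainty_query(probs))
assert picked == {1, 3}  # the two least confident samples are queried
```

Labeling only such samples is what reduces annotation effort relative to labeling the whole pool.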

Demonstration-Guided Reinforcement Learning with Learned Skills

no code implementations · ICLR Workshop SSL-RL 2021 · Karl Pertsch, Youngwoon Lee, Yue Wu, Joseph J. Lim

Prior approaches for demonstration-guided RL treat every new task as an independent learning problem and attempt to follow the provided demonstrations step-by-step, akin to a human trying to imitate a completely unseen behavior by following the demonstrator's exact muscle movements.

reinforcement-learning · Reinforcement Learning (RL) +1

Adversarial Skill Chaining for Long-Horizon Robot Manipulation via Terminal State Regularization

no code implementations · 15 Nov 2021 · Youngwoon Lee, Joseph J. Lim, Anima Anandkumar, Yuke Zhu

However, these approaches require larger state distributions to be covered as more policies are sequenced, and thus are limited to short skill sequences.

Reinforcement Learning (RL) · Robot Manipulation

Model-Agnostic Meta-Learning for Multimodal Task Distributions

no code implementations · 27 Sep 2018 · Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim

In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates.

Few-Shot Image Classification · Meta-Learning

Generalizing Reinforcement Learning to Unseen Actions

no code implementations · 25 Sep 2019 · Ayush Jain*, Andrew Szot*, Jincheng Zhou, Joseph J. Lim

Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning.

Decision Making · reinforcement-learning +3

Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction

no code implementations · 25 Sep 2019 · Karl Pertsch, Oleh Rybkin, Jingyun Yang, Konstantinos G. Derpanis, Kostas Daniilidis, Joseph J. Lim, Andrew Jaegle

To flexibly and efficiently reason about temporal sequences, abstract representations that compactly represent the important information in the sequence are needed.

Temporal Sequences

Task-Induced Representation Learning

no code implementations · ICLR 2022 · Jun Yamada, Karl Pertsch, Anisha Gunjal, Joseph J. Lim

We investigate the effectiveness of unsupervised and task-induced representation learning approaches on four visually complex environments, from Distracting DMControl to the CARLA driving simulator.

Contrastive Learning · Imitation Learning +2

Skill-based Model-based Reinforcement Learning

no code implementations · 15 Jul 2022 · Lucy Xiaoyang Shi, Joseph J. Lim, Youngwoon Lee

From this intuition, we propose a Skill-based Model-based RL framework (SkiMo) that enables planning in the skill space using a skill dynamics model, which directly predicts the skill outcomes, rather than predicting all small details in the intermediate states, step by step.

Model-based Reinforcement Learning · reinforcement-learning +1
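The skill dynamics idea above, predicting a skill's outcome directly rather than every intermediate state, can be shown with linear toy dynamics. The `H`-step skill and the 0.1 step size are illustrative assumptions, not SkiMo's learned models.

```python
import numpy as np

H = 10  # environment steps per skill

def flat_dynamics(s, a):
    # Single-step dynamics: a small move in the action's direction.
    return s + 0.1 * a

def skill_dynamics(s, z):
    """Skill dynamics model: directly predicts the state after a whole
    H-step skill `z`, skipping all intermediate states."""
    return s + 0.1 * H * z

s0, z = np.array([0.0, 0.0]), np.array([1.0, -0.5])
s_flat = s0
for _ in range(H):              # step-by-step rollout of the same skill
    s_flat = flat_dynamics(s_flat, z)
assert np.allclose(skill_dynamics(s0, z), s_flat)  # one call, same outcome
```

Because one model call covers H environment steps, planning horizons in skill space are H times shorter than in the flat action space.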

PATO: Policy Assisted TeleOperation for Scalable Robot Data Collection

no code implementations · 9 Dec 2022 · Shivin Dass, Karl Pertsch, Hejia Zhang, Youngwoon Lee, Joseph J. Lim, Stefanos Nikolaidis

Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research.

Cross-Domain Transfer via Semantic Skill Imitation

no code implementations · 14 Dec 2022 · Karl Pertsch, Ruta Desai, Vikash Kumar, Franziska Meier, Joseph J. Lim, Dhruv Batra, Akshara Rai

We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g. a robotic manipulator in a simulated kitchen.

Reinforcement Learning (RL) · Robot Manipulation

Efficient Multi-Task Reinforcement Learning via Selective Behavior Sharing

no code implementations · 1 Feb 2023 · Grace Zhang, Ayush Jain, Injune Hwang, Shao-Hua Sun, Joseph J. Lim

The ability to leverage shared behaviors between tasks is critical for sample-efficient multi-task reinforcement learning (MTRL).

reinforcement-learning · Reinforcement Learning (RL)

Hierarchical Neural Program Synthesis

no code implementations · 9 Mar 2023 · Linghan Zhong, Ryan Lindeborg, Jesse Zhang, Joseph J. Lim, Shao-Hua Sun

Then, we train a high-level module to comprehend the task specification (e.g., input/output pairs or demonstrations) from long programs and produce a sequence of task embeddings, which are then decoded by the program decoder and composed to yield the synthesized program.

Program Synthesis

SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling

no code implementations · 20 Jun 2023 · Jesse Zhang, Karl Pertsch, Jiahui Zhang, Joseph J. Lim

Pre-training robot policies with a rich set of skills can substantially accelerate the learning of downstream tasks.

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

no code implementations · 16 Oct 2023 · Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

Instead, our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set.

Language Modelling · Large Language Model

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers

no code implementations · 14 Dec 2023 · Taewook Nam, Juyong Lee, Jesse Zhang, Sung Ju Hwang, Joseph J. Lim, Karl Pertsch

We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human feedback.

Language Modelling · reinforcement-learning +1
