Search Results for author: Jesse Zhang

Found 20 papers, 6 papers with code

EXTRACT: Efficient Policy Learning by Extracting Transferable Robot Skills from Offline Data

no code implementations • 25 Jun 2024 • Jesse Zhang, Minho Heo, Zuxin Liu, Erdem Biyik, Joseph J. Lim, Yao Liu, Rasool Fakoor

Prior work in skill-based RL either requires expert supervision to define useful skills, which is hard to scale, or learns a skill-space from offline data with heuristics that limit the adaptability of the skills, making them difficult to transfer during downstream RL.

Reinforcement Learning (RL) • Robot Manipulation

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

no code implementations • 6 Feb 2024 • YuFei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Biyik, David Held, Zackory Erickson

Reward engineering has long been a challenge in Reinforcement Learning (RL) research, as it often requires extensive human effort and iterative processes of trial-and-error to design effective reward functions.

reinforcement-learning • Reinforcement Learning (RL)
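The title points at the core recipe: learn a reward model from pairwise preferences, with a vision-language model rather than a human supplying the labels. Below is a hedged sketch of that preference loss; the Bradley-Terry objective is standard preference learning, and treating `vlm_prefers_a` as a VLM-produced label is an assumption about the setup, not the paper's exact API.

```python
import torch
import torch.nn as nn

def preference_loss(reward_net: nn.Module, obs_a: torch.Tensor,
                    obs_b: torch.Tensor, vlm_prefers_a: bool) -> torch.Tensor:
    """Bradley-Terry preference loss; the label comes from a
    (hypothetical) VLM query instead of a human annotator."""
    r_a = reward_net(obs_a).sum()  # scalar return estimate for segment a
    r_b = reward_net(obs_b).sum()  # scalar return estimate for segment b
    logits = torch.stack([r_a, r_b]).unsqueeze(0)      # shape [1, 2]
    label = torch.tensor([0 if vlm_prefers_a else 1])  # preferred index
    return nn.functional.cross_entropy(logits, label)
```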

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers

no code implementations • 14 Dec 2023 • Taewook Nam, Juyong Lee, Jesse Zhang, Sung Ju Hwang, Joseph J. Lim, Karl Pertsch

We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human feedback.

Language Modelling • reinforcement-learning • +2
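One concrete way to instantiate a foundation-model teacher is to reward the agent with the similarity a frozen vision-language model assigns to the pair (current frame, language goal). The CLIP-based sketch below is an illustrative stand-in, not necessarily LiFT's actual feedback mechanism.

```python
import torch
import clip  # OpenAI CLIP, one possible choice of frozen VLM
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def vlm_reward(frame: Image.Image, instruction: str) -> float:
    """Cosine similarity between the agent's camera frame and the
    instruction text, used as a dense reward signal."""
    image = preprocess(frame).unsqueeze(0).to(device)
    text = clip.tokenize([instruction]).to(device)
    with torch.no_grad():
        img_f = model.encode_image(image)
        txt_f = model.encode_text(text)
        img_f = img_f / img_f.norm(dim=-1, keepdim=True)
        txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    return (img_f @ txt_f.T).item()
```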

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

no code implementations • 16 Oct 2023 • Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

Instead, our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set.

Language Modelling • Large Language Model
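The bootstrapping loop the abstract describes can be sketched schematically: chain existing skills under LLM guidance, then fold successful chains back into the library as new, longer-horizon skills. Every callable below (`llm_propose_next_skill`, `execute`) is a hypothetical placeholder, not the BOSS codebase's API.

```python
def skill_bootstrapping(env, primitive_skills, llm_propose_next_skill,
                        execute, rounds=100):
    """Schematic BOSS-style loop: practice chains of skills without
    task reward and grow the skill library from completed chains."""
    library = list(primitive_skills)
    for _ in range(rounds):
        state, chain, done = env.reset(), [], False
        while not done:
            skill = llm_propose_next_skill(state, library)  # LLM guidance
            state, done = execute(env, skill)               # run skill to termination
            chain.append(skill)
        if len(chain) > 1:
            library.append(tuple(chain))  # the chain becomes a new skill
    return library
```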

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

no code implementations • 9 Oct 2023 • Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor

Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques, e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA), in TAIL to adapt large pretrained models for new tasks with limited demonstration data.

Continual Learning • Imitation Learning • +1
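Of the adapter methods the abstract lists, LoRA is the easiest to sketch: freeze the pretrained weight and learn only a low-rank additive update. The PyTorch snippet below is a generic illustration of that idea under assumed shapes, not TAIL's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer with a trainable low-rank
    update W + (alpha/r) * B @ A, as in LoRA (Hu et al., 2021)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
```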

SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling

no code implementations • 20 Jun 2023 • Jesse Zhang, Karl Pertsch, Jiahui Zhang, Joseph J. Lim

Pre-training robot policies with a rich set of skills can substantially accelerate the learning of downstream tasks.

Hierarchical Neural Program Synthesis

no code implementations • 9 Mar 2023 • Linghan Zhong, Ryan Lindeborg, Jesse Zhang, Joseph J. Lim, Shao-Hua Sun

Then, we train a high-level module to comprehend the task specification (e.g., input/output pairs or demonstrations) from long programs and produce a sequence of task embeddings, which are then decoded by the program decoder and composed to yield the synthesized program.

Decoder • Program Synthesis
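The two-level pipeline the abstract describes reduces to: spec, then a sequence of task embeddings, then subprograms, then the composed program. A thin schematic follows, with `high_level` and `program_decoder` as hypothetical callables standing in for the paper's trained modules.

```python
def hierarchical_synthesize(high_level, program_decoder, task_spec):
    """`high_level` maps a task spec (e.g., I/O pairs) to a sequence of
    task embeddings; `program_decoder` maps one embedding to a
    subprogram string. Both interfaces are assumptions."""
    task_embeddings = high_level(task_spec)
    subprograms = [program_decoder(e) for e in task_embeddings]
    return "\n".join(subprograms)  # compose subprograms into one program
```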

Learning to Synthesize Programs as Interpretable and Generalizable Policies

1 code implementation • NeurIPS 2021 • Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, Joseph J. Lim

To alleviate the difficulty of learning to compose programs to induce the desired agent behavior from scratch, we propose to first learn a program embedding space that continuously parameterizes diverse behaviors in an unsupervised manner and then search over the learned program embedding space to yield a program that maximizes the return for a given task.

Deep Reinforcement Learning • Program Synthesis
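Searching a learned, continuous program embedding space for a return-maximizing program can be done with a simple population-based optimizer. The cross-entropy-method sketch below assumes `evaluate_return` is a callback that decodes a latent into a program, executes it, and returns the episode return; the paper's actual search procedure may differ.

```python
import numpy as np

def cem_search(evaluate_return, dim, iters=50, pop=64, elite_frac=0.1):
    """Cross-entropy method over a latent program space: sample,
    evaluate, refit a diagonal Gaussian to the elites."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        z = mu + sigma * np.random.randn(pop, dim)        # candidate latents
        returns = np.array([evaluate_return(zi) for zi in z])
        elites = z[np.argsort(returns)[-n_elite:]]        # top performers
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu  # decode this latent to obtain the final program
```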

Hierarchical Reinforcement Learning By Discovering Intrinsic Options

1 code implementation • ICLR 2021 • Jesse Zhang, Haonan Yu, Wei Xu

We propose a hierarchical reinforcement learning method, HIDIO, that can learn task-agnostic options in a self-supervised manner while jointly learning to utilize them to solve sparse-reward tasks.

Hierarchical Reinforcement Learning • reinforcement-learning • +2
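A common way to make option discovery self-supervised, and a plausible reading of the abstract, is to reward the low-level policy for producing transitions from which a discriminator can identify the active option. The sketch below illustrates that intrinsic reward; shapes and conditioning are assumptions rather than HIDIO's exact formulation.

```python
import torch
import torch.nn as nn

class OptionDiscriminator(nn.Module):
    """Predicts which latent option generated a transition; its
    log-likelihood serves as the intrinsic reward."""
    def __init__(self, state_dim: int, n_options: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_options))

    def intrinsic_reward(self, state, next_state, option_idx):
        logits = self.net(torch.cat([state, next_state], dim=-1))
        log_probs = torch.log_softmax(logits, dim=-1)
        # Reward transitions that make the active option identifiable.
        return log_probs.gather(-1, option_idx.unsqueeze(-1)).squeeze(-1)
```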

COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning

1 code implementation • 27 Oct 2020 • Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine

Reinforcement learning has been applied to a wide variety of robotics problems, but most of such applications involve collecting data from scratch for each new task.

reinforcement-learning • Reinforcement Learning (RL)

TRECVID 2019: An Evaluation Campaign to Benchmark Video Activity Detection, Video Captioning and Matching, and Video Search & Retrieval

no code implementations • 21 Sep 2020 • George Awad, Asad A. Butt, Keith Curtis, Yooyoung Lee, Jonathan Fiscus, Afzal Godil, Andrew Delgado, Jesse Zhang, Eliot Godard, Lukas Diduch, Alan F. Smeaton, Yvette Graham, Wessel Kraaij, Georges Quenot

The TREC Video Retrieval Evaluation (TRECVID) 2019 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in research and development of content-based exploitation and retrieval of information from digital video via open, metrics-based evaluation.

Action Detection • Activity Detection • +5

Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings

1 code implementation • ICML 2020 • Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman

Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment.

reinforcement-learning • Reinforcement Learning • +1
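One standard way to act cautiously during adaptation is to plan pessimistically over an ensemble of learned models: score each candidate action under every ensemble member and keep the worst case. The sketch below conveys that idea generically; the paper's precise risk-averse objective may differ.

```python
import numpy as np

def cautious_action(ensemble_value_fns, state, candidate_actions):
    """Pick the action whose worst-case value across the ensemble is
    highest. `ensemble_value_fns` is a list of (state, action) -> float
    estimators (assumed interface)."""
    best_action, best_value = None, -np.inf
    for action in candidate_actions:
        values = [v(state, action) for v in ensemble_value_fns]
        pessimistic = min(values)  # worst case across the ensemble
        if pessimistic > best_value:
            best_action, best_value = action, pessimistic
    return best_action
```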

Unsupervised Projection Networks for Generative Adversarial Networks

no code implementations • 30 Sep 2019 • Daiyaan Arfeen, Jesse Zhang

We propose the use of unsupervised learning to train projection networks that project onto the latent space of an already trained generator.

Clustering • Image Super-Resolution
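Training a projection network amounts to fitting an encoder E so that the frozen generator G reproduces each image from its projected latent, i.e., minimizing ||G(E(x)) - x||. A minimal PyTorch sketch, with architectures and data pipeline left as assumptions:

```python
from itertools import cycle, islice

import torch
import torch.nn as nn

def train_projector(G: nn.Module, E: nn.Module, dataloader,
                    steps: int = 10_000, lr: float = 1e-4) -> nn.Module:
    """Fit encoder E to invert the frozen, pretrained generator G."""
    for p in G.parameters():
        p.requires_grad = False  # the generator stays fixed
    opt = torch.optim.Adam(E.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for x in islice(cycle(dataloader), steps):
        z = E(x)                 # project the image into latent space
        loss = loss_fn(G(z), x)  # reconstruction through the generator
        opt.zero_grad()
        loss.backward()
        opt.step()
    return E
```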

Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents

no code implementations • 25 Sep 2019 • Jesse Zhang, Brian Cheung, Chelsea Finn, Dinesh Jayaraman, Sergey Levine

We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?

Domain Adaptation • Meta Reinforcement Learning • +2

REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning

no code implementations • 17 May 2019 • Brian Yang, Jesse Zhang, Vitchyr Pong, Sergey Levine, Dinesh Jayaraman

We envision REPLAB as a framework for reproducible research across manipulation tasks, and as a step in this direction, we define a template for a grasping benchmark consisting of a task definition, evaluation protocol, performance measures, and a dataset of 92k grasp attempts.

Benchmarking • Deep Reinforcement Learning • +2

Porcupine Neural Networks: Approximating Neural Network Landscapes

no code implementations • NeurIPS 2018 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.

A Spectral Approach to Generalization and Optimization in Neural Networks

no code implementations • ICLR 2018 • Farzan Farnia, Jesse Zhang, David Tse

The recent success of deep neural networks stems from their ability to generalize well on real data; however, Zhang et al. have observed that neural networks can easily overfit random labels.

Porcupine Neural Networks: (Almost) All Local Optima are Global

1 code implementation • 5 Oct 2017 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.
