Search Results for author: Haoqi Yuan

Found 5 papers, 3 papers with code

RL-GPT: Integrating Reinforcement Learning and Code-as-policy

no code implementations · 29 Feb 2024 · Shaoteng Liu, Haoqi Yuan, Minda Hu, Yanwei Li, Yukang Chen, Shu Liu, Zongqing Lu, Jiaya Jia

To seamlessly integrate both modalities, we introduce a two-level hierarchical framework, RL-GPT, comprising a slow agent and a fast agent.

reinforcement-learning · Reinforcement Learning (RL)
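The slow-agent/fast-agent split described in the abstract can be sketched roughly as follows. All class names, method names, and subtasks here are hypothetical illustrations, not RL-GPT's actual interface; the sketch only shows the control flow of a slow planner choosing a modality per subtask and a fast executor acting on that choice.

```python
# Hypothetical sketch of a two-level slow/fast agent loop.
# In RL-GPT the slow agent is LLM-based and the fast agent mixes
# generated code with learned RL policies; here both are stubs.

class SlowAgent:
    """Decomposes a task into subtasks and decides, per subtask,
    whether to solve it with generated code or a learned RL policy."""
    def plan(self, task):
        # Toy decomposition: one code-solvable and one RL-solvable subtask.
        return [("collect_wood", "code"), ("craft_table", "rl")]

class FastAgent:
    """Executes each subtask with the modality the slow agent chose."""
    def execute(self, subtask, modality):
        if modality == "code":
            return f"ran code policy for {subtask}"
        return f"ran RL policy for {subtask}"

def run(task):
    slow, fast = SlowAgent(), FastAgent()
    return [fast.execute(sub, mode) for sub, mode in slow.plan(task)]

results = run("get a crafting table")
```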

Creative Agents: Empowering Agents with Imagination for Creative Tasks

1 code implementation · 5 Dec 2023 · Chi Zhang, Penglin Cai, Yuhui Fu, Haoqi Yuan, Zongqing Lu

We benchmark creative tasks with the challenging open-world game Minecraft, where the agents are asked to create diverse buildings given free-form language instructions.

Instruction Following · Language Modelling +1

Skill Reinforcement Learning and Planning for Open-World Long-Horizon Tasks

no code implementations · 29 Mar 2023 · Haoqi Yuan, Chi Zhang, Hongcheng Wang, Feiyang Xie, Penglin Cai, Hao Dong, Zongqing Lu

Our method outperforms baselines by a large margin and is the most sample-efficient demonstration-free RL method to solve Minecraft Tech Tree tasks.

Multi-Task Learning · reinforcement-learning +1

Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning

1 code implementation · 21 Jun 2022 · Haoqi Yuan, Zongqing Lu

We study offline meta-reinforcement learning, a practical reinforcement learning paradigm that learns from offline data to adapt to new tasks.

Contrastive Learning · Meta Reinforcement Learning +3
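The contrastive learning named in this entry's title and tags can be illustrated with a minimal InfoNCE-style loss, where encodings of transitions from the same task act as positives and encodings from other tasks as negatives. The dot-product similarity, temperature, and toy vectors below are illustrative assumptions, not the paper's exact objective.

```python
# Hypothetical InfoNCE-style sketch: an encoder trained with this loss
# is pushed to give same-task transitions similar representations and
# different-task transitions dissimilar ones.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a, pos)/t) / (exp(sim(a, pos)/t) + sum_j exp(sim(a, neg_j)/t)) )"""
    pos = math.exp(dot(anchor, positive) / temperature)
    neg = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# A well-matched same-task pair yields a much lower loss than treating a
# different-task encoding as the positive.
z_anchor = [1.0, 0.0]           # encoding of a transition from task A
z_same = [0.9, 0.1]             # another transition from task A
z_other = [[0.0, 1.0], [-1.0, 0.0]]  # transitions from other tasks

loss_matched = info_nce(z_anchor, z_same, z_other)
loss_mismatched = info_nce(z_anchor, z_other[0], [z_same, z_other[1]])
```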

DLGAN: Disentangling Label-Specific Fine-Grained Features for Image Manipulation

1 code implementation · 22 Nov 2019 · Guanqi Zhan, Yihao Zhao, Bingchan Zhao, Haoqi Yuan, Baoquan Chen, Hao Dong

By mapping the discrete label-specific attribute features into a continuous prior distribution, we leverage the advantages of both discrete labels and reference images to achieve image manipulation in a hybrid fashion.

Attribute · Image Manipulation +1
