Search Results for author: Lisheng Wu

Found 7 papers, 2 with code

Goal Exploration via Adaptive Skill Distribution for Goal-Conditioned Reinforcement Learning

no code implementations · 19 Apr 2024 · Lisheng Wu, Ke Chen

Exploration efficiency poses a significant challenge in goal-conditioned reinforcement learning (GCRL) tasks, particularly those with long horizons and sparse rewards.

Bias Resilient Multi-Step Off-Policy Goal-Conditioned Reinforcement Learning

no code implementations · 29 Nov 2023 · Lisheng Wu, Ke Chen

In goal-conditioned reinforcement learning (GCRL), sparse rewards present significant challenges, often obstructing efficient learning.

Tasks: Reinforcement Learning

Multi-View Reinforcement Learning

1 code implementation · NeurIPS 2019 · Minne Li, Lisheng Wu, Haitham Bou Ammar, Jun Wang

This paper is concerned with multi-view reinforcement learning (MVRL), which allows for decision making when agents share common dynamics but adhere to different observation models.

Tasks: Decision Making · Reinforcement Learning · +1
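The MVRL setting above can be illustrated with a toy sketch: a single underlying environment with shared dynamics, observed through two different observation models. This is a hypothetical minimal example for intuition only, not the paper's implementation; all names (`SharedDynamicsEnv`, `view_exact`, `view_noisy`) and the chain-world dynamics are assumptions.

```python
import random


class SharedDynamicsEnv:
    """Toy 1-D chain world: all agents share these dynamics,
    but each agent may observe the state through a different model."""

    def __init__(self, size=5, start=None):
        self.size = size
        self.state = size // 2 if start is None else start

    def step(self, action):
        # action is -1 (left) or +1 (right); state stays on the chain
        self.state = max(0, min(self.size - 1, self.state + action))
        return self.state


def view_exact(state):
    """View A: observes the shared state directly."""
    return state


def view_noisy(state, rng):
    """View B: same underlying state, a different (noisy) observation model."""
    return state + rng.choice([-1, 0, 1])


env = SharedDynamicsEnv(size=5)       # shared dynamics
s = env.step(+1)                       # state moves from 2 to 3
obs_a = view_exact(s)                  # exact view sees 3
obs_b = view_noisy(s, random.Random(0))  # noisy view sees 3 plus noise
```

Both views are driven by the same transition function, which is the structural assumption MVRL exploits when combining observations for decision making.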

Learning Shared Dynamics with Meta-World Models

no code implementations · 5 Nov 2018 · Lisheng Wu, Minne Li, Jun Wang

Humans possess consciousness, the ability to perceive events and objects: a mental model of the world, developed even from the most impoverished visual stimuli, enables humans to make rapid decisions and take action.

Tasks: Atari Games · Multi-Task Learning

Learning to Communicate Implicitly By Actions

no code implementations · 10 Oct 2018 · Zheng Tian, Shihao Zou, Ian Davies, Tim Warr, Lisheng Wu, Haitham Bou Ammar, Jun Wang

The auxiliary reward for communication is integrated into the learning of the policy module.
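One common way such an auxiliary reward can be folded into policy learning is to add it, with a weighting coefficient, to the per-step task reward before computing returns. The sketch below is a generic illustration of that pattern, not the paper's method; the weighting `beta`, discount `gamma`, and function name are all assumptions.

```python
def shaped_return(task_rewards, comm_rewards, beta=0.1, gamma=0.99):
    """Discounted return where each step's reward is the task reward
    plus a weighted auxiliary communication reward.

    beta  -- hypothetical weight on the auxiliary signal
    gamma -- discount factor
    """
    g = 0.0
    # accumulate from the last step backwards, as in standard return computation
    for r_task, r_comm in zip(reversed(task_rewards), reversed(comm_rewards)):
        g = (r_task + beta * r_comm) + gamma * g
    return g


# Example: two steps, auxiliary reward only at the second step
g = shaped_return([1.0, 0.0], [0.0, 1.0], beta=0.5, gamma=1.0)
```

The policy module then optimizes this shaped return instead of the raw task return, so the communication signal shapes behavior without changing the underlying task.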

Unsupervised Deep Domain Adaptation for Pedestrian Detection

no code implementations · 9 Feb 2018 · Lihang Liu, Weiyao Lin, Lisheng Wu, Yong Yu, Michael Ying Yang

This paper addresses the problem of unsupervised domain adaptation on the task of pedestrian detection in crowded scenes.

Tasks: Pedestrian Detection · Unsupervised Domain Adaptation
