Search Results for author: Yihuan Mao

Found 5 papers, 1 paper with code

MOORe: Model-based Offline-to-Online Reinforcement Learning

no code implementations · 25 Jan 2022 · Yihuan Mao, Chao Wang, Bin Wang, Chongjie Zhang

With the success of offline reinforcement learning (RL), offline trained RL policies have the potential to be further improved when deployed online.

reinforcement-learning

SEIHAI: A Sample-efficient Hierarchical AI for the MineRL Competition

no code implementations · 17 Nov 2021 · Hangyu Mao, Chao Wang, Xiaotian Hao, Yihuan Mao, Yiming Lu, Chengjie WU, Jianye Hao, Dong Li, Pingzhong Tang

The MineRL competition is designed for the development of reinforcement learning and imitation learning algorithms that can efficiently leverage human demonstrations to drastically reduce the number of environment interactions needed to solve the complex ObtainDiamond task with sparse rewards.

Imitation Learning · reinforcement-learning

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression

no code implementations · COLING 2020 · Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai

BERT is a cutting-edge language representation model pre-trained on a large corpus, which achieves superior performance on various natural language understanding tasks.

Knowledge Distillation · Model Compression · +1

CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark

3 code implementations · CVPR 2019 · Jiefeng Li, Can Wang, Hao Zhu, Yihuan Mao, Hao-Shu Fang, Cewu Lu

In this paper, we propose a novel and efficient method for pose estimation in crowded scenes, along with a new dataset to better evaluate such algorithms.

Keypoint Detection · Multi-Person Pose Estimation
