no code implementations • 15 Jun 2022 • Wei Fu, Chao Yu, Zelai Xu, Jiaqi Yang, Yi Wu
Despite all their advantages, we revisit these two principles and show that in certain scenarios, e.g., environments with a highly multi-modal reward landscape, value decomposition and parameter sharing can be problematic and lead to undesired outcomes.
1 code implementation • 3 Apr 2022 • Hao Wang, Tai-Wei Chang, Tianqiao Liu, Jianmin Huang, Zhichao Chen, Chao Yu, Ruopeng Li, Wei Chu
In this paper, we theoretically demonstrate that ESMM suffers from the following two problems: (1) Inherent Estimation Bias (IEB), where the estimated CVR of ESMM is inherently higher than the ground truth; and (2) Potential Independence Priority (PIP) for CTCVR estimation, where there is a risk that ESMM overlooks the causality from click to conversion.
no code implementations • 2 Apr 2022 • Chao Yu, Yi Shen, Yue Mao, Longjun Cai
Hierarchical Text Classification (HTC) is a challenging task where a document can be assigned to multiple hierarchically structured categories within a taxonomy.
no code implementations • 28 Mar 2022 • Jie Li, Chao Yu, Yan Luo, Yifei Sun, Rui Wang
Relying on the passive sensing system, a dataset of received signals covering three types of hand gestures is collected, using Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) paths as the reference channel, respectively.
no code implementations • 18 Dec 2021 • Mu Jin, Zhihao Ma, Kebing Jin, Hankz Hankui Zhuo, Chen Chen, Chao Yu
Despite achieving great success in real-life applications, Deep Reinforcement Learning (DRL) still suffers from three critical issues: data inefficiency, lack of interpretability, and poor transferability.
no code implementations • 12 Dec 2021 • Weilin Liu, Ye Mu, Chao Yu, Xuefei Ning, Zhong Cao, Yi Wu, Shuang Liang, Huazhong Yang, Yu Wang
These scenarios indeed correspond to vulnerabilities of the driving policies under test, and are thus meaningful for their further improvement.
no code implementations • 18 Nov 2021 • Xuejing Zheng, Chao Yu, Chen Chen, Jianye Hao, Hankz Hankui Zhuo
In this paper, we propose Lifelong reinforcement learning with Sequential linear temporal logic formulas and Reward Machines (LSRM), which enables an agent to leverage previously learned knowledge to accelerate the learning of logically specified tasks.
1 code implementation • NeurIPS 2021 • Zifan Wu, Chao Yu, Deheng Ye, Junge Zhang, Haiyin Piao, Hankz Hankui Zhuo
We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting.
no code implementations • 12 Oct 2021 • Chao Yu, Xinyi Yang, Jiaxuan Gao, Huazhong Yang, Yu Wang, Yi Wu
In this paper, we extend the state-of-the-art single-agent visual navigation method, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based planning module, Multi-agent Spatial Planner (MSP). MSP leverages a transformer-based architecture, Spatial-TeamFormer, which effectively captures spatial relations and intra-agent interactions via hierarchical spatial self-attentions.
no code implementations • 9 May 2021 • Sihang Chen, Weiqi Luo, Chao Yu
In recent years, quantitative investment methods combined with artificial intelligence have attracted increasing attention from investors and researchers.
no code implementations • 10 Mar 2021 • Zheng-Ping Li, Jun-Tian Ye, Xin Huang, Peng-Yu Jiang, Yuan Cao, Yu Hong, Chao Yu, Jun Zhang, Qiang Zhang, Cheng-Zhi Peng, Feihu Xu, Jian-Wei Pan
Long-range active imaging has widespread applications in remote sensing and target recognition.
2 code implementations • ICLR 2021 • Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Du, Yu Wang, Yi Wu
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games.
6 code implementations • 2 Mar 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, Yi Wu
Proximal Policy Optimization (PPO) is a popular on-policy reinforcement learning algorithm but is significantly less utilized than off-policy learning algorithms in multi-agent settings.
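For context, the core of PPO that work like this carries into the multi-agent setting is the clipped surrogate objective. The following is a minimal NumPy sketch of that objective only, not the authors' implementation; the function name and variables are illustrative:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Clipped surrogate loss from PPO (returned as a loss to minimize).

    ratio:     pi_new(a|s) / pi_old(a|s) for the sampled actions
    advantage: estimated advantages for those actions
    """
    unclipped = ratio * advantage
    # Clipping the ratio to [1 - eps, 1 + eps] limits how far a single
    # update can move the policy away from the data-collecting policy.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Elementwise minimum keeps the pessimistic bound; negate to turn
    # the maximization objective into a loss.
    return -np.mean(np.minimum(unclipped, clipped))

# A ratio well above 1 + eps is clipped, capping the incentive:
loss = ppo_clip_loss(np.array([1.5, 0.9]), np.array([1.0, -1.0]))
```

The clipping is what makes PPO's on-policy updates stable enough to reuse each batch for several epochs.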
no code implementations • 4 Jan 2021 • Yue Mao, Yi Shen, Chao Yu, Longjun Cai
Some recent work has focused on solving a combination of two subtasks, e.g., extracting aspect terms along with sentiment polarities, or extracting aspect and opinion terms in pairs.
Ranked #2 on Aspect Sentiment Triplet Extraction on SemEval
no code implementations • 1 Jan 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, Yi Wu
We benchmark commonly used multi-agent deep reinforcement learning (MARL) algorithms on a variety of cooperative multi-agent games.
no code implementations • 24 May 2020 • Wenwu Xie, Jian Xiao, Jinxia Yang, Xin Peng, Chao Yu, Peng Zhu
Since the signal with stronger power must be demodulated first in successive interference cancellation (SIC) demodulation for non-orthogonal multiple access (NOMA) systems, the base station (BS) should inform the near user terminal (UT), which has been allocated higher power, of the modulation mode of the far user terminal.
no code implementations • 10 Nov 2019 • Chao Yu, Zhiguo Su
In this paper, a novel neural network activation function, called Symmetrical Gaussian Error Linear Unit (SGELU), is proposed to obtain high performance.
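For reference, the standard Gaussian Error Linear Unit that SGELU builds on is GELU(x) = x·Φ(x), with Φ the standard normal CDF; the symmetrical variant itself is defined in the paper and not reproduced here. A minimal sketch of the baseline GELU:

```python
import math

def gelu(x):
    """Standard GELU: x * Phi(x), where Phi is the standard normal CDF.

    Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), so the whole unit is
    expressible with math.erf alone.
    """
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike ReLU, GELU weights its input by the probability that a standard normal variable falls below it, giving a smooth, non-monotonic curve near zero.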
no code implementations • 22 Aug 2019 • Chao Yu, Jiming Liu, Shamim Nemati
As a subfield of machine learning, reinforcement learning (RL) aims at improving an agent's behavioural decision-making capabilities by using interaction experience with the world and evaluative feedback.
no code implementations • 10 Nov 2018 • Chao Yu, Tianpei Yang, Wenxuan Zhu, Dongxu Wang, Guangliang Li
Providing reinforcement learning agents with informationally rich human knowledge can dramatically improve various aspects of learning.
no code implementations • 9 Nov 2018 • Chao Yu
We then propose a hierarchical supervision framework to explicitly model the PoG, and describe, step by step, how to realize the core principle of the framework and compute the optimal PoG for a control problem.
2 code implementations • 22 Sep 2018 • Chao Yu, Zuxin Liu, Xinjun Liu, Fugui Xie, Yi Yang, Qi Wei, Qiao Fei
It is one of the state-of-the-art SLAM systems in highly dynamic environments.