2 code implementations • 6 Dec 2023 • Peng Sun, Bei Shi, Daiwei Yu, Tao Lin
Contemporary machine learning requires training large neural networks on massive datasets and thus faces the challenge of high computational demands.
no code implementations • NeurIPS 2021 • Yiming Gao, Bei Shi, Xueying Du, Liang Wang, Guangwei Chen, Zhenjie Lian, Fuhao Qiu, Guoan Han, Weixuan Wang, Deheng Ye, Qiang Fu, Wei Yang, Lanxiao Huang
Recently, many researchers have made notable progress in building AI systems for MOBA game playing with deep reinforcement learning, for games such as Dota 2 and Honor of Kings.
1 code implementation • 29 Dec 2020 • Zihao Fu, Wai Lam, Anthony Man-Cho So, Bei Shi
The experimental results show that our theoretical framework is applicable to general generation models and that our proposed rebalanced encoding approach alleviates the repetition problem significantly.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Zihao Fu, Bei Shi, Lidong Bing, Wai Lam
In our architecture, we reconstruct KB triples or texts via a closed-loop framework that links a generator and an extractor.
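A minimal sketch of the closed-loop idea above: a generator maps KB triples to text, an extractor maps the text back to triples, and a reconstruction signal can then be computed without aligned data. The rule-based generator and extractor below are illustrative stand-ins for the learned models, not the paper's implementation.

```python
# Toy closed-loop reconstruction: generate text from triples, extract triples
# back from the text, and measure how well the round trip recovers the input.

def generate(triples):
    """Generator: KB triples -> text (toy template-based stand-in)."""
    return " . ".join(f"{s} {r} {o}" for s, r, o in triples)

def extract(text):
    """Extractor: text -> KB triples (toy pattern-based stand-in)."""
    triples = []
    for sent in text.split(" . "):
        parts = sent.split()
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

def reconstruction_error(triples):
    """Closed-loop signal: fraction of input triples lost in the round trip."""
    recovered = set(extract(generate(triples)))
    return 1.0 - len(recovered & set(triples)) / len(triples)

kb = [("Paris", "capital_of", "France"), ("Tokyo", "capital_of", "Japan")]
print(reconstruction_error(kb))  # 0.0 for this toy round trip
```

In the learned setting, this round-trip discrepancy would serve as a training signal for both components rather than a hard check.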
no code implementations • NeurIPS 2020 • Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu
However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool; for example, OpenAI's Dota AI limits play to a pool of only 17 heroes.
no code implementations • 25 Nov 2020 • Deheng Ye, Guibin Chen, Peilin Zhao, Fuhao Qiu, Bo Yuan, Wen Zhang, Sheng Chen, Mingfei Sun, Xiaoqian Li, Siqin Li, Jing Liang, Zhenjie Lian, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang
Unlike prior attempts, we integrate the macro-strategy and the micromanagement of MOBA game playing into neural networks in a supervised and end-to-end manner.
1 code implementation • EMNLP 2020 • Zihao Fu, Bei Shi, Wai Lam, Lidong Bing, Zhiyuan Liu
This kind of data is much easier to obtain since it can be produced automatically.
no code implementations • 20 Dec 2019 • Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, Qiaobo Chen, Yinyuting Yin, Hao Zhang, Tengfei Shi, Liang Wang, Qiang Fu, Wei Yang, Lanxiao Huang
We study the reinforcement learning problem of complex action control in the Multi-player Online Battle Arena (MOBA) 1v1 games.
1 code implementation • ICLR 2019 • Meng Fang, Cheng Zhou, Bei Shi, Boqing Gong, Jia Xu, Tong Zhang
Dealing with sparse rewards is one of the most important challenges in reinforcement learning (RL), especially when a goal is dynamic (e.g., to grasp a moving object).
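For context, a common way to densify sparse rewards in goal-conditioned RL is hindsight relabeling: transitions from failed episodes are re-stored with an achieved state substituted as the goal. The sketch below shows only vanilla hindsight relabeling with a simplified, assumed transition format; how this work extends the idea to dynamic goals is not reproduced here.

```python
import random

def relabel_with_hindsight(trajectory, k=4):
    """Re-store each transition k times with a goal sampled from states
    achieved later in the same episode, so the sparse reward becomes
    informative. `trajectory` is a list of dicts with keys
    'state', 'action', 'next_state', 'goal' (a simplifying assumption)."""
    relabeled = []
    for t, step in enumerate(trajectory):
        for _ in range(k):
            # Pretend a future achieved state was the intended goal.
            future = random.choice(trajectory[t:])
            new_goal = future["next_state"]
            reward = 1.0 if step["next_state"] == new_goal else 0.0
            relabeled.append({**step, "goal": new_goal, "reward": reward})
    return relabeled
```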
no code implementations • ACL 2018 • Bei Shi, Zihao Fu, Lidong Bing, Wai Lam
Given reviews from different domains, some existing methods for word embeddings exploit sentiment information, but they cannot produce domain-sensitive embeddings.
2 code implementations • ACL 2018 • Xin Li, Lidong Bing, Wai Lam, Bei Shi
Between the two layers, we propose a component that generates target-specific representations of words in the sentence, together with a mechanism that preserves the original contextual information from the RNN layer.
Ranked #19 on Aspect-Based Sentiment Analysis (ABSA) on SemEval-2014 Task 4 (Laptop, Acc metric)
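A minimal PyTorch sketch of the idea in the entry above: contextual word states from an RNN layer are transformed conditioned on the target phrase, and a residual connection preserves the original contextual information. The dimensions and the exact form of the transformation are illustrative assumptions, not the paper's published architecture.

```python
import torch
import torch.nn as nn

class TargetSpecificTransform(nn.Module):
    """Transform each word's RNN hidden state conditioned on the target,
    while preserving the original context via a residual connection."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, words, target):
        # words:  (batch, seq_len, hidden)  contextual word states from the RNN
        # target: (batch, tgt_len, hidden)  hidden states of the target phrase
        # Attend from each word to the target phrase.
        scores = torch.bmm(words, target.transpose(1, 2))      # (b, seq, tgt)
        attn = torch.softmax(scores, dim=-1)
        target_ctx = torch.bmm(attn, target)                   # (b, seq, hidden)
        transformed = torch.tanh(
            self.proj(torch.cat([words, target_ctx], dim=-1))
        )
        # Context-preserving step: add back the original RNN representation.
        return transformed + words

layer = TargetSpecificTransform(hidden_dim=64)
out = layer(torch.randn(2, 10, 64), torch.randn(2, 3, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```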
no code implementations • 21 Jun 2017 • Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, Kwun Ping Lai
Word embedding models such as Skip-gram learn a vector-space representation for each word, based on the local word collocation patterns that are observed in a text corpus.
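As a concrete reference for the skip-gram model mentioned above, the snippet below trains such embeddings on a toy corpus with gensim (assuming gensim >= 4.0; the corpus and hyperparameters are illustrative, not those used in the paper).

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
corpus = [
    ["word", "embeddings", "capture", "local", "collocation", "patterns"],
    ["skip", "gram", "predicts", "context", "words", "from", "a", "center", "word"],
]

# sg=1 selects the skip-gram objective (sg=0 would be CBOW).
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned word vectors
    window=5,         # local collocation window
    sg=1,
    negative=5,       # negative sampling
    min_count=1,
)

print(model.wv["word"].shape)  # (50,)
```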