2 code implementations • EMNLP 2020 • Deming Ye, Yankai Lin, Jiaju Du, Zheng-Hao Liu, Peng Li, Maosong Sun, Zhiyuan Liu
Language representation models such as BERT can effectively capture contextual semantic information from plain text, and have been shown to achieve promising results on many downstream NLP tasks with appropriate fine-tuning.
Ranked #31 on Relation Extraction on DocRED
2 code implementations • ACL 2020 • Houyu Zhang, Zheng-Hao Liu, Chenyan Xiong, Zhiyuan Liu
Human conversations naturally evolve around related concepts and scatter across multi-hop concepts.
no code implementations • 6 Nov 2019 • Deming Ye, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Maosong Sun
Multi-paragraph reasoning is indispensable for open-domain question answering (OpenQA), yet it receives little attention in current OpenQA systems.
Ranked #58 on Question Answering on HotpotQA
4 code implementations • ACL 2019 • Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.
Ranked #59 on Relation Extraction on DocRED
no code implementations • 16 Apr 2019 • Yifan Qiao, Chenyan Xiong, Zheng-Hao Liu, Zhiyuan Liu
This paper studies the performance and behavior of BERT in ranking tasks.
no code implementations • 19 Sep 2018 • Mu Yang, Zheng-Hao Liu, Ze-Di Cheng, Jin-Shi Xu, Chuan-Feng Li, Guang-Can Guo
A well-trained deep neural network is shown to gain the capability of simultaneously restoring two kinds of images, each completely destroyed by a distinct scattering medium.