1 code implementation • Findings (ACL) 2022 • Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu
Open-domain question answering is used in a wide range of applications, such as web search and enterprise search, and usually takes clean text extracted from documents in various formats (e.g., web pages, PDFs, or Word documents) as its information source.
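A minimal sketch of the kind of text-cleaning step this setting assumes, shown here for HTML with BeautifulSoup (PDFs and Word files would need other parsers; this is not the authors' pipeline):

```python
# Turn a raw HTML page into clean text usable as a QA information source.
from bs4 import BeautifulSoup

def html_to_clean_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts and styles, which carry no answerable content.
    for tag in soup(["script", "style"]):
        tag.decompose()
    # Collapse all whitespace into single spaces.
    return " ".join(soup.get_text(separator=" ").split())

print(html_to_clean_text("<html><body><h1>FAQ</h1><p>Returns take 14 days.</p></body></html>"))
```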
1 code implementation • Findings (ACL) 2021 • Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
Recently, dense passage retrieval has become a mainstream approach to finding relevant information in various natural language processing tasks.
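As a rough illustration of dense retrieval: encode the query and every passage into vectors, then rank passages by inner-product similarity. The hash-based encoder below is a toy stand-in for a learned dual encoder (e.g., two BERT towers), not the paper's model:

```python
import numpy as np

DIM = 256

def encode(text: str) -> np.ndarray:
    # Toy stand-in for a learned dense encoder.
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

passages = [
    "Dense retrieval encodes queries and passages into vectors.",
    "BM25 is a sparse lexical retrieval baseline.",
    "Paris is the capital of France.",
]
index = np.stack([encode(p) for p in passages])  # passage index
query = encode("how does dense retrieval encode passages")
scores = index @ query                           # inner-product search
print(passages[int(np.argmax(scores))])
```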
no code implementations • COLING 2020 • Shangwen Lv, Fuqing Zhu, Songlin Hu
In the knowledge retrieval stage, we select relevant external event knowledge from ASER.
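A hedged sketch of such a retrieval stage, with token-overlap scoring and a small in-memory list standing in for ASER (the paper's actual retriever is not reproduced):

```python
def retrieve_events(query: str, knowledge: list[str], k: int = 2) -> list[str]:
    # Score each candidate eventuality by token overlap with the input
    # event and keep the top-k as external knowledge.
    q_tokens = set(query.lower().split())
    scored = sorted(
        knowledge,
        key=lambda ev: len(q_tokens & set(ev.lower().split())),
        reverse=True,
    )
    return scored[:k]

aser_like = ["I am hungry", "I eat dinner", "It rains outside"]
print(retrieve_events("I feel very hungry", aser_like))
```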
no code implementations • 12 Apr 2020 • Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu
In this work, we introduce a learning algorithm that directly optimizes a model's ability to learn text representations for effective learning of downstream tasks.
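One way to make "optimizing the ability to learn" concrete is a first-order MAML-style update, sketched below with toy data; this illustrates the general idea, not the paper's actual algorithm:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_lr = 0.1
loss_fn = nn.CrossEntropyLoss()

support_x, support_y = torch.randn(16, 8), torch.randint(0, 2, (16,))
query_x, query_y = torch.randn(16, 8), torch.randint(0, 2, (16,))

# Inner step: adapt the weights on the support set.
loss = loss_fn(model(support_x), support_y)
grads = torch.autograd.grad(loss, list(model.parameters()))
adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

# Outer step: evaluate the adapted weights on the query set and update
# the original parameters (first-order approximation).
logits = query_x @ adapted[0].t() + adapted[1]
meta_loss = loss_fn(logits, query_y)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```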
1 code implementation • IJCNLP 2019 • Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, Songlin Hu
Existing works mainly focus on matching candidate responses with every context utterance at multiple levels of granularity, ignoring the side effects of using excessive context information.
Ranked #5 on Conversational Response Selection on RRS
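A toy sketch of the underlying idea of filtering context before matching: keep only the utterances similar to the latest turn. Jaccard overlap here is an assumed stand-in for the learned selector:

```python
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def select_context(context: list[str], threshold: float = 0.1) -> list[str]:
    # Keep only utterances relevant to the latest turn; match the
    # candidate response against this filtered context instead.
    last = context[-1]
    return [u for u in context if jaccard(u, last) >= threshold]

context = [
    "do you like movies",
    "i watched a thriller yesterday",
    "the weather is nice today",
    "what thriller would you recommend",
]
print(select_context(context))  # the weather turn is filtered out
```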
no code implementations • 10 Sep 2019 • Chunyuan Yuan, Wei Zhou, Qianwen Ma, Shangwen Lv, Jizhong Han, Songlin Hu
Then, we use orthogonal decomposition and fusion attention to learn user, review, and product representations from the review information.
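Orthogonal decomposition itself is just vector projection; a minimal numpy sketch (the fusion-attention step is not reproduced here):

```python
import numpy as np

def orthogonal_decompose(v: np.ndarray, u: np.ndarray):
    # Split v into a component parallel to u and a residual
    # component orthogonal to u.
    parallel = (v @ u) / (u @ u) * u
    orthogonal = v - parallel
    return parallel, orthogonal

v = np.array([2.0, 1.0, 0.0])  # e.g., a review vector
u = np.array([1.0, 0.0, 0.0])  # e.g., a user vector
par, orth = orthogonal_decompose(v, u)
print(par, orth, par @ orth)   # dot product is ~0
```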
1 code implementation • 9 Sep 2019 • Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Songlin Hu
In this work, we propose to automatically extract evidence from heterogeneous knowledge sources and answer questions based on the extracted evidence.
Ranked #14 on Common Sense Reasoning on CommonsenseQA
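A hedged sketch of pooling evidence from heterogeneous sources, with simple keyword matching standing in for the paper's extraction pipelines over ConceptNet-style triples and plain text:

```python
def extract_evidence(question, triples, sentences):
    # Match the question against structured triples and free-text
    # sentences, then pool everything that matches as evidence.
    q = set(question.lower().split())
    evidence = [" ".join(t) for t in triples if q & {t[0], t[2]}]
    evidence += [s for s in sentences if q & set(s.lower().split())]
    return evidence

triples = [("fountain", "AtLocation", "park"), ("dog", "IsA", "animal")]
sentences = ["A fountain is a decorative water feature.",
             "Cats are popular pets."]
print(extract_evidence("where can you find a fountain", triples, sentences))
```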
5 code implementations • 17 Dec 2018 • Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu
BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and a backward model.
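The bidirectional claim is easy to demonstrate with a masked language model: it conditions on context to the right of the blank, which a left-to-right model cannot see. A minimal example using the Hugging Face fill-mask pipeline with the standard bert-base-uncased checkpoint (an assumed checkpoint, not this paper's model):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
# The right-hand context "barked" steers the prediction toward "dog",
# which a purely left-to-right model could not exploit.
for pred in unmasker("The [MASK] barked at the mailman.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```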