Search Results for author: Shangwen Lv

Found 8 papers, 5 papers with code

DuReader_vis: A Chinese Dataset for Open-domain Document Visual Question Answering

1 code implementation • Findings (ACL) 2022 • Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu

Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search; it usually takes clean texts extracted from documents in various formats (e.g., web pages, PDFs, or Word documents) as the information source.

Document Understanding • Open-Domain Question Answering • +1

Integrating External Event Knowledge for Script Learning

no code implementations • COLING 2020 • Shangwen Lv, Fuqing Zhu, Songlin Hu

In the knowledge retrieval stage, we select relevant external event knowledge from ASER.

Retrieval

Pre-training Text Representations as Meta Learning

no code implementations • 12 Apr 2020 • Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu

In this work, we introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.

Language Modelling • Meta-Learning • +2

Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots

1 code implementation • IJCNLP 2019 • Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, Songlin Hu

Existing works mainly focus on matching candidate responses with every context utterance at multiple levels of granularity, ignoring the side effects of using excessive context information.

Conversational Response Selection • Retrieval

Learning Review Representations from User and Product Level Information for Spam Detection

no code implementations • 10 Sep 2019 • Chunyuan Yuan, Wei Zhou, Qianwen Ma, Shangwen Lv, Jizhong Han, Songlin Hu

Then, we use orthogonal decomposition and fusion attention to learn user, review, and product representations from the review information.

Spam Detection

Conditional BERT Contextual Augmentation

5 code implementations • 17 Dec 2018 • Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, Songlin Hu

BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and a backward model.

Data Augmentation • Language Modelling • +1
