no code implementations • NAACL 2022 • Jianguo Mao, Wenbin Jiang, Xiangdong Wang, Zhifan Feng, Yajuan Lyu, Hong Liu, Yong Zhu
Then, it performs multistep reasoning between the representations of the question and the video to reach a better answer decision, and dynamically integrates the reasoning results.
no code implementations • NAACL (BioNLP) 2021 • Songtai Dai, Quan Wang, Yajuan Lyu, Yong Zhu
This paper presents our winning system at the Radiology Report Summarization track of the MEDIQA 2021 shared task.
no code implementations • COLING 2022 • Jianguo Mao, Jiyuan Zhang, Zengfeng Zeng, Weihua Peng, Wenbin Jiang, Xiangdong Wang, Hong Liu, Yajuan Lyu
It then performs dynamic reasoning based on the hierarchical representations of evidence to solve complex biomedical problems.
no code implementations • Findings (ACL) 2022 • Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, Dai Dai
However, existing methods depend on the relevance between tasks and are prone to inter-type confusion. In this paper, we propose a novel two-stage framework, Learn-and-Review (L&R), for continual NER under the type-incremental setting to alleviate these issues. Specifically, in the learning stage, we distill the old knowledge from the teacher to a student on the current dataset.
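A minimal sketch of the teacher-to-student distillation step described above, assuming a token-level tagger with BIO labels; the temperature-scaled KL loss and all names here are generic distillation conventions, not the authors' L&R code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: the student matches the teacher's softened
    per-token label distribution (illustrative, not L&R itself)."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence, scaled by t^2 as is conventional for distillation
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# toy usage: a batch of 4 tokens, 9 BIO tag logits each
student = torch.randn(4, 9, requires_grad=True)
teacher = torch.randn(4, 9)
loss = distillation_loss(student, teacher)
loss.backward()
```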
no code implementations • COLING 2022 • Yu Xia, Wenbin Jiang, Yajuan Lyu, Sujian Li
Existing works are based on end-to-end neural models that neither explicitly model the intermediate states nor offer interpretability for the parsing process.
1 code implementation • NAACL 2022 • Benfeng Xu, Quan Wang, Yajuan Lyu, Yabing Shi, Yong Zhu, Jie Gao, Zhendong Mao
Multi-triple extraction is a challenging task due to the existence of informative inter-triple correlations, and consequently rich interactions across the constituent entities and relations. While existing works explore only entity representations, we propose to explicitly introduce relation representations, jointly represent them with entities, and align the two in a novel way to identify valid triples. We perform comprehensive experiments on document-level relation extraction and joint entity and relation extraction, along with ablations, to demonstrate the advantage of the proposed method.
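As a rough, hypothetical illustration of scoring entity pairs against explicit relation representations (a dot-product alignment stand-in; the paper's joint representation and alignment are more involved than this):

```python
import torch

def align_triples(ent_pairs, rel_reprs):
    """Score every (entity-pair, relation) combination by dot product and
    return each pair's best-aligned relation, a toy rendition of aligning
    entity and relation representations to identify valid triples."""
    scores = ent_pairs @ rel_reprs.T          # (n_pairs, n_relations)
    best = scores.max(dim=-1)
    return best.indices, best.values

pairs = torch.randn(6, 32)   # representations of 6 candidate entity pairs
rels = torch.randn(3, 32)    # representations of 3 relations
idx, val = align_triples(pairs, rels)
print(idx.shape, val.shape)  # torch.Size([6]) torch.Size([6])
```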
1 code implementation • 24 Mar 2023 • Benfeng Xu, Quan Wang, Zhendong Mao, Yajuan Lyu, Qiaoqiao She, Yongdong Zhang
In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing utilization of LLMs.
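The prompt-completion view of ICL can be made concrete with a tiny formatting helper (a generic sketch; the demonstration format below is an assumption, not the paper's):

```python
def build_icl_prompt(demonstrations, query):
    """Format k in-context demonstrations followed by the query, so the
    target task becomes plain prompt completion (illustrative)."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [("The movie was wonderful.", "positive"),
         ("A dull, lifeless plot.", "negative")]
print(build_icl_prompt(demos, "An instant classic."))
```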
no code implementations • 7 Nov 2022 • Guohao Li, Hu Yang, Feng He, Zhifan Feng, Yajuan Lyu, Hua Wu, Haifeng Wang
To this end, we propose a Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge Regularizations.
no code implementations • 28 Oct 2022 • Wei Li, Xue Xu, Xinyan Xiao, Jiachen Liu, Hu Yang, Guohao Li, Zhanpeng Wang, Zhifan Feng, Qiaoqiao She, Yajuan Lyu, Hua Wu
Diffusion generative models have recently brought substantial gains to text-conditioned image generation.
no code implementations • 22 Oct 2022 • Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu
Though model robustness has been extensively studied in language understanding, the robustness of Seq2Seq generation remains understudied.
no code implementations • 18 Oct 2022 • Zixuan Li, Zhongni Hou, Saiping Guan, Xiaolong Jin, Weihua Peng, Long Bai, Yajuan Lyu, Wei Li, Jiafeng Guo, Xueqi Cheng
This is actually a matching task between a query and candidate entities based on their historical structures, which reflect behavioral trends of the entities at different timestamps.
no code implementations • 31 Jul 2022 • Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui
The ability of pretrained Transformers to remember factual knowledge is essential but still limited for existing models.
no code implementations • 15 Apr 2022 • Damai Dai, Wenbin Jiang, Jiyuan Zhang, Weihua Peng, Yajuan Lyu, Zhifang Sui, Baobao Chang, Yong Zhu
In this paper, in order to alleviate the parameter competition problem, we propose a Mixture-of-Expert (MoE) based question answering method called MoEBQA that decouples the computation for different types of questions by sparse routing.
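A minimal sketch of sparse MoE routing in the spirit described above (top-1 gating over expert feed-forward blocks; the layer sizes and names are illustrative, not the MoEBQA architecture):

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Top-1 sparse routing: each token is dispatched to a single expert
    FFN, decoupling computation across inputs (a generic MoE sketch)."""
    def __init__(self, d_model=64, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                      # x: (tokens, d_model)
        gate = self.router(x).softmax(dim=-1)  # routing probabilities
        idx = gate.argmax(dim=-1)              # each token picks one expert
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # scale by the gate value so routing stays differentiable
                out[mask] = expert(x[mask]) * gate[mask, e].unsqueeze(-1)
        return out

x = torch.randn(10, 64)
print(SparseMoE()(x).shape)  # torch.Size([10, 64])
```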
1 code implementation • ACL 2022 • Zixuan Li, Saiping Guan, Xiaolong Jin, Weihua Peng, Yajuan Lyu, Yong Zhu, Long Bai, Wei Li, Jiafeng Guo, Xueqi Cheng
Furthermore, these models are all trained offline, and thus cannot adapt well to changes in the evolutional patterns thereafter.
1 code implementation • 14 Oct 2021 • Quan Wang, Songtai Dai, Benfeng Xu, Yajuan Lyu, Yong Zhu, Hua Wu, Haifeng Wang
In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework.
1 code implementation • Findings (ACL) 2021 • Quan Wang, Haifeng Wang, Yajuan Lyu, Yong Zhu
The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph, and model this graph with edge-biased fully-connected attention.
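The edge-biased attention idea can be sketched as ordinary scaled dot-product attention whose logits are shifted by a per-edge bias, so different edge types in the fact graph attend differently (a simplified stand-in for the paper's exact formulation):

```python
import torch

def edge_biased_attention(q, k, v, edge_bias):
    """Fully-connected attention over graph nodes with an additive
    per-edge bias on the attention logits (sketch)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5 + edge_bias
    return scores.softmax(dim=-1) @ v

n, d = 5, 16                      # 5 graph nodes, dimension 16
q = k = v = torch.randn(n, d)
edge_bias = torch.randn(n, n)     # one learned scalar per node pair / edge type
print(edge_biased_attention(q, k, v, edge_bias).shape)  # torch.Size([5, 16])
```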
3 code implementations • 20 Feb 2021 • Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, Zhendong Mao
Our experiments demonstrate the usefulness of the proposed entity structure and the effectiveness of SSAN.
Ranked #3 on Relation Extraction on DocRED
no code implementations • AACL 2020 • Zhifan Feng, Qi Wang, Wenbin Jiang, Yajuan Lyu, Yong Zhu
Named entity disambiguation is an important task that plays the role of bridge between text and knowledge.
no code implementations • AACL 2020 • Wenbin Jiang, Mengfei Guo, Yufeng Chen, Ying Li, Jinan Xu, Yajuan Lyu, Yong Zhu
This paper describes a novel multi-view classification model for knowledge graph completion, where multiple classification views are performed based on both content and context information for candidate triple evaluation.
no code implementations • Findings (EMNLP) 2020 • Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, Yong Zhu
Most traditional approaches formulate this task as a classification problem, with event types or argument roles taken as gold labels.
3 code implementations • 6 Nov 2019 • Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu
This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.
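A toy rendition of the contextualized idea: encode a triple as a short sequence with a masked slot and let a Transformer produce context-dependent entity representations (the sizes and the mask convention below are assumptions, not CoKE's actual configuration):

```python
import torch
import torch.nn as nn

class TripleEncoder(nn.Module):
    """Encode (head, relation, ?) as a sequence and predict the masked
    entity, so the same entity gets a different, context-dependent
    representation in each triple (sketch of the contextualized idea)."""
    def __init__(self, n_tokens=1000, d=64):
        super().__init__()
        self.emb = nn.Embedding(n_tokens, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d, n_tokens)

    def forward(self, seq):                 # seq: (batch, 3) ids, one masked
        h = self.encoder(self.emb(seq))
        return self.out(h)                  # logits per position

batch = torch.tensor([[7, 42, 0]])          # id 0 as a [MASK] placeholder
print(TripleEncoder()(batch).shape)         # torch.Size([1, 3, 1000])
```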
no code implementations • IJCNLP 2019 • Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao
Our method dynamically updates the representation of the knowledge according to the structural information of the constructed sub-graph.
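One generic way to update node representations from sub-graph structure is a round of neighborhood aggregation (a message-passing sketch under that assumption, not the paper's exact update rule):

```python
import torch

def update_node_reprs(node_feats, adj):
    """Refresh each node's representation from its neighbors in the
    constructed sub-graph via mean aggregation (illustrative)."""
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
    neighbor_mean = (adj @ node_feats) / deg
    return torch.relu(node_feats + neighbor_mean)

feats = torch.randn(4, 8)                       # 4 sub-graph nodes
adj = torch.tensor([[0., 1, 1, 0], [1, 0, 0, 1],
                    [1, 0, 0, 1], [0, 1, 1, 0]])
print(update_node_reprs(feats, adj).shape)      # torch.Size([4, 8])
```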
no code implementations • AAAI 2019 • Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, Haifeng Wang
Joint entity and relation extraction aims to detect entities and relations using a single model.
Ranked #2 on Relation Extraction on NYT-single
1 code implementation • ACL 2019 • An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li
In this work, we investigate the potential of leveraging external knowledge bases (KBs) to further improve BERT for MRC.
no code implementations • EMNLP 2018 • Wei Li, Xinyan Xiao, Yajuan Lyu, Yuanzhuo Wang
Information selection is the most important component in the document summarization task.
Ranked #32 on Abstractive Text Summarization on CNN / Daily Mail
no code implementations • EMNLP 2018 • Wei Li, Xinyan Xiao, Yajuan Lyu, Yuanzhuo Wang
Recent neural sequence-to-sequence models have shown significant progress on short text summarization.
Ranked #43 on Abstractive Text Summarization on CNN / Daily Mail
no code implementations • EMNLP 2018 • Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, Shi Wang
(2) The model copies the context words that are far from and irrelevant to the answer, instead of the words that are close and relevant to the answer.
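For context, here is a generic pointer-generator copy distribution, the kind of mechanism whose copying behavior is being analyzed here (the names and the mixture form are standard conventions, not this paper's exact model):

```python
import torch

def copy_distribution(p_vocab, attn, src_ids, p_gen, vocab_size):
    """Pointer-generator mixture: blend the generator's vocabulary
    distribution with attention-weighted copying of source tokens."""
    p_copy = torch.zeros(vocab_size)
    p_copy.scatter_add_(0, src_ids, attn)     # pile attention onto source ids
    return p_gen * p_vocab + (1 - p_gen) * p_copy

vocab_size = 50
p_vocab = torch.softmax(torch.randn(vocab_size), dim=0)
attn = torch.softmax(torch.randn(4), dim=0)   # attention over 4 source tokens
src_ids = torch.tensor([3, 17, 17, 42])       # source token ids
out = copy_distribution(p_vocab, attn, src_ids, p_gen=0.6, vocab_size=vocab_size)
print(out.sum())  # sums to 1.0
```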
no code implementations • WS 2018 • An Yang, Kai Liu, Jing Liu, Yajuan Lyu, Sujian Li
Current evaluation metrics for question-answering-based machine reading comprehension (MRC) systems, such as ROUGE and BLEU, generally focus on the lexical overlap between the candidate and reference answers.
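A lexical-overlap score of this kind is easy to make concrete; here is a minimal unigram F1 in the spirit of ROUGE-1 (illustrative only):

```python
from collections import Counter

def unigram_f1(candidate, reference):
    """Token-overlap F1: rewards lexical overlap even when the
    candidate's meaning differs from the reference (illustrative)."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the cat sat on the mat", "a cat sat on a mat"))  # ~0.667
```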
no code implementations • ACL 2018 • Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, Tian Wu
While sophisticated neural-based techniques have been developed in reading comprehension, most approaches model the answer in an independent manner, ignoring its relations with other answer candidates.
no code implementations • ACL 2018 • Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, Haifeng Wang
Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by a search engine.
Ranked #3 on Question Answering on MS MARCO
3 code implementations • WS 2018 • Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yu-An Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, Haifeng Wang
Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.