Search Results for author: Yajuan Lyu

Found 31 papers, 9 papers with code

Dynamic Multistep Reasoning based on Video Scene Graph for Video Question Answering

no code implementations NAACL 2022 Jianguo Mao, Wenbin Jiang, Xiangdong Wang, Zhifan Feng, Yajuan Lyu, Hong Liu, Yong Zhu

Then, it performs multistep reasoning between the representations of the question and the video to reach a better answer decision, and dynamically integrates the reasoning results.

Question Answering, Video Question Answering, +1
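
The snippet only sketches the mechanism; as a rough illustration, here is a minimal PyTorch sketch of one way such dynamic multistep reasoning could look. The module names, the gating scheme, and the GRU-based question update are assumptions for illustration, not the authors' design:

```python
# Hypothetical sketch of dynamic multistep reasoning (not the authors' code):
# at each step the question state attends over video scene-graph node features,
# and a learned gate decides how much of each step's result to integrate.
import torch
import torch.nn as nn

class MultistepReasoner(nn.Module):
    def __init__(self, dim: int, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(dim, 1)      # scalar gate per reasoning step
        self.update = nn.GRUCell(dim, dim)

    def forward(self, q_state, video_nodes):
        # q_state: (B, D) question vector; video_nodes: (B, N, D) node features
        integrated = torch.zeros_like(q_state)
        for _ in range(self.num_steps):
            ctx, _ = self.attn(q_state.unsqueeze(1), video_nodes, video_nodes)
            ctx = ctx.squeeze(1)                 # (B, D) step reasoning result
            g = torch.sigmoid(self.gate(ctx))    # dynamic integration weight
            integrated = integrated + g * ctx
            q_state = self.update(ctx, q_state)  # refine the question state
        return integrated                        # fed to the answer decoder
```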

A Transition-based Method for Complex Question Understanding

no code implementations COLING 2022 Yu Xia, Wenbin Jiang, Yajuan Lyu, Sujian Li

Existing works are based on end-to-end neural models that do not explicitly model the intermediate states and lack interpretability for the parsing process.

EmRel: Joint Representation of Entities and Embedded Relations for Multi-triple Extraction

1 code implementation NAACL 2022 Benfeng Xu, Quan Wang, Yajuan Lyu, Yabing Shi, Yong Zhu, Jie Gao, Zhendong Mao

Multi-triple extraction is a challenging task due to the existence of informative inter-triple correlations, and consequently rich interactions across the constituent entities and relations. While existing works only explore entity representations, we propose to explicitly introduce relation representation, jointly represent it with entities, and align the two in a novel way to identify valid triples. We perform comprehensive experiments on document-level relation extraction and joint entity and relation extraction, along with ablations, to demonstrate the advantage of the proposed method.

Document-level Relation Extraction, Joint Entity and Relation Extraction, +2
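
A minimal sketch of the core idea, explicit relation representations aligned with entity pairs to score triples. The attention-based contextualization and the bilinear scorer are illustrative assumptions, not the paper's architecture:

```python
# Hypothetical sketch of "jointly represent and align" (not the EmRel code):
# learned relation embeddings attend over token features, then every
# (head, relation, tail) combination is scored for validity.
import torch.nn as nn

class TripleScorer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)  # explicit relation reps
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, tokens, head_ent, tail_ent):
        # tokens: (B, T, D); head_ent, tail_ent: (B, D) pooled entity vectors
        B = tokens.size(0)
        rels = self.rel_emb.weight.unsqueeze(0).expand(B, -1, -1)  # (B, R, D)
        rels, _ = self.attn(rels, tokens, tokens)  # contextualize against text
        pair = head_ent * tail_ent                 # simple entity-pair fusion
        # align every relation with the entity pair: (B, R) validity logits
        return self.score(rels, pair.unsqueeze(1).expand_as(rels).contiguous()).squeeze(-1)
```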

Learn and Review: Enhancing Continual Named Entity Recognition via Reviewing Synthetic Samples

no code implementations Findings (ACL) 2022 Yu Xia, Quan Wang, Yajuan Lyu, Yong Zhu, Wenhao Wu, Sujian Li, Dai Dai

However, the existing method depends on the relevance between tasks and is prone to inter-type confusion. In this paper, we propose a novel two-stage framework, Learn-and-Review (L&R), for continual NER under the type-incremental setting to alleviate the above issues. Specifically, in the learning stage, we distill the old knowledge from a teacher model to a student on the current dataset.

Continual Named Entity Recognition, Named Entity Recognition, +2
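
The learning-stage distillation described above can be illustrated with a generic teacher-student loss; the temperature, weighting, and function names below are assumptions, not the authors' exact objective:

```python
# Generic token-level distillation sketch (not the L&R code): the student
# matches the old teacher's distributions on the current dataset while
# also fitting the new gold labels.
import torch.nn.functional as F

def learn_stage_loss(student_logits, teacher_logits, gold_labels, T=2.0, alpha=0.5):
    """student/teacher logits: (B, L, C); gold_labels: (B, L) tag ids."""
    ce = F.cross_entropy(student_logits.transpose(1, 2), gold_labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd
```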

$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference

1 code implementation 24 Mar 2023 Benfeng Xu, Quan Wang, Zhendong Mao, Yajuan Lyu, Qiaoqiao She, Yongdong Zhang

In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing utilization of LLMs.

In-Context Learning
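
As a concrete illustration of "prompt completion conditioned on in-context demonstrations" (the general ICL setup the paper builds on; the prompt formatting details are assumed, not taken from the paper):

```python
# Illustrative ICL prompt construction (not the kNN-Prompting code): the
# target task is cast as completing a prompt that concatenates labeled
# demonstrations with the test input.
def build_icl_prompt(demonstrations, test_input):
    """demonstrations: list of (text, label) pairs; returns the prompt string."""
    parts = [f"Input: {text}\nLabel: {label}" for text, label in demonstrations]
    parts.append(f"Input: {test_input}\nLabel:")  # the LLM completes the label
    return "\n\n".join(parts)

prompt = build_icl_prompt(
    [("great movie!", "positive"), ("a total bore.", "negative")],
    "I loved every minute.",
)
```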

CLOP: Video-and-Language Pre-Training with Knowledge Regularizations

no code implementations 7 Nov 2022 Guohao Li, Hu Yang, Feng He, Zhifan Feng, Yajuan Lyu, Hua Wu, Haifeng Wang

To this end, we propose a Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge Regularizations.

Contrastive Learning, Retrieval, +1

Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation

no code implementations 22 Oct 2022 Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu

Though model robustness has been extensively studied in language understanding, the robustness of Seq2Seq generation remains understudied.

Informativeness, Text Generation
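
The snippet does not detail the augmentations themselves. As a generic point of reference, an FGM-style perturbation of input embeddings is one standard way to build adversarial examples for Seq2Seq models; this is a common technique, not the paper's specific faithfulness/informativeness attacks, and the `inputs_embeds` keyword assumes a HuggingFace-style model interface:

```python
# Generic FGM-style adversarial augmentation sketch (illustrative only).
import torch

def adversarial_embeddings(model, embeds, loss_fn, targets, epsilon=1.0):
    # embeds: (B, T, D) input embeddings; loss_fn must return a scalar loss.
    embeds = embeds.detach().requires_grad_(True)
    loss = loss_fn(model(inputs_embeds=embeds), targets)
    grad, = torch.autograd.grad(loss, embeds)
    # perturb along the gradient direction, normalized per token
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return (embeds + delta).detach()
```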

HiSMatch: Historical Structure Matching based Temporal Knowledge Graph Reasoning

no code implementations 18 Oct 2022 Zixuan Li, Zhongni Hou, Saiping Guan, Xiaolong Jin, Weihua Peng, Long Bai, Yajuan Lyu, Wei Li, Jiafeng Guo, Xueqi Cheng

This is actually a matching task between a query and candidate entities based on their historical structures, which reflect behavioral trends of the entities at different timestamps.

Relation
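
Reading the snippet literally, the task reduces to scoring query-candidate pairs by their encoded histories. A minimal sketch under that assumption, where the GRU encoders and dot-product scorer are illustrative choices, not the HiSMatch architecture:

```python
# Hypothetical history-matching sketch (not the HiSMatch model): encode the
# query's and each candidate entity's historical structures, then rank
# candidates by a similarity score between the two encodings.
import torch.nn as nn

class HistoryMatcher(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query_enc = nn.GRU(dim, dim, batch_first=True)
        self.cand_enc = nn.GRU(dim, dim, batch_first=True)

    def forward(self, query_hist, cand_hists):
        # query_hist: (1, T, D); cand_hists: (C, T, D) per-timestamp features
        _, q = self.query_enc(query_hist)   # (1, 1, D) final query state
        _, c = self.cand_enc(cand_hists)    # (1, C, D) final candidate states
        return c.squeeze(0) @ q.view(-1)    # (C,) matching scores
```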

Neural Knowledge Bank for Pretrained Transformers

no code implementations 31 Jul 2022 Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui

The ability of pretrained Transformers to remember factual knowledge is essential but still limited for existing models.

Language Modelling, Machine Translation, +2

Mixture of Experts for Biomedical Question Answering

no code implementations 15 Apr 2022 Damai Dai, Wenbin Jiang, Jiyuan Zhang, Weihua Peng, Yajuan Lyu, Zhifang Sui, Baobao Chang, Yong Zhu

In this paper, in order to alleviate the parameter competition problem, we propose a Mixture-of-Experts (MoE) based question answering method called MoEBQA that decouples the computation for different types of questions by sparse routing.

Question Answering
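
A minimal sketch of the sparse-routing idea, where a router activates only the top-k experts per input so different question types use largely disjoint parameters. The expert shapes and k are illustrative assumptions, not MoEBQA's configuration:

```python
# Generic sparse MoE routing sketch (not the MoEBQA code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (B, D)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over chosen experts
        out = torch.zeros_like(x)
        for j in range(self.k):                  # only the selected experts run
            for e in idx[:, j].unique():
                mask = idx[:, j] == e
                out[mask] += weights[mask, j, None] * self.experts[int(e)](x[mask])
        return out
```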

Complex Evolutional Pattern Learning for Temporal Knowledge Graph Reasoning

1 code implementation ACL 2022 Zixuan Li, Saiping Guan, Xiaolong Jin, Weihua Peng, Yajuan Lyu, Yong Zhu, Long Bai, Wei Li, Jiafeng Guo, Xueqi Cheng

Furthermore, these models are all trained offline and thus cannot adapt well to changes in the evolutional patterns that emerge thereafter.

Building Chinese Biomedical Language Models via Multi-Level Text Discrimination

1 code implementation 14 Oct 2021 Quan Wang, Songtai Dai, Benfeng Xu, Yajuan Lyu, Yong Zhu, Hua Wu, Haifeng Wang

In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework.

Domain Adaptation

Link Prediction on N-ary Relational Facts: A Graph-based Approach

1 code implementation Findings (ACL) 2021 Quan Wang, Haifeng Wang, Yajuan Lyu, Yong Zhu

The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph, and model this graph with edge-biased fully-connected attention.

Knowledge Graphs, Link Prediction
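
The abstract names the key mechanism, edge-biased fully-connected attention over a small fact graph. A minimal single-head sketch, where the scalar per-edge-type bias is an illustrative simplification, not the paper's exact parameterization:

```python
# Hypothetical edge-biased attention sketch (not the paper's implementation):
# attention logits between graph nodes are shifted by a learned bias for
# each edge type.
import torch.nn as nn
import torch.nn.functional as F

class EdgeBiasedAttention(nn.Module):
    def __init__(self, dim: int, num_edge_types: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.edge_bias = nn.Embedding(num_edge_types, 1)  # scalar bias per edge type

    def forward(self, nodes, edge_types):
        # nodes: (N, D) node states; edge_types: (N, N) edge-type ids
        q, k, v = self.q(nodes), self.k(nodes), self.v(nodes)
        logits = q @ k.t() / nodes.size(-1) ** 0.5
        logits = logits + self.edge_bias(edge_types).squeeze(-1)  # edge bias
        return F.softmax(logits, dim=-1) @ v
```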

Multi-view Classification Model for Knowledge Graph Completion

no code implementations AACL 2020 Wenbin Jiang, Mengfei Guo, Yufeng Chen, Ying Li, Jinan Xu, Yajuan Lyu, Yong Zhu

This paper describes a novel multi-view classification model for knowledge graph completion, in which multiple classification views, based on both content and context information, are used to evaluate candidate triples.

Classification, Knowledge Graph Completion
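
A minimal sketch of combining a content view and a context view into one triple-validity score; the two linear classifiers and additive fusion are assumptions for illustration, not the paper's model:

```python
# Hypothetical multi-view scoring sketch (not the paper's model): a triple is
# judged by separate content and context classifiers whose logits are fused.
import torch
import torch.nn as nn

class MultiViewScorer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.content_view = nn.Linear(dim, 1)  # judges the triple's own embedding
        self.context_view = nn.Linear(dim, 1)  # judges its neighborhood encoding

    def forward(self, triple_repr, context_repr):
        logits = self.content_view(triple_repr) + self.context_view(context_repr)
        return torch.sigmoid(logits).squeeze(-1)  # probability the triple is valid
```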

CoKE: Contextualized Knowledge Graph Embedding

3 code implementations 6 Nov 2019 Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu

This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.

Knowledge Graph Embedding, Link Prediction, +1
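
A minimal sketch of the CoKE-style setup, treating a triple as a short sequence, masking an entity, and predicting it with a Transformer encoder; the sizes and the tail-only masking are illustrative, not the released implementation:

```python
# Hypothetical CoKE-style sketch (not the released code): the sequence
# (head, relation, [MASK]) is encoded and the masked tail is predicted,
# yielding context-dependent entity and relation embeddings.
import torch
import torch.nn as nn

class TinyCoKE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=256):
        super().__init__()
        self.ent = nn.Embedding(num_entities + 1, dim)  # last id acts as [MASK]
        self.rel = nn.Embedding(num_relations, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(dim, num_entities)

    def forward(self, heads, rels):
        # heads, rels: (B,) id tensors; predict the tail entity
        mask_id = self.ent.num_embeddings - 1
        masks = torch.full_like(heads, mask_id)
        seq = torch.stack([self.ent(heads), self.rel(rels), self.ent(masks)], dim=1)
        h = self.encoder(seq)      # (B, 3, D) contextualized triple
        return self.out(h[:, 2])   # logits over candidate tail entities
```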

Answer-focused and Position-aware Neural Question Generation

no code implementations EMNLP 2018 Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, Shi Wang

(2) The model copies the context words that are far from and irrelevant to the answer, instead of the words that are close and relevant to the answer.

Machine Reading Comprehension, Position, +3

Adaptations of ROUGE and BLEU to Better Evaluate Machine Reading Comprehension Task

no code implementations WS 2018 An Yang, Kai Liu, Jing Liu, Yajuan Lyu, Sujian Li

Current evaluation metrics for question-answering-based machine reading comprehension (MRC) systems, such as ROUGE and BLEU, generally focus on the lexical overlap between the candidate and reference answers.

Machine Reading Comprehension, Question Answering
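
The lexical-overlap family these metrics belong to can be made concrete with the token-level F1 widely used in extractive MRC evaluation (a standard metric, shown here only as an illustration of "lexical overlap"):

```python
# Token-overlap F1 between a candidate and a reference answer.
from collections import Counter

def overlap_f1(candidate: str, reference: str) -> float:
    cand, ref = candidate.split(), reference.split()
    common = sum((Counter(cand) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(cand), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(overlap_f1("the answer is 42", "42"))  # 0.4
```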

Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension

no code implementations ACL 2018 Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, Tian Wu

While sophisticated neural-based techniques have been developed in reading comprehension, most approaches model the answer in an independent manner, ignoring its relations with other answer candidates.

Answer Selection, Reading Comprehension

Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification

no code implementations ACL 2018 Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, Haifeng Wang

Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by a search engine.

Machine Reading Comprehension, Question Answering
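
A minimal sketch of the cross-passage verification idea, re-scoring each passage's candidate answer by how much the other passages' candidates agree with it; the cosine-agreement formulation is an assumption for illustration, not the paper's verification model:

```python
# Hypothetical cross-passage answer verification sketch (not the paper's model).
import torch.nn.functional as F

def verify(answer_vecs, answer_scores):
    # answer_vecs: (P, D) one candidate per passage; answer_scores: (P,) base scores
    sim = F.cosine_similarity(answer_vecs.unsqueeze(1), answer_vecs.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(0)                           # ignore self-support
    support = sim @ F.softmax(answer_scores, dim=0)  # agreement from other passages
    return answer_scores + support                   # verified score per candidate
```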

DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications

3 code implementations WS 2018 Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yu-An Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, Haifeng Wang

Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.

Machine Reading Comprehension
