Search Results for author: Liyan Xu

Found 17 papers, 7 papers with code

Graph Representation of Narrative Context: Coherence Dependency via Retrospective Questions

no code implementations 21 Feb 2024 Liyan Xu, Jiangnan Li, Mo Yu, Jie Zhou

This work introduces a novel and practical paradigm for narrative comprehension, stemming from the observation that individual passages within narratives are often cohesively related rather than isolated.

Retrieval

Previously on the Stories: Recap Snippet Identification for Story Reading

no code implementations 11 Feb 2024 Jiangnan Li, Qiujing Wang, Liyan Xu, Wenjie Pang, Mo Yu, Zheng Lin, Weiping Wang, Jie Zhou

Similar to the "previously-on" scenes in TV shows, recaps can help book reading by refreshing readers' memory of the important elements in previous text, so that they can better understand the ongoing plot.

SIG: Speaker Identification in Literature via Prompt-Based Generation

1 code implementation 22 Dec 2023 Zhenlin Su, Liyan Xu, Jin Xu, Jiangnan Li, Mingdu Huangfu

Identifying speakers of quotations in narratives is an important task in literary analysis, with challenging scenarios including out-of-domain inference for unseen speakers and non-explicit cases where there are no speaker mentions in the surrounding context.

Speaker Identification
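As a rough, hypothetical illustration of the prompt-based generation setup named in the title, the sketch below frames speaker identification as a fill-in prompt answered by a generative model; the prompt wording and the stand-in `generate_speaker` function are assumptions for demonstration, not the SIG paper's actual prompt or model.

```python
# Hypothetical sketch of prompt-based generative speaker identification.
# The prompt format and the stand-in generator below are illustrative
# assumptions, not the prompt or model used in the SIG paper.

def build_prompt(context: str, quotation: str) -> str:
    """Frame speaker identification as a fill-in generation task."""
    return (
        f"Passage: {context}\n"
        f"Quotation: \"{quotation}\"\n"
        "Question: Who says the quotation?\n"
        "Answer:"
    )

def generate_speaker(prompt: str) -> str:
    """Stand-in for a seq2seq LM call (e.g. a fine-tuned encoder-decoder).

    Decoding the speaker name token by token is what lets a generative
    formulation produce unseen (out-of-domain) speakers instead of picking
    from a fixed candidate list.
    """
    return "<generated speaker name>"

context = 'Elizabeth looked at him. "I am perfectly serious," she replied.'
prompt = build_prompt(context, "I am perfectly serious")
print(prompt)
print("Predicted speaker:", generate_speaker(prompt))
```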

Towards Open-World Product Attribute Mining: A Lightly-Supervised Approach

1 code implementation 26 May 2023 Liyan Xu, Chenwei Zhang, Xian Li, Jingbo Shang, Jinho D. Choi

We present a new task setting for attribute mining on e-commerce products, serving as a practical solution to extract open-world attributes without extensive human intervention.

Attribute

Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind

1 code implementation 9 Nov 2022 Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou

Our dataset consists of ~1,000 parsed movie scripts, each corresponding to a few-shot character understanding task that requires models to mimic humans' ability to quickly digest characters from a few starting scenes of a new movie.

Meta-Learning Metric Learning

Online Coreference Resolution for Dialogue Processing: Improving Mention-Linking on Real-Time Conversations

no code implementations *SEM (NAACL) 2022 Liyan Xu, Jinho D. Choi

This paper suggests a direction of coreference resolution for online decoding on actively generated input such as dialogue: upon each dialogue turn, the model accepts an utterance and its past context, then finds mentions in the current utterance as well as their referents.

coreference-resolution
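A minimal, hypothetical sketch of the per-turn decoding loop described in this entry follows; the data structures and the stubbed mention detector/linker are assumptions for illustration, not the paper's model.

```python
# Illustrative sketch of utterance-by-utterance ("online") coreference
# decoding: each turn is processed against the accumulated past context.
# The stubs below stand in for a learned mention detector and linker.

from dataclasses import dataclass, field

@dataclass
class Mention:
    turn: int
    span: tuple          # (start, end) token offsets within the utterance
    text: str

@dataclass
class DialogueState:
    utterances: list = field(default_factory=list)   # past context
    entities: list = field(default_factory=list)     # clusters of Mentions

    def find_mentions(self, turn: int, tokens: list) -> list:
        # Stub: a real system would run a span-based mention detector here.
        pronouns = {"i", "you", "she", "he", "it", "her"}
        return [Mention(turn, (i, i + 1), tok)
                for i, tok in enumerate(tokens) if tok.lower() in pronouns]

    def link(self, mention: Mention) -> None:
        # Stub linker: attach to the most recent cluster with the same surface
        # form, otherwise start a new entity. A real model scores the mention
        # against all candidates in the full past context instead.
        for cluster in reversed(self.entities):
            if any(m.text.lower() == mention.text.lower() for m in cluster):
                cluster.append(mention)
                return
        self.entities.append([mention])

    def step(self, utterance: str) -> None:
        """Process one dialogue turn: detect mentions, then resolve them."""
        turn = len(self.utterances)
        for mention in self.find_mentions(turn, utterance.split()):
            self.link(mention)
        self.utterances.append(utterance)

state = DialogueState()
for utt in ["I saw her yesterday .", "Did you talk to her ?"]:
    state.step(utt)
print(f"{len(state.entities)} entity clusters after {len(state.utterances)} turns")
```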

Improving Downstream Task Performance by Treating Numbers as Entities

no code implementations 7 May 2022 Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin

Numbers are essential components of text, like any other word tokens, from which natural language processing (NLP) models are built and deployed.

Classification Question Answering

Modeling Task Interactions in Document-Level Joint Entity and Relation Extraction

no code implementations NAACL 2022 Liyan Xu, Jinho D. Choi

We target document-level relation extraction in an end-to-end setting, where the model needs to jointly perform mention extraction, coreference resolution (COREF), and relation extraction (RE) at once, and is evaluated in an entity-centric way.

coreference-resolution Document-level Relation Extraction +2
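The sketch below only illustrates the shape of the end-to-end, entity-centric setting described in this entry (mention extraction, then COREF, then RE over entity clusters); the three stubbed components are placeholders and assumptions, not the paper's model.

```python
# Hypothetical sketch of the joint, entity-centric pipeline:
# mention extraction -> coreference (COREF) -> relation extraction (RE).

def extract_mentions(doc: str) -> list:
    """Stub: return candidate mention strings (a real model scores spans)."""
    return [tok.strip(".,") for tok in doc.split() if tok[:1].isupper()]

def coref_cluster(mentions: list) -> list:
    """Stub COREF: group identical surface forms into entity clusters."""
    clusters = {}
    for m in mentions:
        clusters.setdefault(m.lower(), []).append(m)
    return list(clusters.values())

def extract_relations(entities: list) -> list:
    """Stub RE: emit a placeholder relation between each pair of entities.

    A real document-level model classifies relation types from the full
    document context, and evaluation is over (entity, relation, entity)
    triples rather than individual mention pairs.
    """
    triples = []
    for i, head in enumerate(entities):
        for tail in entities[i + 1:]:
            triples.append((head[0], "related_to", tail[0]))
    return triples

doc = "Liyan Xu works with Jinho Choi. Liyan Xu studies NLP."
entities = coref_cluster(extract_mentions(doc))
print(extract_relations(entities))
```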

Zero-Shot Cross-Lingual Machine Reading Comprehension via Inter-sentence Dependency Graph

1 code implementation 1 Dec 2021 Liyan Xu, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao, Jinho D. Choi

We target the task of cross-lingual Machine Reading Comprehension (MRC) in the direct zero-shot setting by incorporating syntactic features from Universal Dependencies (UD); the key features we use are the syntactic relations within each sentence.

Machine Reading Comprehension Sentence
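As a hedged illustration of the syntactic features mentioned in this entry, the sketch below turns a toy Universal Dependencies parse into a labeled graph over tokens; the hand-written parse and the adjacency-list edge scheme are assumptions, and the paper's actual inter-sentence graph construction may differ.

```python
# Illustrative sketch: build a graph from UD syntactic relations.
# One pre-parsed sentence as (token_id, form, head_id, deprel); id 0 = ROOT.
parsed_sentence = [
    (1, "The",      3, "det"),
    (2, "red",      3, "amod"),
    (3, "car",      4, "nsubj"),
    (4, "stopped",  0, "root"),
    (5, "suddenly", 4, "advmod"),
]

def ud_edges(sentence):
    """Yield (head, dependent, relation) edges, skipping the artificial ROOT."""
    for tok_id, _form, head_id, deprel in sentence:
        if head_id != 0:
            yield (head_id, tok_id, deprel)

# Adjacency list keyed by token id; relation labels are kept so a model could
# embed them as edge features (e.g. in a graph encoder over the passage).
graph = {}
for head, dep, rel in ud_edges(parsed_sentence):
    graph.setdefault(head, []).append((dep, rel))

for head, deps in sorted(graph.items()):
    print(head, "->", deps)
```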

ELIT: Emory Language and Information Toolkit

1 code implementation 8 Sep 2021 Han He, Liyan Xu, Jinho D. Choi

We introduce ELIT, the Emory Language and Information Toolkit, which is a comprehensive NLP framework providing transformer-based end-to-end models for core tasks with a special focus on memory efficiency while maintaining state-of-the-art accuracy and speed.

AMR Parsing Constituency Parsing +9

Adapted End-to-End Coreference Resolution System for Anaphoric Identities in Dialogues

no code implementations ACL (CODI, CRAC) 2021 Liyan Xu, Jinho D. Choi

We present an effective system adapted from the end-to-end neural coreference resolution model, targeting the task of anaphora resolution in dialogues.

coreference-resolution Transfer Learning

Boosting Cross-Lingual Transfer via Self-Learning with Uncertainty Estimation

1 code implementation EMNLP 2021 Liyan Xu, Xuchao Zhang, Xujiang Zhao, Haifeng Chen, Feng Chen, Jinho D. Choi

Recent multilingual pre-trained language models have achieved remarkable zero-shot performance, where the model is fine-tuned only on one source language and directly evaluated on target languages.

Cross-Lingual Transfer named-entity-recognition +4
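A minimal self-training sketch in the spirit of this entry follows: predict on unlabeled target-language data, keep only low-uncertainty predictions as pseudo-labels, and continue fine-tuning on them. The predictive-entropy measure and the 0.5 threshold are illustrative assumptions, not the paper's exact uncertainty estimator.

```python
# Hypothetical self-learning loop with an (assumed) uncertainty filter.
import math

def entropy(probs):
    """Predictive entropy as a simple uncertainty score."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def select_pseudo_labels(examples, predict, max_entropy=0.5):
    """Keep target-language examples whose predictions look confident."""
    selected = []
    for x in examples:
        probs = predict(x)  # class distribution from the source-trained model
        if entropy(probs) < max_entropy:
            label = max(range(len(probs)), key=probs.__getitem__)
            selected.append((x, label))
    return selected

# Toy stand-in for a model fine-tuned only on the source language.
def predict(x):
    return [0.9, 0.05, 0.05] if len(x) % 2 == 0 else [0.4, 0.35, 0.25]

unlabeled_target = ["ein Beispiel", "otro ejemplo más", "短い例", "encore un exemple"]
pseudo = select_pseudo_labels(unlabeled_target, predict)
print(pseudo)  # only confident examples survive; they would be added to training
```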

Revealing the Myth of Higher-Order Inference in Coreference Resolution

1 code implementation EMNLP 2020 Liyan Xu, Jinho D. Choi

We find that, given a high-performing encoder such as SpanBERT, the impact of higher-order inference (HOI) is negative to marginal, providing a new perspective on HOI for this task.

Avg Clustering +2

Noise Pollution in Hospital Readmission Prediction: Long Document Classification with Reinforcement Learning

no code implementations WS 2020 Liyan Xu, Julien Hogan, Rachel E. Patzer, Jinho D. Choi

This paper presents a reinforcement learning approach to extract noise in long clinical documents for the task of readmission prediction after kidney transplant.

Document Classification General Classification +3
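Only as a hedged illustration of the overall idea in this entry, the sketch below drops "noise" sentences from a long note before classification; the keyword heuristic stands in for the learned RL policy (which would be trained from the downstream classification reward) and is an assumption, not the paper's method.

```python
# Illustrative sketch: per-sentence keep/drop selection before feeding the
# shortened document to a readmission classifier. The keyword rule below is
# a stand-in for the learned policy.

KEEP_HINTS = {"creatinine", "rejection", "transplant", "dialysis", "infection"}

def keep_sentence(sentence: str) -> bool:
    """Stand-in policy: keep/drop decision per sentence (the action space)."""
    return any(hint in sentence.lower() for hint in KEEP_HINTS)

def denoise(document: str) -> str:
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return ". ".join(s for s in sentences if keep_sentence(s))

note = ("Patient tolerated breakfast. Creatinine trending down after transplant. "
        "Family visited in the afternoon. No signs of infection.")
print(denoise(note))  # shortened text would be fed to the readmission classifier
```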
