no code implementations • NAACL 2022 • Nan Hu, Zirui Wu, Yuxuan Lai, Xiao Liu, Yansong Feng
Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables.
no code implementations • 15 Aug 2024 • Yuxuan Lai, Yupeng Wu, Yidan Wang, Wenpeng Hu, Chen Zheng
Specifically, we design prompts to guide LLMs to sequentially generate the title, abstract, hierarchical headings, and the main content of the literature survey.
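The sequential generation described above can be sketched as a simple staged pipeline, where each stage's prompt conditions on the outputs of the earlier stages. The `call_llm` function below is a hypothetical stand-in for any LLM API, not the paper's implementation.

```python
# Minimal sketch of staged survey generation: title -> abstract ->
# hierarchical headings -> main content, each conditioned on prior stages.
# `call_llm` is a placeholder; a real system would query an LLM here.

def call_llm(prompt: str) -> str:
    # Placeholder response so the sketch is runnable end to end.
    return f"<generated for: {prompt[:40]}...>"

def generate_survey(topic: str) -> dict:
    survey = {}
    survey["title"] = call_llm(f"Write a survey title about {topic}.")
    survey["abstract"] = call_llm(
        f"Write an abstract for the survey titled {survey['title']}."
    )
    survey["headings"] = call_llm(
        "Propose hierarchical section headings given this abstract:\n"
        + survey["abstract"]
    )
    survey["content"] = call_llm(
        "Write the main content following these headings:\n"
        + survey["headings"]
    )
    return survey
```

The design choice to thread each output into the next prompt is what makes the generation "sequential" rather than independent per section.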
no code implementations • 10 Aug 2024 • Jiuheng Lin, Yuxuan Lai, Yansong Feng
Conditional question answering (CQA) is an important task that aims to find probable answers and identify missing conditions.
1 code implementation • 13 Jan 2024 • Zhen Li, Xiaohan Xu, Tao Shen, Can Xu, Jia-Chen Gu, Yuxuan Lai, Chongyang Tao, Shuai Ma
In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.

no code implementations • 27 Sep 2023 • Xiaowen Sun, Jiazhan Feng, Yuxuan Wang, Yuxuan Lai, Xingyu Shen, Dongyan Zhao
In this paper, we focus on the innovative dialog-to-image generation task, where the model synthesizes a high-resolution image aligned with the given dialog context as a response.
no code implementations • 25 Jul 2023 • Chen Zheng, Huan Zhang, Yan Zhao, Yuxuan Lai
To address these concerns, we propose a coherence scoring model consisting of a regression model with two feature extractors: a local coherence discriminative model and a punctuation correction model.
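The two-extractor regression design described above can be illustrated with a toy version: two hypothetical feature functions (standing in for the local coherence discriminative model and the punctuation correction model) feed a linear regression head. The features and weights below are illustrative assumptions, not the authors' model.

```python
# Toy sketch of a coherence score as a regression over two features.

def local_coherence_score(text: str) -> float:
    # Stand-in feature: fraction of adjacent sentence pairs sharing a word.
    sents = [s.split() for s in text.split(".") if s.strip()]
    if len(sents) < 2:
        return 1.0
    shared = sum(bool(set(a) & set(b)) for a, b in zip(sents, sents[1:]))
    return shared / (len(sents) - 1)

def punctuation_score(text: str) -> float:
    # Stand-in feature: penalize missing sentence-final punctuation.
    return 1.0 if text.rstrip().endswith((".", "!", "?")) else 0.5

def coherence(text: str, w=(0.7, 0.3)) -> float:
    # Linear combination of the two features (weights are illustrative;
    # the paper learns a regression model instead of fixing them).
    return w[0] * local_coherence_score(text) + w[1] * punctuation_score(text)
```

In the actual system, both extractors are learned models and the regression head is trained on scored essays; the sketch only shows how their outputs combine.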
1 code implementation • 1 Jun 2023 • Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan Zhao
We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances.
1 code implementation • 26 Feb 2023 • Chen Zhang, Yuxuan Lai, Yansong Feng, Xingyu Shen, Haowei Du, Dongyan Zhao
We convert KB subgraphs into passages to narrow the gap between KB schemas and questions, which enables our model to benefit from recent advances in multilingual pre-trained language models (MPLMs) and cross-lingual machine reading comprehension (xMRC).
Tasks: Cross-Lingual Question Answering, Machine Reading Comprehension
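The subgraph-to-passage conversion described above can be sketched as a simple verbalization step: KB triples are rendered as plain-text sentences that a multilingual MRC model can read. The triple format and template here are illustrative assumptions, not the paper's exact verbalizer.

```python
# Minimal sketch: verbalize (subject, relation, object) KB triples into a
# passage, narrowing the gap between KB schemas and natural questions.

def subgraph_to_passage(triples):
    """Render each triple as a short sentence and join into a passage."""
    sentences = []
    for subj, rel, obj in triples:
        # Simple template; a real system would use richer verbalization
        # and handle relation naming conventions per KB.
        sentences.append(f"{subj} {rel.replace('_', ' ')} {obj}.")
    return " ".join(sentences)
```

Once the subgraph is a passage, answering reduces to span extraction, which is exactly where pre-trained MPLMs and xMRC methods apply.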
1 code implementation • Findings (ACL) 2022 • Qintong Li, Piji Li, Wei Bi, Zhaochun Ren, Yuxuan Lai, Lingpeng Kong
Open-ended text generation tasks, such as dialogue generation and story completion, require models to generate a coherent continuation given limited preceding context.
1 code implementation • Findings (EMNLP) 2021 • Chen Zhang, Yuxuan Lai, Yansong Feng, Dongyan Zhao
In this paper, we present a new verification-style reading comprehension dataset named VGaokao, built from the Chinese language tests of Gaokao.
1 code implementation • ACL 2021 • Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai, Dongyan Zhao
Document-level Relation Extraction (RE) is a more challenging task than sentence RE as it often requires reasoning over multiple sentences.
Ranked #48 on Relation Extraction on DocRED
1 code implementation • Findings (ACL) 2021 • Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang, Dongyan Zhao
A thorough empirical analysis shows that MRC models tend to learn shortcut questions earlier than challenging questions, and that the high proportion of shortcut questions in training sets hinders models from exploring sophisticated reasoning skills in the later stages of training.
2 code implementations • NAACL 2021 • Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, Dongyan Zhao
Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.
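The multi-granularity lattice that Lattice-BERT consumes can be illustrated with a toy constructor: every character position is a node, and any dictionary word spanning a contiguous character run is added as an extra, overlapping span. The tiny dictionary below is an illustrative assumption.

```python
# Toy sketch of a Chinese word lattice: character spans plus dictionary
# words covering contiguous character runs, giving the overlapping
# multi-granularity inputs a lattice model attends over.

def build_lattice(chars, dictionary):
    """Return (start, end, token) spans: all single characters, plus any
    dictionary word matching a contiguous character span."""
    spans = [(i, i + 1, c) for i, c in enumerate(chars)]
    for i in range(len(chars)):
        for j in range(i + 2, len(chars) + 1):
            word = "".join(chars[i:j])
            if word in dictionary:
                spans.append((i, j, word))
    return spans
```

The redundancy is deliberate: "北京大学" yields both the full word and its sub-words, and the model decides which granularity to exploit.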
no code implementations • 23 Jun 2020 • Zechang Li, Yuxuan Lai, Yansong Feng, Dongyan Zhao
In this paper, we propose a novel semantic parser for domain adaptation, where much less annotated data is available in the target domain than in the source domain.
1 code implementation • 26 Nov 2019 • Yuan Ye, Yansong Feng, Bingfeng Luo, Yuxuan Lai, Dongyan Zhao
However, such models often make predictions for each entity pair individually, and thus often fail to resolve inconsistencies among different predictions, which can be characterized by discrete relation constraints.
1 code implementation • 2 Sep 2019 • Zechang Li, Yuxuan Lai, Yuxi Xie, Yansong Feng, Dongyan Zhao
The sketch is a high-level structure of the logical form exclusive of low-level details such as entities and predicates.
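The notion of a "sketch" described above can be illustrated by abstracting entities and predicates out of a logical form while keeping its structure. The `ent:`/`pred:` token convention below is an illustrative assumption, not the paper's exact grammar.

```python
# Toy sketch extraction: replace low-level details (entities, predicates)
# with placeholders, keeping only the high-level structure of the
# logical form.

def to_sketch(logical_form: str) -> str:
    tokens = []
    for tok in logical_form.split():
        if tok.startswith("ent:"):
            tokens.append("<ENT>")
        elif tok.startswith("pred:"):
            tokens.append("<PRED>")
        else:
            tokens.append(tok)  # keep structural tokens (operators, parens)
    return " ".join(tokens)
```

Sketches of this kind transfer across domains more easily than full logical forms, since the domain-specific entities and predicates are exactly what gets abstracted away.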
no code implementations • NAACL 2019 • Kun Xu, Yuxuan Lai, Yansong Feng, Zhiguo Wang
However, extending KV-MemNNs to Knowledge Based Question Answering (KB-QA) is not trivial: the model must properly decompose a complex question into a sequence of queries against the memory, and update the query representations to support multi-hop reasoning over the memory.
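The multi-hop read-and-update loop described above can be sketched in a toy form: at each hop the query attends over memory keys, reads out the attention-weighted values, and is updated for the next hop. The vectors, dot-product similarity, and additive update below are deliberately simplistic assumptions, not the paper's architecture.

```python
# Toy key-value memory reader: per hop, attend over keys, read values,
# and update the query representation for the next hop.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def multi_hop_read(query, keys, values, hops=2):
    for _ in range(hops):
        attn = softmax([dot(query, k) for k in keys])
        read = [sum(a * v[i] for a, v in zip(attn, values))
                for i in range(len(query))]
        # Query update: add the read vector (a common, simple choice;
        # learned update functions are used in practice).
        query = [q + r for q, r in zip(query, read)]
    return query
```

The update step is the crux: without it, every hop would issue the same query and multi-hop reasoning would collapse into a single lookup.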
1 code implementation • 25 Feb 2019 • Yuxuan Lai, Yansong Feng, Xiaohan Yu, Zheng Wang, Kun Xu, Dongyan Zhao
Short text matching often faces the challenge of great word mismatch and expression diversity between the two texts, which is further aggravated in languages like Chinese, where there is no natural space to segment words explicitly.
no code implementations • ACL 2018 • Yanyan Jia, Yuan Ye, Yansong Feng, Yuxuan Lai, Rui Yan, Dongyan Zhao
Identifying long-span dependencies between discourse units is crucial to improve discourse parsing performance.