no code implementations • EMNLP 2020 • Yumo Xu, Mirella Lapata
We consider the problem of better modeling query-cluster interactions to facilitate query-focused multi-document summarization.
1 code implementation • 27 Feb 2024 • Huajian Zhang, Yumo Xu, Laura Perez-Beltrachini
We study existing approaches to leverage off-the-shelf Natural Language Inference (NLI) models for the evaluation of summary faithfulness and argue that these are sub-optimal due to the granularity level considered for premises and hypotheses.
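The granularity argument can be illustrated with a minimal sketch: each summary sentence (hypothesis) is scored against each document sentence (premise), and the scores are aggregated into a faithfulness estimate. The `entailment_score` function below is a hypothetical placeholder (token overlap); a real system would query an off-the-shelf NLI model here.

```python
# Sketch of sentence-level faithfulness scoring, assuming an NLI-style
# scorer. entailment_score is a toy placeholder (token-overlap ratio);
# it stands in for a real NLI model's entailment probability.

def entailment_score(premise: str, hypothesis: str) -> float:
    """Placeholder for an NLI entailment probability."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def faithfulness(doc_sentences: list[str], summary_sentences: list[str]) -> float:
    """For each summary sentence, take the best-supported document
    sentence as premise, then average across the summary."""
    scores = [
        max(entailment_score(p, h) for p in doc_sentences)
        for h in summary_sentences
    ]
    return sum(scores) / len(scores)

doc = ["The cat sat on the mat.", "It rained all day."]
summary = ["The cat sat on the mat."]
print(faithfulness(doc, summary))  # fully supported -> 1.0
```

The key design point matching the paper's framing is that premises and hypotheses are chosen at sentence level rather than scoring the whole document against the whole summary.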
2 code implementations • 23 May 2023 • Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, Arman Cohan
Motivated by this, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary.
1 code implementation • 26 Sep 2022 • Yumo Xu, Mirella Lapata
Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document.
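The select-and-concatenate scheme can be sketched with a classic frequency-based baseline (not the paper's method): score each sentence by the average corpus frequency of its words, pick the top-k, and concatenate them in original document order.

```python
# Minimal frequency-based extractive summarizer, a standard baseline
# illustrating the select-and-concatenate scheme (not this paper's model).
from collections import Counter

def extract_summary(sentences: list[str], k: int = 2) -> str:
    # Word frequencies over the whole document.
    freqs = Counter(w.lower() for s in sentences for w in s.split())

    def score(s: str) -> float:
        words = s.split()
        return sum(freqs[w.lower()] for w in words) / len(words)

    # Indices of the k highest-scoring sentences.
    top = sorted(range(len(sentences)),
                 key=lambda i: score(sentences[i]),
                 reverse=True)[:k]
    # Concatenate in original document order, not score order.
    return " ".join(sentences[i] for i in sorted(top))
```

Sorting the selected indices before joining preserves the source ordering, which keeps the extracted summary readable.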
no code implementations • 14 Dec 2021 • Weijia Zhang, Svitlana Vakulenko, Thilina Rajapakse, Yumo Xu, Evangelos Kanoulas
In this dataset, answering the query requires document retrieval from a knowledge corpus.
no code implementations • 31 May 2021 • Yumo Xu, Mirella Lapata
The availability of large-scale datasets has driven the development of neural models that create summaries from single documents, for generic purposes.
1 code implementation • ACL 2021 • Yumo Xu, Mirella Lapata
The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents.
no code implementations • 3 Jun 2020 • Yumo Xu, Chenguang Zhu, Baolin Peng, Michael Zeng
Dialog policy determines the next-step actions for agents and hence is central to a dialogue system.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Tom Sherborne, Yumo Xu, Mirella Lapata
For cases where MT is inadequate, we also find that our approach achieves parsing accuracy within 2% of complete translation while using only 50% of the training data.
1 code implementation • TACL 2019 • Yumo Xu, Mirella Lapata
In this paper we introduce domain detection as a new natural language processing task.
1 code implementation • ACL 2018 • Yumo Xu, Shay B. Cohen
Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally dependent predictions from chaotic data.
Ranked #2 on Stock Market Prediction on stocknet (using extra training data)