1 code implementation • 23 Nov 2019 • Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu
In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.
Ranked #12 on Text Summarization on GigaWord
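One common way a single model can span the extractive-to-abstractive spectrum is a pointer-generator-style mixture, where a gate interpolates between copying source words and generating from the vocabulary. Whether this paper uses exactly that mechanism is an assumption here; the sketch below uses toy distributions purely to illustrate how the gate shifts behavior.

```python
# Hypothetical sketch: interpolate a copy distribution over source tokens with
# a generation distribution over the vocabulary. A gate p_copy near 1 yields
# extractive behavior; near 0 yields abstractive behavior. Toy values only.
def mix_distributions(p_copy: float,
                      copy_dist: dict[str, float],
                      vocab_dist: dict[str, float]) -> dict[str, float]:
    words = set(copy_dist) | set(vocab_dist)
    return {w: p_copy * copy_dist.get(w, 0.0)
               + (1 - p_copy) * vocab_dist.get(w, 0.0)
            for w in words}

copy_dist = {"rally": 0.7, "markets": 0.3}   # mass concentrated on source tokens
vocab_dist = {"surge": 0.6, "rally": 0.4}    # model's free-generation distribution

extractive = mix_distributions(0.9, copy_dist, vocab_dist)   # favors copying
abstractive = mix_distributions(0.1, copy_dist, vocab_dist)  # favors generating
print(max(extractive, key=extractive.get))   # copied source word wins
print(max(abstractive, key=abstractive.get)) # generated novel word wins
```

With the gate set high, the copied source word dominates the mixed distribution; with it set low, the novel vocabulary word does, which is the sense in which one model can cover both ends of the spectrum.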
1 code implementation • NAACL 2021 • Kaiqiang Song, Bingqing Wang, Zhe Feng, Fei Liu
We propose a new approach to generate multiple variants of the target summary with diverse content and varying lengths, then score and select admissible ones according to users' needs.
Ranked #10 on Text Summarization on GigaWord
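The over-generate-then-select idea above can be sketched as a simple pipeline: produce several candidate summaries of varying lengths, score each, and keep only the admissible ones. The candidate list and scoring function below are toy stand-ins, not the paper's actual generator or admissibility criterion.

```python
# Hypothetical over-generate-and-select pipeline: filter candidates by a
# user-specified length budget, then rank the survivors by a scoring function.
from typing import Callable

def select_admissible(candidates: list[str],
                      score: Callable[[str], float],
                      max_words: int) -> list[str]:
    """Keep candidates within the length budget, best-scoring first."""
    admissible = [c for c in candidates if len(c.split()) <= max_words]
    return sorted(admissible, key=score, reverse=True)

# Toy candidates of varying lengths; a real system would decode these from
# the model with diverse beam search or length control.
candidates = [
    "markets rally on strong earnings reports",
    "markets rally",
    "global markets rally sharply on a string of strong quarterly earnings reports",
]
# Toy scorer that prefers brevity; a real scorer would estimate quality.
best = select_admissible(candidates, score=lambda c: 1.0 / len(c.split()),
                         max_words=6)
print(best[0])
```

The design point is the separation of concerns: generation proposes diverse variants, while a separate admissibility check applies the user's constraints after the fact.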
1 code implementation • EMNLP (newsum) 2021 • Logan Lebanoff, Bingqing Wang, Zhe Feng, Fei Liu
In this paper, we model the cross-document endorsement effect and its use in multi-document summarization.
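Cross-document endorsement can be illustrated with a crude heuristic: a sentence is more salient when its content is echoed by the other documents in the cluster. The word-overlap proxy below is an assumption for illustration only; the paper's endorsement model is learned, not a counting rule.

```python
# Hypothetical endorsement heuristic: count how many other documents in the
# cluster "endorse" a sentence, approximated as sharing at least two words.
def endorsement_score(sentence: str, other_docs: list[str]) -> int:
    words = set(sentence.lower().split())
    return sum(1 for doc in other_docs
               if len(words & set(doc.lower().split())) >= 2)

docs = [
    "the flood displaced thousands of residents",
    "thousands of residents were displaced by rising water",
    "officials opened emergency shelters for displaced residents",
]
# Score the first document's sentence against the rest of the cluster.
score = endorsement_score(docs[0], docs[1:])
print(score)
```

A sentence whose content recurs across the cluster accumulates endorsements, making it a stronger candidate for the multi-document summary than content mentioned in only one source.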
no code implementations • 30 Aug 2023 • Anthony Colas, Jun Araki, Zhengyu Zhou, Bingqing Wang, Zhe Feng
Explanations accompanying a recommendation can help users understand the decisions made by recommender systems, which in turn increases users' confidence and trust in the system.
no code implementations • 8 Dec 2023 • Mobashir Sadat, Zhengyu Zhou, Lukas Lange, Jun Araki, Arsalan Gundroo, Bingqing Wang, Rakesh R Menon, Md Rizwan Parvez, Zhe Feng
Hallucination is a well-known phenomenon in text generated by large language models (LLMs).