no code implementations • 19 Dec 2022 • Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu
Specifically, zero/few-shot and fine-tuning results show that the model pre-trained on our corpus demonstrates strong aspect- or query-focused generation ability compared with the backbone model.
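A minimal sketch of the kind of aspect/query-focused conditioning the snippet refers to: prepend the target aspect or query to the source document before feeding it to a seq2seq summarizer. The input template below is illustrative only and is not necessarily the format used with this corpus.

```python
# Illustrative only: one common way to condition a seq2seq summarizer on an
# aspect or query is to prepend it to the source with a separator token.
# The exact input format used with this corpus may differ.

def build_aspect_input(aspect: str, document: str, sep: str = " </s> ") -> str:
    """Concatenate the target aspect/query with the source document."""
    return f"summarize aspect: {aspect}{sep}{document}"

if __name__ == "__main__":
    doc = "The city council approved the new transit budget on Monday ..."
    print(build_aspect_input("budget decisions", doc))
```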
1 code implementation • 2 Dec 2022 • Chao Zhao, Faeze Brahman, Kaiqiang Song, Wenlin Yao, Dian Yu, Snigdha Chaturvedi
To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset.
1 code implementation • 28 Oct 2022 • Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, Dong Yu
The problem is only exacerbated by a lack of segmentation in transcripts of audio/video recordings (a toy segmentation heuristic is sketched after this entry).
Ranked #5 on Text Summarization on PubMed
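As a toy illustration of the missing-segmentation problem mentioned above, the heuristic below places a segment boundary wherever adjacent utterances share few words. It is a stand-in for exposition only, not the segmentation approach studied in the work above.

```python
# Toy heuristic for segmenting an unsegmented transcript: place a boundary
# where adjacent utterances have low lexical overlap. Purely illustrative;
# not the method used in the paper above.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def segment(utterances: list[str], threshold: float = 0.1) -> list[list[str]]:
    if not utterances:
        return []
    segments, current = [], [utterances[0]]
    for prev, curr in zip(utterances, utterances[1:]):
        if jaccard(set(prev.lower().split()), set(curr.lower().split())) < threshold:
            segments.append(current)
            current = []
        current.append(curr)
    segments.append(current)
    return segments

if __name__ == "__main__":
    transcript = [
        "today we review the quarterly sales numbers",
        "sales numbers were up ten percent this quarter",
        "next item is the office relocation plan",
        "the relocation plan moves us downtown by june",
    ]
    for seg in segment(transcript):
        print(seg)
```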
1 code implementation • 22 Oct 2022 • Fei Wang, Kaiqiang Song, Hongming Zhang, Lifeng Jin, Sangwoo Cho, Wenlin Yao, Xiaoyang Wang, Muhao Chen, Dong Yu
Recent work adds extractive summaries as guidance for abstractive summarization models, providing hints of salient content and achieving better performance (a sketch of this guidance setup follows this entry).
Ranked #6 on Abstractive Text Summarization on CNN / Daily Mail
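A minimal sketch of the extractive-guidance setup the snippet describes: select a few salient sentences (here with a crude word-frequency heuristic, purely for illustration) and prepend them to the source as a hint for the abstractive model. The paper's own handling of guidance may differ.

```python
# Sketch of "extractive summaries as guidance": pick a few salient sentences
# and prepend them to the source before abstractive summarization. Salience
# here is a crude word-frequency heuristic used only for illustration.
from collections import Counter

def salience(sentence: str, doc_counts: Counter) -> float:
    words = sentence.lower().split()
    return sum(doc_counts[w] for w in words) / (len(words) or 1)

def build_guided_input(document: str, k: int = 2, sep: str = " </s> ") -> str:
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    counts = Counter(document.lower().split())
    guidance = sorted(sentences, key=lambda s: salience(s, counts), reverse=True)[:k]
    return " ".join(guidance) + sep + document
```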
1 code implementation • ACL 2022 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Summarization of podcast transcripts is of practical benefit to both content providers and consumers.
1 code implementation • ACL 2022 • Chao Zhao, Wenlin Yao, Dian Yu, Kaiqiang Song, Dong Yu, Jianshu Chen
Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances, which are either scattered across or only implied in different turns of the conversation.
1 code implementation • NAACL 2021 • Kaiqiang Song, Bingqing Wang, Zhe Feng, Fei Liu
We propose a new approach that generates multiple variants of the target summary with diverse content and varying lengths, then scores and selects admissible ones according to users' needs (a schematic of this generate-then-select loop follows this entry).
Ranked #10 on Text Summarization on GigaWord
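A schematic of the generate-then-select loop described above: produce candidates under several length budgets, discard those that violate the user's length constraint, and keep the best-scoring admissible one. The `generate` and `score` callables are placeholders for a real decoder and quality estimator.

```python
# Schematic of overgenerate-then-select: decode candidates under different
# length budgets, drop those violating the user's length constraint, and
# return the best-scoring admissible summary. `generate` and `score` are
# placeholders for a real decoder and quality estimator.
from typing import Callable, Optional

def select_summary(
    source: str,
    budgets: list[int],
    max_words: int,
    generate: Callable[[str, int], str],
    score: Callable[[str, str], float],
) -> Optional[str]:
    candidates = [generate(source, b) for b in budgets]
    admissible = [c for c in candidates if len(c.split()) <= max_words]
    return max(admissible, key=lambda c: score(source, c), default=None)
```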
1 code implementation • 14 Feb 2021 • Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang
Our experiments show that CATE is beneficial to the downstream search, especially in large search spaces.
no code implementations • 9 Nov 2020 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting salient segments from transcripts to serve as input to the summarizer; (ii) striking a balance between the amount and quality of training instances; and (iii) the appropriate summary length and start/end points.
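A sketch of point (i): keep only the highest-scoring transcript segments that fit within the summarizer's input budget, then restore their original order. The per-segment salience scores are assumed to come from some external selector, which is left as a placeholder here.

```python
# Sketch of segment selection (point (i) above): greedily keep the most
# salient transcript segments within the summarizer's token budget, then
# restore document order. The salience scores are assumed to come from an
# external selector and are passed in as plain floats.
def select_segments(
    segments: list[str],
    scores: list[float],
    max_tokens: int = 1024,
) -> list[str]:
    ranked = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    chosen, used = set(), 0
    for i in ranked:
        n = len(segments[i].split())
        if used + n <= max_tokens:
            chosen.add(i)
            used += n
    # Preserve the original transcript order of the selected segments.
    return [segments[i] for i in sorted(chosen)]
```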
1 code implementation • EMNLP 2020 • Sangwoo Cho, Kaiqiang Song, Chen Li, Dong Yu, Hassan Foroosh, Fei Liu
Amongst the best means to summarize is highlighting.
2 code implementations • 23 Nov 2019 • Kaiqiang Song, Logan Lebanoff, Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Chen Li, Dong Yu, Fei Liu
If generating a word would introduce an erroneous relation into the summary, that behavior must be discouraged (a decoding-time sketch of this idea follows this entry).
Ranked #26 on Text Summarization on GigaWord
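One hedged way to read the sentence above as a decoding-time rule: penalize candidate tokens whose generation would introduce a relation unsupported by the source. The `introduces_bad_relation` predicate below is a placeholder; how such relations are detected and discouraged is the substance of the paper, not of this sketch.

```python
# Hedged sketch: at each decoding step, down-weight candidate tokens that
# would introduce a relation unsupported by the source document. The
# `introduces_bad_relation` predicate is a placeholder for a real check.
from typing import Callable

def rescore_step(
    prefix: list[str],
    candidates: dict[str, float],          # token -> log-probability
    introduces_bad_relation: Callable[[list[str], str], bool],
    penalty: float = 5.0,
) -> dict[str, float]:
    rescored = {}
    for token, logp in candidates.items():
        bad = introduces_bad_relation(prefix, token)
        rescored[token] = logp - (penalty if bad else 0.0)
    return rescored
```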
1 code implementation • 23 Nov 2019 • Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu
In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.
Ranked #12 on Text Summarization on GigaWord
3 code implementations • ACL 2019 • Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
There is thus a crucial gap between sentence selection and fusion when summarizing by both compressing single sentences and fusing pairs.
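A minimal sketch of bridging that gap, assuming a single scoring model over candidate units: score both single sentences and sentence pairs, then route the winner to compression (singleton) or fusion (pair). The scorer is a placeholder.

```python
# Minimal sketch: score sentence singletons and pairs with one model, then
# compress the winner if it is a single sentence or fuse it if it is a pair.
# The scoring function is a placeholder for a learned model.
from itertools import combinations
from typing import Callable, Tuple

def pick_unit(
    sentences: list[str],
    score: Callable[[Tuple[str, ...]], float],
) -> Tuple[str, ...]:
    singletons = [(s,) for s in sentences]
    pairs = list(combinations(sentences, 2))
    best = max(singletons + pairs, key=score)
    # len(best) == 1 -> compress the sentence; len(best) == 2 -> fuse the pair.
    return best
```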
1 code implementation • EMNLP 2018 • Logan Lebanoff, Kaiqiang Song, Fei Liu
Generating a text abstract from a set of documents remains a challenging task.
1 code implementation • COLING 2018 • Kaiqiang Song, Lin Zhao, Fei Liu
In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to the summary sentence (a sketch of the underlying copy mechanism follows this entry).
Ranked #33 on Text Summarization on GigaWord
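A sketch of the plain copy mechanism that structure-infused variants build on: mix a vocabulary (generation) distribution with a copy distribution induced by attention over source tokens, pointer-generator style. The structural features the paper injects are omitted here, so this is background rather than the paper's method.

```python
# Pointer-generator style copy mechanism (background sketch): combine a
# vocabulary distribution with a copy distribution given by attention over
# source tokens. The structural features added by the paper are omitted.
def copy_mixture(
    p_vocab: dict[str, float],        # word -> generation probability
    attention: list[float],           # attention weights over source positions
    source_tokens: list[str],
    p_gen: float,                     # probability of generating vs. copying
) -> dict[str, float]:
    final = {w: p_gen * p for w, p in p_vocab.items()}
    for tok, a in zip(source_tokens, attention):
        final[tok] = final.get(tok, 0.0) + (1.0 - p_gen) * a
    return final

if __name__ == "__main__":
    dist = copy_mixture(
        p_vocab={"the": 0.4, "meeting": 0.3, "ended": 0.3},
        attention=[0.7, 0.2, 0.1],
        source_tokens=["budget", "meeting", "today"],
        p_gen=0.8,
    )
    print(max(dist, key=dist.get))  # most likely next word under the mixture
```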