Search Results for author: Kaiqiang Song

Found 28 papers, 19 papers with code

Polarity Calibration for Opinion Summarization

1 code implementation · 2 Apr 2024 · Yuanyuan Lei, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Ruihong Huang, Dong Yu

To address this issue and make the summarizer express both sides of opinions, we introduce the concept of polarity calibration, which aims to align the polarity of the output summary with that of the input text.

Opinion Summarization

Can Large Language Models do Analytical Reasoning?

no code implementations · 6 Mar 2024 · Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu

Our analytical reasoning task asks large language models to count how many points each team scores in each quarter of NBA and NFL games.

Language Modelling · Large Language Model

SportsMetrics: Blending Text and Numerical Data to Understand Information Fusion in LLMs

no code implementations · 15 Feb 2024 · Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu

In this paper, we introduce four novel tasks centered around sports data analytics to evaluate the numerical reasoning and information fusion capabilities of LLMs.

SPECTRUM: Speaker-Enhanced Pre-Training for Long Dialogue Summarization

no code implementations · 31 Jan 2024 · Sangwoo Cho, Kaiqiang Song, Chao Zhao, Xiaoyang Wang, Dong Yu

Multi-turn dialogues are characterized by their extended length and turn-taking between speakers.

Language Modelling · Large Language Model

InFoBench: Evaluating Instruction Following Ability in Large Language Models

1 code implementation · 7 Jan 2024 · Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, PengFei Liu, Dong Yu

This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions.

Instruction Following

Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention

no code implementations · 14 Dec 2023 · Kaiqiang Song, Xiaoyang Wang, Sangwoo Cho, Xiaoman Pan, Dong Yu

This paper introduces a novel approach to enhance the capabilities of Large Language Models (LLMs) in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information.

MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning

3 code implementations · 15 Nov 2023 · Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, Dong Yu

Recognizing the need for a comprehensive evaluation of LMM chart understanding, we also propose a MultiModal Chart Benchmark (MMC-Benchmark), a comprehensive human-annotated benchmark with nine distinct tasks evaluating reasoning capabilities over charts.

Unsupervised Multi-document Summarization with Holistic Inference

no code implementations · 8 Sep 2023 · Haopeng Zhang, Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Hongwei Wang, Jiawei Zhang, Dong Yu

SRI balances the importance and diversity of a subset of sentences from the source documents and can be calculated in an unsupervised and adaptive manner.

Document Summarization · Extractive Summarization · +1

Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models

no code implementations · 1 Aug 2023 · Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen

Compositional generalization empowers LLMs to solve problems harder than the ones they have seen (i.e., easy-to-hard generalization), which is a critical reasoning capability of human-like intelligence.

Math · Math Word Problem Solving

DecipherPref: Analyzing Influential Factors in Human Preference Judgments via GPT-4

no code implementations · 24 May 2023 · Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Fei Liu

Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values.

Informativeness

PIVOINE: Instruction Tuning for Open-world Information Extraction

1 code implementation · 24 May 2023 · Keming Lu, Xiaoman Pan, Kaiqiang Song, Hongming Zhang, Dong Yu, Jianshu Chen

In particular, we construct INSTRUCTOPENWIKI, a substantial instruction tuning dataset for Open-world IE enriched with a comprehensive corpus, extensive annotations, and diverse instructions.

Instruction Following · Language Modelling · +1

Open-Domain Event Graph Induction for Mitigating Framing Bias

no code implementations · 22 May 2023 · Siyi Liu, Hongming Zhang, Hongwei Wang, Kaiqiang Song, Dan Roth, Dong Yu

However, none of the existing methods have explicitly addressed the issue of framing bias that is inherent in news articles.

OASum: Large-Scale Open Domain Aspect-based Summarization

1 code implementation · 19 Dec 2022 · Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu

Specifically, zero/few-shot and fine-tuning results show that the model pre-trained on our corpus demonstrates strong aspect- or query-focused generation ability compared with the backbone model.

NarraSum: A Large-Scale Dataset for Abstractive Narrative Summarization

1 code implementation · 2 Dec 2022 · Chao Zhao, Faeze Brahman, Kaiqiang Song, Wenlin Yao, Dian Yu, Snigdha Chaturvedi

To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset.

Natural Language Understanding

Salience Allocation as Guidance for Abstractive Summarization

1 code implementation · 22 Oct 2022 · Fei Wang, Kaiqiang Song, Hongming Zhang, Lifeng Jin, Sangwoo Cho, Wenlin Yao, Xiaoyang Wang, Muhao Chen, Dong Yu

Recent literature adds extractive summaries as guidance for abstractive summarization models to provide hints of salient content and achieves better performance.

Abstractive Text Summarization

Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension

1 code implementation · ACL 2022 · Chao Zhao, Wenlin Yao, Dian Yu, Kaiqiang Song, Dong Yu, Jianshu Chen

Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances, which are either scattered across or only implied in different turns of the conversation.

A New Approach to Overgenerating and Scoring Abstractive Summaries

1 code implementation · NAACL 2021 · Kaiqiang Song, Bingqing Wang, Zhe Feng, Fei Liu

We propose a new approach to generate multiple variants of the target summary with diverse content and varying lengths, then score and select admissible ones according to users' needs.

Text Summarization

Automatic Summarization of Open-Domain Podcast Episodes

no code implementations · 9 Nov 2020 · Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu

Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting important segments from transcripts to serve as input to the summarizer; (ii) striking a balance between the amount and quality of training instances; (iii) the appropriate summary length and start/end points.

Abstractive Text Summarization

Controlling the Amount of Verbatim Copying in Abstractive Summarization

1 code implementation · 23 Nov 2019 · Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu

In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.

Abstractive Text Summarization · Language Modelling

Structure-Infused Copy Mechanisms for Abstractive Summarization

1 code implementation · COLING 2018 · Kaiqiang Song, Lin Zhao, Fei Liu

In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to the summary sentence.

Abstractive Text Summarization · Sentence
