Search Results for author: Simeng Han

Found 10 papers, 6 papers with code

Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization

1 code implementation • 15 Nov 2023 • Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, PengFei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan

Our study reveals that instruction controllable text summarization remains a challenging task for LLMs: (1) all evaluated LLMs still make factual and other types of errors in their summaries; (2) no LLM-based evaluation method achieves strong alignment with human annotators when judging the quality of candidate summaries; and (3) different LLMs show large performance gaps in summary generation and evaluation.

Benchmarking • Text Summarization

Eliminating Reasoning via Inferring with Planning: A New Framework to Guide LLMs' Non-linear Thinking

no code implementations • 18 Oct 2023 • Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, Jingbo Shang

Chain-of-Thought (CoT) prompting and its variants aim to equip large language models (LLMs) with high-level reasoning abilities by emulating human-like linear cognition and logic.

Natural Language Inference

QTSumm: Query-Focused Summarization over Tabular Data

2 code implementations • 23 May 2023 • Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, Arman Cohan

Motivated by this, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary.

Query-focused Summarization • Table-to-Text Generation

Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation

1 code implementation • 14 Jun 2021 • Xiang Lin, Simeng Han, Shafiq Joty

Advanced large-scale neural language models have led to significant success in many language generation tasks.

Text Generation

Resurrecting Submodularity for Neural Text Generation

no code implementations • 8 Nov 2019 • Simeng Han, Xiang Lin, Shafiq Joty

The resulting attention module offers an architecturally simple and empirically effective method to improve the coverage of neural text generation.

Abstractive Text Summarization • Text Generation

Hierarchical Pointer Net Parsing

1 code implementation • IJCNLP 2019 • Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing

Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks while maintaining linear time complexity.

Discourse Parsing • Inductive Bias • +1
