Text Summarization

369 papers with code • 33 benchmarks • 87 datasets

Text Summarization is a natural language processing (NLP) task that condenses a lengthy text document into a shorter version while retaining its most important information and meaning. The goal is to produce a summary that accurately represents the content of the original text in concise form.

There are two broad approaches to text summarization: extractive methods, which identify and extract the most important sentences or phrases from the original text, and abstractive methods, which generate new text that conveys the content of the original.
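
The sketch below contrasts the two families: a toy frequency-based extractive scorer in plain Python next to an abstractive summary produced by a pretrained model. The Hugging Face `transformers` pipeline and the `facebook/bart-large-cnn` checkpoint are assumptions made for illustration, not tied to any paper listed on this page.

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Score sentences by word frequency and keep the top-scoring ones."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    keep = set(ranked[:n_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in keep)

def abstractive_summary(text: str) -> str:
    """Generate new text with a pretrained seq2seq summarization model."""
    from transformers import pipeline  # assumes the `transformers` package
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    result = summarizer(text, max_length=60, min_length=10, do_sample=False)
    return result[0]["summary_text"]
```

The extractive function can only reuse sentences that already exist in the input, while the abstractive one may paraphrase, which is exactly the distinction drawn above.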

Latest papers with no code

MM-PhyRLHF: Reinforcement Learning Framework for Multimodal Physics Question-Answering

no code yet • 19 Apr 2024

We employ the LLaVA open-source model to answer multimodal physics MCQs and compare the performance with and without using RLHF.

V2Xum-LLM: Cross-Modal Video Summarization with Temporal Prompt Instruction Tuning

no code yet • 18 Apr 2024

Recent efforts have been made to expand from unimodal to multimodal video summarization, categorizing the task into three sub-tasks based on the summary's modality: video-to-video (V2V), video-to-text (V2T), and a combination of video and text summarization (V2VT).

AI-Enhanced Cognitive Behavioral Therapy: Deep Learning and Large Language Models for Extracting Cognitive Pathways from Social Media Texts

no code yet • 17 Apr 2024

Cognitive Behavioral Therapy (CBT) is an effective technique for addressing the irrational thoughts stemming from mental illnesses, but it necessitates precise identification of cognitive pathways to be successfully implemented in patient care.

Prompt-tuning for Clickbait Detection via Text Summarization

no code yet • 17 Apr 2024

To address this problem, we propose a prompt-tuning method for clickbait detection via text summarization: summarization is first used to condense the contents, and clickbait detection is then performed based on the similarity between the generated summary and the contents.
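
A minimal sketch of one way such a similarity check could look: it compares the suspect headline against a generated summary of the body using TF-IDF cosine similarity. The summarization model, the similarity measure, and the 0.3 threshold are illustrative assumptions and do not reproduce the paper's prompt-tuning method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def looks_like_clickbait(headline: str, body: str, threshold: float = 0.3) -> bool:
    # Condense the article body with a pretrained summarizer.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    summary = summarizer(body, max_length=60, min_length=10,
                         do_sample=False)[0]["summary_text"]
    # Low lexical overlap between headline and generated summary suggests
    # the headline promises something the contents do not deliver.
    vectors = TfidfVectorizer().fit_transform([headline, summary])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return similarity < threshold
```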

KG-CTG: Citation Generation through Knowledge Graph-guided Large Language Models

no code yet • 15 Apr 2024

Citation Text Generation (CTG) is a natural language processing (NLP) task that aims to produce text that accurately cites or references a target document within a source document.

Unveiling LLM Evaluation Focused on Metrics: Challenges and Solutions

no code yet • 14 Apr 2024

The overarching goal is to furnish researchers with a pragmatic guide for effective LLM evaluation and metric selection, thereby advancing the understanding and application of these large language models.
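
As a concrete instance of metric selection for summarization, the sketch below computes ROUGE scores with Google's `rouge-score` package; the package choice and the example strings are assumptions made for illustration, not taken from the paper.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "the cat was sitting on the mat"
scores = scorer.score(reference, candidate)
for name, result in scores.items():
    # Each result carries precision, recall, and F-measure.
    print(f"{name}: P={result.precision:.2f} R={result.recall:.2f} F={result.fmeasure:.2f}")
```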

RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data

no code yet • 11 Apr 2024

Through comparative experiments, we demonstrate how different data sources contribute to financial risk assessment and discuss the critical role of LLMs in this context.

Neural Sequence-to-Sequence Modeling with Attention by Leveraging Deep Learning Architectures for Enhanced Contextual Understanding in Abstractive Text Summarization

no code yet • 8 Apr 2024

A deep sequence-to-sequence (seq2seq) model with an attention mechanism is employed to predict a generalized summary based on the vector representation.
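
For readers unfamiliar with the pattern, here is a minimal PyTorch sketch of a seq2seq model with dot-product attention: the encoder produces one hidden state per source token, and at each decoding step the decoder attends over them to build a context vector before predicting the next summary token. The layer sizes, GRU cells, and attention form are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttnSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRUCell(hidden * 2, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        enc_states, h = self.encoder(self.embed(src))  # (B, S, H), (1, B, H)
        h = h.squeeze(0)                               # decoder state (B, H)
        logits = []
        for t in range(tgt.size(1)):
            # Dot-product attention: score encoder states against decoder state.
            scores = torch.bmm(enc_states, h.unsqueeze(2)).squeeze(2)  # (B, S)
            weights = torch.softmax(scores, dim=1)
            context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
            # Feed the target embedding plus attention context to the decoder.
            h = self.decoder(torch.cat([self.embed(tgt[:, t]), context], dim=1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)              # (B, T, vocab_size)
```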

FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping

no code yet • 5 Apr 2024

In this work, we observe the saturation of the computationally expensive feed-forward blocks of LLM layers and propose FFN-SkipLLM, a novel fine-grained skip strategy for autoregressive LLMs.
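
A hedged sketch of the general idea: skip a layer's feed-forward block when the hidden states have saturated, approximated here by cosine similarity between consecutive layers' representations. The similarity criterion, the 0.99 threshold, and the layer interface are assumptions of this sketch, not the paper's actual skip policy.

```python
import torch
import torch.nn.functional as F

def maybe_skip_ffn(hidden: torch.Tensor, prev_hidden: torch.Tensor,
                   ffn, threshold: float = 0.99) -> torch.Tensor:
    # Mean cosine similarity between consecutive layers' hidden states.
    sim = F.cosine_similarity(hidden, prev_hidden, dim=-1).mean()
    if sim > threshold:
        return hidden            # representations saturated: skip the FFN
    return hidden + ffn(hidden)  # otherwise apply the residual FFN block
```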

Hallucination Diversity-Aware Active Learning for Text Summarization

no code yet • 2 Apr 2024

Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported.