Headline Generation
33 papers with code • 1 benchmark • 2 datasets
Most implemented papers
NSINA: A News Corpus for Sinhala
NSINA is the largest news corpus available for Sinhala to date.
Deep Reinforcement Learning For Sequence to Sequence Models
In this survey, we consider seq2seq problems from the RL point of view and provide a formulation that combines the decision-making power of RL methods with the ability of sequence-to-sequence models to retain long-term dependencies.
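As a rough illustration of the RL view, the decoder can be treated as a policy and trained with a REINFORCE-style sequence loss. The sketch below (PyTorch) is a minimal example under assumptions, not the survey's exact formulation; `reward_fn` is a hypothetical sequence-level reward such as sentence ROUGE.

```python
import torch

def reinforce_loss(log_probs, sampled_ids, reward_fn, references):
    """Minimal REINFORCE-style sequence loss (illustrative sketch).

    log_probs:   (batch, seq_len, vocab) token log-probabilities from the decoder
    sampled_ids: (batch, seq_len) tokens sampled from the decoder's distribution
    reward_fn:   hypothetical (sampled sequence, reference) -> scalar reward
    """
    # Log-probability of each sampled token, summed over the sequence.
    token_logp = log_probs.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    seq_logp = token_logp.sum(dim=1)  # (batch,)

    # Sequence-level rewards, treated as constants w.r.t. the parameters.
    rewards = torch.tensor(
        [reward_fn(s, r) for s, r in zip(sampled_ids.tolist(), references)],
        dtype=seq_logp.dtype, device=seq_logp.device,
    )
    baseline = rewards.mean()  # simple variance-reduction baseline

    # Policy-gradient objective: maximize expected reward.
    return -((rewards - baseline) * seq_logp).mean()
```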
Don't Just Scratch the Surface: Enhancing Word Representations for Korean with Hanja
We propose a simple yet effective approach for improving Korean word representations using additional linguistic annotation (i.e., Hanja).
IT5: Text-to-text Pretraining for Italian Language Understanding and Generation
We introduce IT5, the first family of encoder-decoder transformer models pretrained specifically on Italian.
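A minimal usage sketch with Hugging Face transformers; the checkpoint identifier below is an assumption, so substitute the released IT5 checkpoint you actually use.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint name is an assumption, not confirmed by this listing.
name = "gsarti/it5-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Testo di esempio da riassumere.", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```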
Question Answering as an Automatic Evaluation Metric for News Article Summarization
Recent work in the field of automatic summarization and headline generation focuses on maximizing ROUGE scores for various news datasets.
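For reference, ROUGE between a system headline and a reference can be computed with Google's rouge-score package; a minimal sketch with made-up example strings:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "police arrest suspect in downtown robbery",  # reference headline
    "suspect arrested after downtown robbery",    # system headline
)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```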
Advances of Transformer-Based Models for News Headline Generation
Pretrained language models based on the Transformer architecture are behind recent breakthroughs in many areas of NLP, including sentiment analysis, question answering, and named entity recognition.
Few-Shot Text Generation with Pattern-Exploiting Training
Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion.
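As a rough illustration, pattern-exploiting approaches wrap the input in a natural-language task description and let the model complete the answer. The patterns below are hypothetical examples in that spirit, not the ones used in the paper.

```python
# Hypothetical headline-generation patterns; the paper's actual patterns
# are not reproduced here.
PATTERNS = [
    lambda text: f'{text} Headline: ___',
    lambda text: f'Write a headline for the following article: {text}',
]

article = "The city council approved the new budget on Tuesday."
for pattern in PATTERNS:
    print(pattern(article))
```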
Direct Output Connection for a High-Rank Language Model
This paper proposes a state-of-the-art recurrent neural network (RNN) language model that combines probability distributions computed not only from the final RNN layer but also from middle layers.
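A minimal sketch of the underlying idea: a weighted mixture of softmax distributions computed from several layers' hidden states. The layer selection and weighting below are assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def mixed_output_distribution(hidden_states, proj, mix_logits):
    """Combine next-token distributions computed from several layers.

    hidden_states: list of (batch, hidden) tensors, one per selected layer
    proj:          shared (hidden -> vocab) output projection (torch.nn.Linear)
    mix_logits:    (num_layers,) learnable mixture weights (assumption)
    """
    weights = F.softmax(mix_logits, dim=0)  # mixture weights sum to 1
    per_layer = [F.softmax(proj(h), dim=-1) for h in hidden_states]
    # Weighted sum of per-layer probability distributions.
    return sum(w * p for w, p in zip(weights, per_layer))
```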
Importance of Copying Mechanism for News Headline Generation
News headline generation is an essential text summarization problem because it is constrained and well defined, yet still hard to solve.
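A minimal sketch of a copy mechanism in the pointer-generator style (not necessarily this paper's exact formulation): the final distribution interpolates between generating from the vocabulary and copying source tokens via attention.

```python
import torch

def copy_distribution(vocab_probs, attn, src_ids, p_gen):
    """Pointer-generator-style output distribution (illustrative sketch).

    vocab_probs: (batch, vocab) generation distribution
    attn:        (batch, src_len) attention weights over source tokens
    src_ids:     (batch, src_len) source token ids
    p_gen:       (batch, 1) probability of generating vs. copying
    """
    copy_probs = torch.zeros_like(vocab_probs)
    # Scatter attention mass onto the vocabulary ids of the source tokens.
    copy_probs.scatter_add_(1, src_ids, attn)
    return p_gen * vocab_probs + (1.0 - p_gen) * copy_probs
```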
Data-efficient Neural Text Compression with Interactive Learning
Neural sequence-to-sequence models have been successfully applied to text compression.