Keyphrase Extraction
47 papers with code • 9 benchmarks • 6 datasets
A classic task of extracting the salient phrases that best summarize a document. It typically proceeds in two stages: candidate generation and keyphrase ranking.
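The two-stage pipeline can be sketched with plain Python. This is a minimal toy: candidates are contiguous runs of non-stopword words (a common heuristic), and ranking is by raw frequency with longer phrases preferred on ties. The stopword list and scoring here are illustrative assumptions, not any specific paper's method.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use fuller lists or POS patterns.
STOPWORDS = {"a", "an", "the", "of", "and", "to", "in", "for", "is", "that", "with", "on", "from"}

def candidates(text, max_len=3):
    """Stage 1: generate candidate phrases as n-grams (up to max_len words)
    taken from contiguous runs of non-stopword tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    runs, current = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if current:
                runs.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        runs.append(current)
    phrases = []
    for run in runs:
        for n in range(1, max_len + 1):
            for i in range(len(run) - n + 1):
                phrases.append(" ".join(run[i:i + n]))
    return phrases

def rank(text, top_k=5):
    """Stage 2: rank candidates, here by frequency, breaking ties
    in favor of longer phrases."""
    counts = Counter(candidates(text))
    ordered = sorted(counts.items(), key=lambda kv: (-kv[1], -len(kv[0].split())))
    return [phrase for phrase, _ in ordered[:top_k]]
```

Real extractors replace the frequency score with graph centrality, embedding similarity, or a learned model, but the candidate-then-rank skeleton stays the same.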
Most implemented papers
Deep Keyphrase Generation
Keyphrases provide highly condensed information that can be used effectively for understanding, organizing, and retrieving text content.
Simple Unsupervised Keyphrase Extraction using Sentence Embeddings
EmbedRank achieves higher F-scores than graph-based state-of-the-art systems on standard datasets and is suitable for real-time processing of large amounts of Web data.
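The core idea of embedding-based ranking is to score each candidate by the similarity of its embedding to the whole document's embedding. The sketch below substitutes a toy bag-of-words vector and cosine similarity for the sentence embeddings EmbedRank actually uses; everything here is an illustrative stand-in, not the paper's implementation.

```python
import math
import re
from collections import Counter

def bow(text):
    # Toy bag-of-words "embedding"; EmbedRank uses real sentence
    # embeddings (e.g. sent2vec) in place of this.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def embed_rank(doc, candidates, top_k=5):
    """Rank candidate phrases by similarity of their vector to the
    document vector, highest first."""
    d = bow(doc)
    scored = sorted(candidates, key=lambda c: cosine(bow(c), d), reverse=True)
    return scored[:top_k]
```

With real sentence embeddings, this unsupervised scheme needs no training data for the target domain, which is what makes it attractive for large-scale Web text.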
A Review of Keyphrase Extraction
Keyphrase extraction is a textual information processing task concerned with the automatic extraction of representative and characteristic phrases from a document that express all the key aspects of its content.
Open Domain Web Keyphrase Extraction Beyond Language Modeling
This paper studies keyphrase extraction in real-world scenarios where documents come from diverse domains and vary in content quality.
Finding Black Cat in a Coal Cellar -- Keyphrase Extraction & Keyphrase-Rubric Relationship Classification from Complex Assignments
Diversity in content and open-ended questions are inherent in complex assignments across online graduate programs.
Capturing Global Informativeness in Open Domain Keyphrase Extraction
Open-domain KeyPhrase Extraction (KPE) aims to extract keyphrases from documents without domain or quality restrictions, e.g., web pages of varying domain and quality.
Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document.
UCPhrase: Unsupervised Context-aware Quality Phrase Tagging
Training a conventional neural tagger based on silver labels usually faces the risk of overfitting phrase surface names.
Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering
Previous research has found in-context learning to be an effective way to exploit LLMs, using a few task-related labeled examples as demonstrations to construct a few-shot prompt for answering new questions.