Citation Intent Classification
10 papers with code • 2 benchmarks • 4 datasets
Identifying the reason an author cited another work, e.g., to provide background, use a method, or compare results.
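In practice the task is usually framed as single-sentence classification over the citation context. The minimal sketch below assumes the Hugging Face transformers library; the checkpoint name "my-org/scibert-citation-intent" is a hypothetical placeholder for any model fine-tuned on a citation intent dataset such as SciCite or ACL-ARC, and the label list mirrors SciCite-style intents.

```python
# Minimal inference sketch for citation intent classification.
# "my-org/scibert-citation-intent" is a placeholder checkpoint name:
# substitute a model actually fine-tuned on SciCite / ACL-ARC.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "my-org/scibert-citation-intent"            # hypothetical checkpoint
LABELS = ["background", "method", "result_comparison"]   # SciCite-style intents

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

citation = "We follow the training procedure described in (Devlin et al., 2019)."
inputs = tokenizer(citation, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

print(LABELS[logits.argmax(dim=-1).item()])   # e.g. "method"
```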
Libraries
Use these libraries to find Citation Intent Classification models and implementations.

Most implemented papers
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Deep contextualized word representations
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
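The core idea is that the same surface word receives a different vector in each sentence it appears in. ELMo's original implementation is a bidirectional language model released with AllenNLP; the sketch below uses a BERT-family encoder from transformers purely as a stand-in to illustrate the contextual-embedding behaviour.

```python
# Sketch of contextualized embeddings: the same word ("bank") gets
# different vectors depending on its sentence context (polysemy).
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The boat drifted toward the bank of the river.",
    "She deposited the check at the bank.",
]

vectors = []
for text in sentences:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    vectors.append(hidden[tokens.index("bank")])            # vector for "bank" in context

# The two "bank" vectors differ by context, so similarity is well below 1.0.
print(torch.nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0).item())
```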
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
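A common recipe for this task is to fine-tune SciBERT directly on a labeled citation intent dataset. The sketch below assumes the SciCite dataset hosted on the Hugging Face Hub under "allenai/scicite" with "string"/"label" columns; verify the dataset id and field names against the actual release.

```python
# Hedged sketch: fine-tuning SciBERT for citation intent classification
# with the transformers Trainer. Dataset id and column names are assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("allenai/scicite")   # background / method / result labels
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")

def tokenize(batch):
    return tokenizer(batch["string"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased", num_labels=3)

args = TrainingArguments(output_dir="scibert-citation-intent",
                         per_device_train_batch_size=16,
                         num_train_epochs=3)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```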
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP.
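The paper's recipe, in outline, is to continue masked-language-model pretraining on in-domain or task-relevant unlabeled text before fine-tuning on the labeled task. A minimal sketch with transformers follows; "domain_corpus.txt" is a placeholder for such an unlabeled corpus (e.g., citation contexts from scientific papers).

```python
# Sketch of domain-/task-adaptive pretraining: continue MLM training
# on unlabeled in-domain text, then fine-tune the saved checkpoint.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# "domain_corpus.txt" is a placeholder file of unlabeled in-domain text.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
corpus = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-roberta",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=corpus["train"],
    data_collator=collator,
)
trainer.train()
model.save_pretrained("dapt-roberta")   # fine-tune this checkpoint on the task
```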
Structural Scaffolds for Citation Intent Classification in Scientific Publications
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
ImpactCite: An XLNet-based method for Citation Impact Analysis
Citation impact analysis, which includes sentiment and intent classification, enables us to quantify the quality of citations, which can in turn assist in estimating ranking and impact.
Cross-Lingual Citations in English Papers: A Large-Scale Analysis of Prevalence, Usage, and Impact
Citation information in scholarly data is an important source of insight into the reception of publications and the scholarly discourse.
CitePrompt: Using Prompts to Identify Citation Intent in Scientific Papers
For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings, respectively.
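The general prompt-based setup can be sketched as a cloze task: a masked language model scores candidate label words inserted into a template, and the highest-scoring word gives the predicted intent. The template and verbalizer below are illustrative assumptions, not the exact ones used by CitePrompt.

```python
# Illustrative prompt-based (cloze-style) citation intent sketch.
# Template and verbalizer are assumptions; verbalizer words should be
# single tokens in the model's vocabulary to score meaningfully.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
model.eval()

# Hypothetical verbalizer: one label word per intent class.
verbalizer = {"background": "background", "method": "method", "result": "results"}

citation = "We adopt the optimizer proposed by Kingma and Ba (2015)."
prompt = f"{citation} This citation refers to the {tokenizer.mask_token} of the work."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))   # predicted intent label
```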