Citation Prediction

5 papers with code • 2 benchmarks • 2 datasets

Citation prediction is the task of predicting citation links between scientific documents. The citation graph is commonly used as a relatedness signal for learning and evaluating document-level representations of scientific papers.

Most implemented papers

SPECTER: Document-level Representation Learning using Citation-informed Transformers

allenai/specter ACL 2020

We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
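SPECTER's pretraining signal can be sketched as a triplet margin loss: a paper's embedding is pulled toward a paper it cites (positive) and pushed away from an uncited paper (negative). A minimal toy version, assuming Euclidean distance and illustrative 2-d embeddings (the actual model uses Transformer [CLS] embeddings):

```python
import math

def triplet_loss(query, positive, negative, margin=1.0):
    """Triplet margin loss over document embeddings: encourages the
    cited paper (positive) to be closer to the query than the
    non-cited paper (negative) by at least `margin`."""
    d_pos = math.dist(query, positive)  # distance to cited paper
    d_neg = math.dist(query, negative)  # distance to non-cited paper
    return max(d_pos - d_neg + margin, 0.0)

# Toy 2-d embeddings: the positive is much closer, so the loss is zero.
q, pos, neg = [0.0, 0.0], [0.1, 0.0], [3.0, 4.0]
print(triplet_loss(q, pos, neg))  # 0.0, since 0.1 - 5.0 + 1.0 < 0
```

During pretraining this loss is minimized over triplets mined from the citation graph, so no task-specific labels are needed.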

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models

UKPLab/beir 17 Apr 2021

To address this, and to enable researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval.
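BEIR's headline metric is nDCG@10: a retrieval model is run zero-shot on each dataset and its ranking is scored against graded relevance judgments. A minimal sketch of the metric (the names `ranked_rels` and `ndcg_at_k` are illustrative, not BEIR's API):

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k for one query: ranked_rels[i] is the graded relevance of
    the document the system ranked at position i (0 = not relevant).
    The DCG of the system ranking is normalized by the DCG of the
    ideal (relevance-sorted) ranking."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2)
                   for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0
```

Scores are averaged over all queries in a dataset, then compared across the benchmark's heterogeneous domains.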

Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings

no code yet • 14 Feb 2022

Learning scientific document representations can be substantially improved through contrastive learning objectives, where the challenge lies in creating positive and negative training samples that encode the desired similarity semantics.
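The core idea is that positives and negatives for contrastive training come from a paper's citation neighborhood rather than from labels. A simplified sketch on a toy citation graph (the paper additionally samples hard negatives from nearby non-neighbors; here any non-neighbor serves as the negative):

```python
import random

def sample_contrastive_pair(graph, anchor, rng):
    """For an anchor paper, draw a positive from its citation
    neighborhood and a negative from papers outside it."""
    neighbors = graph[anchor]
    non_neighbors = [p for p in graph if p != anchor and p not in neighbors]
    positive = rng.choice(sorted(neighbors))  # sorted for determinism
    negative = rng.choice(non_neighbors)
    return positive, negative

# Toy citation graph: paper -> set of linked (cited/citing) papers.
graph = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}, "D": set()}
pos, neg = sample_contrastive_pair(graph, "A", random.Random(0))
print(pos, neg)  # positive is B or C; negative is D
```

The sampled triples then feed a contrastive objective that pulls neighborhood papers together in embedding space.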

SGPT: GPT Sentence Embeddings for Semantic Search

muennighoff/sgpt 17 Feb 2022

A 5.8-billion-parameter SGPT-BE outperforms the best available sentence embeddings by 6%, setting a new state of the art on BEIR.
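SGPT-BE turns a causal GPT model into a sentence encoder by pooling its token embeddings with position-dependent weights: later tokens, which have attended to more context, count more. A minimal sketch of that pooling step on plain Python lists (the real model applies it to Transformer hidden states):

```python
def position_weighted_mean(token_embeddings):
    """Position-weighted mean pooling: weight for token i is
    (i + 1) / sum_j (j + 1), so weights grow linearly with position."""
    n = len(token_embeddings)
    total = sum(range(1, n + 1))
    dim = len(token_embeddings[0])
    pooled = [0.0] * dim
    for i, emb in enumerate(token_embeddings):
        w = (i + 1) / total
        for d in range(dim):
            pooled[d] += w * emb[d]
    return pooled

# Two 1-d token embeddings: weights are 1/3 and 2/3.
print(position_weighted_mean([[1.0], [2.0]]))  # [1.6666...]
```

The pooled vector is then used directly for semantic search, e.g. ranking documents by cosine similarity to a query embedding.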

No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval

guilhermemr04/scaling-zero-shot-retrieval 6 Jun 2022

Due to latency constraints, this has made distilled and dense models the go-to choice for deployment in real-world retrieval applications.