Linear-Probe Classification

7 papers with code • 2 benchmarks • 2 datasets

Linear-probe classification evaluates a frozen, pretrained representation by training only a linear classifier (a "probe") on top of its features, leaving the encoder untouched; higher probe accuracy indicates more linearly separable, transferable representations.
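
For concreteness, here is a minimal probe sketch using scikit-learn; the random features are a stand-in for embeddings produced by any frozen encoder from the papers listed on this page.

```python
# Minimal linear-probe sketch: fit a logistic-regression classifier on
# frozen feature vectors and report held-out accuracy. The random
# features stand in for embeddings from a frozen, pretrained encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 768))  # e.g., pooled sentence embeddings
labels = rng.integers(0, 2, size=1000)   # toy binary labels

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000)  # the encoder stays frozen;
probe.fit(X_train, y_train)                # only this linear layer is trained
print(f"linear-probe accuracy: {probe.score(X_test, y_test):.3f}")
```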

Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
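
A typical linear-probing pipeline extracts frozen BERT features first; the sketch below does this with the Hugging Face transformers library (a common re-implementation, not the original google-research/bert TensorFlow code), pooling the [CLS] token as the sentence feature.

```python
# Sketch: extract frozen BERT features suitable for a linear probe.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentences = ["Linear probes are cheap to train.", "BERT is bidirectional."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():                          # keep the encoder frozen
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)
features = hidden[:, 0]                        # [CLS] vector per sentence
print(features.shape)                          # torch.Size([2, 768])
```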

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

UKPLab/sentence-transformers IJCNLP 2019

Using BERT directly requires that both sentences be fed into the network, which causes a massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours).
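
Sentence-BERT sidesteps that cost by encoding each sentence once and comparing embeddings directly; a short usage sketch with the sentence-transformers library (the model name is one common choice, not prescribed here):

```python
# Sketch: SBERT encodes each sentence independently, so similarity
# search needs n encoder passes plus cheap vector comparisons rather
# than O(n^2) full cross-encoder runs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "A cat sits on the mat.",
    "A feline rests on a rug.",
    "Stock markets fell sharply today.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine-similarity matrix over the independently computed embeddings.
print(util.cos_sim(embeddings, embeddings))
```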

SimCSE: Simple Contrastive Learning of Sentence Embeddings

princeton-nlp/SimCSE EMNLP 2021

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.
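
The unsupervised variant embeds each sentence twice, with dropout noise as the only "augmentation", and pulls the two views together against in-batch negatives; a simplified sketch of that InfoNCE-style loss (encoder omitted, temperature value illustrative):

```python
# Simplified sketch of the unsupervised SimCSE objective: two dropout-
# perturbed embeddings of the same sentence are positives; all other
# sentences in the batch serve as negatives.
import torch
import torch.nn.functional as F

def simcse_loss(z1, z2, temperature=0.05):
    """z1, z2: (batch, dim) embeddings of the same sentences under two
    independent dropout masks."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature        # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(sim, targets)

# Toy usage with random stand-in embeddings.
print(simcse_loss(torch.randn(8, 768), torch.randn(8, 768)))
```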

DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations

JohnGiorgi/DeCLUTR ACL 2021

Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data.
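
The positives in that objective come from the documents themselves: an anchor span and a nearby (possibly overlapping) span from the same document form a pair, with spans from other documents as negatives. A toy sketch of such pair construction follows; the span-length bounds are illustrative, not the paper's exact hyperparameters.

```python
# Sketch: DeCLUTR-style positive-pair construction by sampling nearby
# spans from the same document. Length bounds are illustrative.
import random

def sample_span_pair(tokens, min_len=32, max_len=128):
    limit = min(max_len, len(tokens) // 2)
    anchor_len = random.randint(min_len, limit)
    anchor_start = random.randint(0, len(tokens) - anchor_len)
    anchor = tokens[anchor_start:anchor_start + anchor_len]
    # Draw the positive span from the neighborhood of the anchor.
    pos_len = random.randint(min_len, limit)
    lo = max(0, anchor_start - pos_len)
    hi = min(len(tokens) - pos_len, anchor_start + anchor_len)
    pos_start = random.randint(lo, max(lo, hi))
    positive = tokens[pos_start:pos_start + pos_len]
    return anchor, positive

doc = list(range(512))  # toy "tokenized document"
anchor, positive = sample_span_pair(doc)
print(len(anchor), len(positive))
```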

Text and Code Embeddings by Contrastive Pre-Training

openmatch/coco-dr 24 Jan 2022

Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search.
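
The training signal is again in-batch contrastive: paired (text, code) embeddings are matched against each other within a batch. A symmetric formulation of that loss is sketched below (a common variant; the paper's exact details may differ).

```python
# Sketch: symmetric in-batch contrastive loss over paired (text, code)
# embeddings; each text matches its own code snippet and vice versa.
import torch
import torch.nn.functional as F

def paired_contrastive_loss(text_emb, code_emb, temperature=0.05):
    text_emb = F.normalize(text_emb, dim=-1)
    code_emb = F.normalize(code_emb, dim=-1)
    logits = text_emb @ code_emb.T / temperature  # (batch, batch)
    targets = torch.arange(text_emb.size(0))      # true pairs on the diagonal
    # Average the text->code and code->text directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

print(paired_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)))
```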

Neural Eigenfunctions Are Structured Representation Learners

thudzj/NEigenmaps 23 Oct 2022

In this paper, we introduce a scalable method for learning structured, adaptive-length deep representations.
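
"Adaptive-length" here means the representation's coordinates are ordered by importance, so it can be truncated at evaluation time. The toy sketch below illustrates that style of evaluation with a linear probe on prefixes of increasing length (synthetic features whose signal sits in the leading dimensions; purely illustrative of the evaluation, not of the method itself).

```python
# Toy sketch: probe prefixes of an importance-ordered representation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # signal in leading dims

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for k in (4, 16, 64, 256):  # truncate to the first k coordinates
    probe = LogisticRegression(max_iter=1000).fit(X_tr[:, :k], y_tr)
    print(k, round(probe.score(X_te[:, :k], y_te), 3))
```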

DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery with Multi-sized Local Crops

wennyxy/dino-mc 12 Mar 2023

Due to the costly nature of remote sensing image labeling and the large volume of available unlabeled imagery, self-supervised methods that can learn feature representations without manual annotation have received great attention.
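
DINO-MC's distinguishing ingredient is local crops taken at several sizes rather than one fixed size; a torchvision sketch of such a multi-crop augmentation follows (crop sizes and scale ranges are illustrative, not the paper's exact configuration).

```python
# Sketch: multi-sized local crops alongside a global view, DINO-style.
import torch
from torchvision import transforms

def multi_crop_views(image, local_sizes=(96, 64, 32), crops_per_size=2):
    views = [transforms.RandomResizedCrop(224, scale=(0.4, 1.0))(image)]
    for size in local_sizes:
        crop = transforms.RandomResizedCrop(size, scale=(0.05, 0.4))
        views.extend(crop(image) for _ in range(crops_per_size))
    return views

image = torch.rand(3, 256, 256)  # toy image tensor
for view in multi_crop_views(image):
    print(tuple(view.shape))
```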