Linear-Probe Classification

9 papers with code • 1 benchmark • 1 dataset

Linear-probe classification evaluates learned representations by freezing a pretrained encoder and training only a linear classifier on top of its features; the downstream accuracy of this probe serves as a measure of representation quality.
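
As a reference point, here is a minimal linear-probe sketch in Python. The feature arrays are placeholders standing in for the outputs of any frozen, pretrained encoder (such as the models listed under "Most implemented papers"), and scikit-learn's logistic regression plays the role of the linear classifier; this is an illustrative sketch, not the protocol of any specific paper below.

```python
# Minimal linear-probe sketch: only the linear classifier on top of
# frozen features is trained; the encoder itself is never updated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder features and labels; in practice these would be the frozen
# encoder's outputs for a labeled downstream dataset.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 768))
train_labels = rng.integers(0, 10, 1000)
test_feats = rng.normal(size=(200, 768))
test_labels = rng.integers(0, 10, 200)

# The probe: a single linear layer fit with logistic regression.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_feats, train_labels)

print("linear-probe accuracy:", accuracy_score(test_labels, probe.predict(test_feats)))
```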

Most implemented papers

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

google-research/bert NAACL 2019

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

UKPLab/sentence-transformers IJCNLP 2019

However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT.
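
For illustration, the sketch below probes frozen Sentence-BERT embeddings with a linear classifier. The checkpoint name and the toy sentences are placeholders; only SentenceTransformer.encode and scikit-learn's LogisticRegression are assumed.

```python
# Hedged sketch: encode sentences once with a frozen SentenceTransformer,
# then fit a linear probe on the fixed embeddings (no fine-tuning).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

train_sentences = ["the movie was great", "the plot made no sense"]  # toy data
train_labels = [1, 0]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint
train_embeddings = model.encode(train_sentences)  # frozen encoder outputs

probe = LogisticRegression(max_iter=1000).fit(train_embeddings, train_labels)
print(probe.predict(model.encode(["a wonderful film"])))
```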

SimCSE: Simple Contrastive Learning of Sentence Embeddings

princeton-nlp/SimCSE EMNLP 2021

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.

DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations

JohnGiorgi/DeCLUTR ACL 2021

Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data.

Text and Code Embeddings by Contrastive Pre-Training

openmatch/coco-dr 24 Jan 2022

Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search.

Neural Eigenfunctions Are Structured Representation Learners

thudzj/NEigenmaps 23 Oct 2022

Unlike prior spectral methods such as Laplacian Eigenmap that operate in a nonparametric manner, Neural Eigenmap leverages NeuralEF to parametrically model eigenfunctions using a neural network.

Scaling Vision Transformers to 22 Billion Parameters

lucidrains/flash-cosine-sim-attention 10 Feb 2023

The scaling of Transformers has driven breakthrough capabilities for language models.

DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery with Multi-sized Local Crops

wennyxy/dino-mc 12 Mar 2023

Due to the costly nature of remote sensing image labeling and the large volume of available unlabeled imagery, self-supervised methods that can learn feature representations without manual annotation have received great attention.

SODA: Bottleneck Diffusion Models for Representation Learning

futurexiang/soda 29 Nov 2023

We introduce SODA, a self-supervised diffusion model, designed for representation learning.