Few-Shot Learning

1037 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for the various tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
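The shared-representation recipe described above can be sketched with a nearest-centroid (prototypical-network-style) classifier: embeddings from a common representation are averaged per class to form prototypes, and queries are assigned to the nearest prototype. This is a minimal illustration, not any specific paper's method; the function name and toy data are invented for the example.

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Classify queries by distance to per-class centroids (prototypes)
    computed from a few labeled support examples.

    support_x: (n_support, d) embeddings from a shared representation
    support_y: (n_support,) integer class labels
    query_x:   (n_query, d) embeddings of unlabeled queries
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way 2-shot episode in a 2-D embedding space.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [1.0, 0.9]])
print(nearest_centroid_few_shot(support_x, support_y, query_x))  # [0 1]
```

In practice the embedding function would itself be meta-trained across tasks, so that the same representation yields well-separated prototypes for unseen classes.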

Latest papers with no code

Graph Machine Learning in the Era of Large Language Models (LLMs)

no code yet • 23 Apr 2024

Meanwhile, graphs, especially knowledge graphs, are rich in reliable factual knowledge, which can be utilized to enhance the reasoning capabilities of LLMs and potentially alleviate their limitations such as hallucinations and the lack of explainability.

Identifying Fairness Issues in Automatically Generated Testing Content

no code yet • 23 Apr 2024

Natural language generation tools are powerful and effective for generating content.

Text-dependent Speaker Verification (TdSV) Challenge 2024: Challenge Evaluation Plan

no code yet • 20 Apr 2024

This document outlines the Text-dependent Speaker Verification (TdSV) Challenge 2024, which centers on analyzing and exploring novel approaches for text-dependent speaker verification.

When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes

no code yet • 18 Apr 2024

We present FastFit, a method and a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes.

Stance Detection on Social Media with Fine-Tuned Large Language Models

no code yet • 18 Apr 2024

This study emphasizes the potential of LLMs in stance detection and calls for more extensive research in this field.

Many-Shot In-Context Learning

no code yet • 17 Apr 2024

Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can learn high-dimensional functions with numerical inputs.

Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction

no code yet • 15 Apr 2024

The framework includes an evaluation model that can extract related entity pairs with high precision.

CryoMAE: Few-Shot Cryo-EM Particle Picking with Masked Autoencoders

no code yet • 15 Apr 2024

Cryo-electron microscopy (cryo-EM) emerges as a pivotal technology for determining the architecture of cells, viruses, and protein assemblies at near-atomic resolution.

GeMQuAD : Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning

no code yet • 14 Apr 2024

The emergence of Large Language Models (LLMs) with capabilities like In-Context Learning (ICL) has ushered in new possibilities for data generation across various domains while minimizing the need for extensive data collection and modeling techniques.

PM2: A New Prompting Multi-modal Model Paradigm for Few-shot Medical Image Classification

no code yet • 13 Apr 2024

The other is to perform classification on the feature distribution of visual tokens from the vision encoder.