Generalized Few-Shot Learning
9 papers with code • 3 benchmarks • 5 datasets
Most implemented papers
Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions
Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying that function to instances from unseen classes with limited labels.
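A minimal sketch of this recipe, assuming a small PyTorch embedding network (the toy `embed` MLP below stands in for a pre-trained backbone) trained on seen classes and then reused, frozen, to classify unseen-class queries by nearest class prototype; the paper's set-to-set adaptation step is omitted here:

```python
import torch
import torch.nn.functional as F

# Toy embedding network, assumed to have been trained on the seen classes.
embed = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
)
embed.eval()  # frozen after training on seen classes

def prototype_classify(support_x, support_y, query_x, num_classes):
    """Classify unseen-class queries by nearest class prototype in embedding space."""
    with torch.no_grad():
        z_support = F.normalize(embed(support_x), dim=-1)  # (n_support, d)
        z_query = F.normalize(embed(query_x), dim=-1)      # (n_query, d)
    # Average the few labeled embeddings per unseen class to form prototypes.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(num_classes)
    ])                                                      # (num_classes, d)
    # Cosine similarity between queries and prototypes; highest score wins.
    logits = z_query @ prototypes.t()
    return logits.argmax(dim=-1)

# Toy 5-way 1-shot episode with random features standing in for real images.
support_x = torch.randn(5, 512)
support_y = torch.arange(5)
query_x = torch.randn(10, 512)
print(prototype_classify(support_x, support_y, query_x, num_classes=5))
```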
Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders
Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space.
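A rough sketch of the cross-modal alignment idea, assuming two small VAE branches (one for image features, one for class embeddings) tied together by cross-reconstruction; the `TinyVAE` class, dimensions, and loss weights are illustrative and not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE branch for one modality (image features or class embeddings)."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(latent_dim, in_dim)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar

img_vae = TinyVAE(in_dim=2048)  # image-feature branch
cls_vae = TinyVAE(in_dim=312)   # class-embedding (e.g. attribute) branch

def aligned_vae_loss(img_feat, cls_emb):
    z_img, mu_i, lv_i = img_vae.encode(img_feat)
    z_cls, mu_c, lv_c = cls_vae.encode(cls_emb)
    mse = nn.functional.mse_loss
    # Within-modality reconstruction.
    recon = mse(img_vae.dec(z_img), img_feat) + mse(cls_vae.dec(z_cls), cls_emb)
    # Cross-reconstruction: decode each modality from the other's latent code,
    # which pushes the two latent spaces to align.
    cross = mse(img_vae.dec(z_cls), img_feat) + mse(cls_vae.dec(z_img), cls_emb)
    # Standard VAE KL terms for both branches.
    kl = (-0.5 * (1 + lv_i - mu_i.pow(2) - lv_i.exp()).sum(-1).mean()
          - 0.5 * (1 + lv_c - mu_c.pow(2) - lv_c.exp()).sum(-1).mean())
    return recon + cross + 0.1 * kl

# Toy batch of paired image features and class embeddings.
loss = aligned_vae_loss(torch.randn(4, 2048), torch.randn(4, 312))
print(loss.item())
```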
Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning
In this paper, we investigate the problem of generalized few-shot learning (GFSL): during deployment, a model is required to learn tail categories from a few shots while simultaneously classifying the head classes.
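A hedged sketch of one way such a joint head-plus-tail classifier can be assembled, assuming base-class weights already learned on the head classes and tail-class weights synthesized from the few support embeddings; the cosine classifier below is a common baseline, not the paper's specific synthesis module:

```python
import torch
import torch.nn.functional as F

def joint_gfsl_logits(features, base_weights, novel_support, novel_labels, n_novel):
    """Score queries against head (base) and tail (novel) classes in one softmax.

    base_weights : (n_base, d) classifier weights learned on the head classes.
    novel_support: (k * n_novel, d) embedded few-shot examples of tail classes.
    """
    # Synthesize tail-class weights as the mean embedding of each class's few shots.
    novel_weights = torch.stack([
        novel_support[novel_labels == c].mean(dim=0) for c in range(n_novel)
    ])
    all_weights = F.normalize(torch.cat([base_weights, novel_weights]), dim=-1)
    features = F.normalize(features, dim=-1)
    # Cosine classifier over the union of head and tail classes.
    return features @ all_weights.t()

# Toy setup: 100 head classes, 5 tail classes with 1 shot each, 64-d embeddings.
logits = joint_gfsl_logits(
    features=torch.randn(8, 64),
    base_weights=torch.randn(100, 64),
    novel_support=torch.randn(5, 64),
    novel_labels=torch.arange(5),
    n_novel=5,
)
print(logits.shape)  # (8, 105): joint scores over head + tail classes
```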
From Generalized zero-shot learning to long-tail with class descriptors
Real-world data is predominantly unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
Generalized Few-Shot Video Classification with Video Retrieval and Feature Generation
Few-shot learning aims to recognize novel classes from a few examples.
Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection
Although recent works demonstrate that multi-level matching plays an important role in transferring learned knowledge from seen training classes to novel testing classes, they rely on a static similarity measure and overly fine-grained matching components.
Exploring the Limits of Natural Language Inference Based Setup for Few-Shot Intent Detection
Our method achieves state-of-the-art results on 1-shot and 5-shot intent detection tasks, with gains of 2-8 percentage points in F1 score on four benchmark datasets.
Better Generalized Few-Shot Learning Even Without Base Data
In this paper, we overcome this limitation by proposing a simple yet effective normalization method that controls both the mean and the variance of the novel-class weight distribution without using any base samples, thereby achieving satisfactory performance on both novel and base classes.
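A minimal sketch of the idea, assuming novel-class weight vectors obtained from the few support examples; re-centring each vector and rescaling it to a chosen target standard deviation (the value below is illustrative) keeps novel-class logits on a scale comparable to the base classes without touching any base samples:

```python
import torch

def normalize_novel_weights(novel_weights, target_std=0.05):
    """Re-centre and re-scale each novel-class weight vector.

    Sets the mean of every weight vector to zero and its standard deviation
    to target_std, so novel-class logits stay on a comparable scale to the
    base classes without needing any base samples. target_std is illustrative.
    """
    mean = novel_weights.mean(dim=1, keepdim=True)
    std = novel_weights.std(dim=1, keepdim=True)
    return (novel_weights - mean) / (std + 1e-8) * target_std

# Toy example: weights for 5 novel classes over a 64-d feature space,
# e.g. obtained by averaging the few support embeddings per class.
raw = torch.randn(5, 64) * 0.3 + 0.7  # skewed mean and inflated variance
calibrated = normalize_novel_weights(raw)
print(raw.mean().item(), raw.std().item())
print(calibrated.mean().item(), calibrated.std().item())
```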