Continual Few-Shot Learning
7 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Defining Benchmarks for Continual Few-Shot Learning
Both few-shot and continual learning have seen substantial progress in recent years due to the introduction of proper benchmarks.
Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet, and tiered-ImageNet benchmarks.
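The core idea of a Mahalanobis-distance classifier can be sketched as follows. This is a simplified illustration, not the paper's method: instead of Simple CNAPS' hierarchical regularization, each class covariance is shrunk toward the identity with an assumed hyperparameter `reg`, and classification picks the class whose regularized Mahalanobis distance to the query embedding is smallest.

```python
import numpy as np

def mahalanobis_classifier(support_x, support_y, query_x, reg=1.0):
    """Sketch of a Mahalanobis-distance few-shot classifier.

    support_x: (n, d) support embeddings; support_y: (n,) class labels;
    query_x: (m, d) query embeddings. `reg` is an assumed shrinkage
    hyperparameter, not taken from the paper.
    """
    classes = np.unique(support_y)
    means, precisions = [], []
    for c in classes:
        xc = support_x[support_y == c]
        mu = xc.mean(axis=0)
        # Per-class covariance; fall back to identity for 1-shot classes.
        cov = np.cov(xc, rowvar=False) if len(xc) > 1 else np.eye(support_x.shape[1])
        cov = cov + reg * np.eye(cov.shape[0])  # shrinkage toward identity
        means.append(mu)
        precisions.append(np.linalg.inv(cov))
    # Squared Mahalanobis distance from every query to every class mean.
    dists = np.stack([
        np.einsum('nd,dk,nk->n', query_x - mu, P, query_x - mu)
        for mu, P in zip(means, precisions)
    ], axis=1)
    return classes[dists.argmin(axis=1)]
```

In the full method, the embeddings would come from the adaptive feature extractor; here they are treated as given.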
Constrained Few-shot Class-incremental Learning
Moreover, such learning must respect certain memory and computational constraints: (i) training samples are limited to only a few per class, (ii) the computational cost of learning a novel class remains constant, and (iii) the memory footprint of the model grows at most linearly with the number of classes observed.
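A prototype-based class store illustrates how constraints (ii) and (iii) can be met. This is an assumed design for illustration, not the paper's exact method: adding a class costs O(shots × dim) regardless of how many classes already exist, and memory grows linearly in the class count (one prototype vector per class).

```python
import numpy as np

class PrototypeMemory:
    """Sketch of a class store satisfying constant-cost class addition
    and linear memory growth (hypothetical design, for illustration)."""

    def __init__(self, dim):
        self.dim = dim
        self.prototypes = {}  # class id -> mean embedding (one vector per class)

    def add_class(self, class_id, few_shot_embeddings):
        # Constant-cost update: average the few support embeddings;
        # cost does not depend on the number of classes already stored.
        self.prototypes[class_id] = np.mean(few_shot_embeddings, axis=0)

    def predict(self, embedding):
        # Nearest-prototype classification over all classes seen so far.
        ids = list(self.prototypes)
        dists = [np.linalg.norm(embedding - self.prototypes[c]) for c in ids]
        return ids[int(np.argmin(dists))]
```

In practice the embeddings would come from a frozen or slowly adapted backbone; the store itself never revisits old training samples.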
Neural Stored-program Memory
Neural networks augmented with external memory can simulate computer behaviors.
ACIL: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection
Class-incremental learning (CIL) learns a classification model from training data in which classes of different kinds arrive progressively.
Expanding continual few-shot learning benchmarks to include recognition of specific instances
Continual learning and few-shot learning are important frontiers in the progress toward broader Machine Learning (ML) capabilities.