Meta-Learning
1180 papers with code • 4 benchmarks • 19 datasets
Meta-learning is a methodology concerned with "learning to learn": designing machine learning algorithms that improve how they learn from experience across tasks.
(Image credit: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks)
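The credited paper (MAML) is a canonical example of learning an initialization that adapts quickly to new tasks. As a toy illustration only, here is a minimal first-order MAML-style (FOMAML) sketch on synthetic one-parameter regression tasks; the task distribution, step sizes, and all names are invented for the example and are not from any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task is a 1-D regression y = a * x with a task-specific slope a.
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss_grad(w, x, y):
    # Squared error for the one-parameter model y_hat = w * x,
    # returned together with its gradient w.r.t. w.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2.0 * err * x)

w = 0.0                    # the meta-initialization being learned
alpha, beta = 0.1, 0.01    # inner (adaptation) and outer (meta) step sizes

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                                   # batch of tasks
        x, y = sample_task()
        _, g = loss_grad(w, x[:10], y[:10])              # support set
        w_task = w - alpha * g                           # one inner gradient step
        _, g_query = loss_grad(w_task, x[10:], y[10:])   # query set
        meta_grad += g_query     # first-order (FOMAML) gradient approximation
    w -= beta * meta_grad / 5
```

After meta-training, `w` sits near the center of the task distribution, so a single inner gradient step on a handful of examples from a new task already reduces its loss substantially; that fast adaptation is the point of the learned initialization.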
Libraries
Use these libraries to find Meta-Learning models and implementations.
Datasets
Latest papers
Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation
Compositional generalization, the model's ability to generate text with new attribute combinations obtained by recombining single attributes seen during training, is a crucial property for multi-aspect controllable text generation (MCTG) methods.
Efficient Automatic Tuning for Data-driven Model Predictive Control via Meta-Learning
AutoMPC is a Python package that automates and optimizes data-driven model predictive control.
Meta-Learning with Generalized Ridge Regression: High-dimensional Asymptotics, Optimality and Hyper-covariance Estimation
Finally, we propose and analyze an estimator of the inverse covariance matrix of random regression coefficients based on data from the training tasks.
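The snippet above concerns estimating the covariance of task-level regression coefficients and using it in generalized ridge regression. As a hedged sketch of that general idea (not the paper's estimator), the toy below fits per-task least squares on many training tasks, takes the sample covariance of those fits as a hyper-covariance estimate, and uses it to shrink coefficients on a new small task; the task distribution and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_train_tasks = 3, 200
sigma = 0.5  # per-observation noise level, assumed known here

# Task-generating distribution: coefficients beta ~ N(0, Omega_true).
Omega_true = np.diag([4.0, 1.0, 0.25])

def sample_task(n=30):
    beta = rng.multivariate_normal(np.zeros(d), Omega_true)
    X = rng.normal(size=(n, d))
    y = X @ beta + sigma * rng.normal(size=n)
    return X, y, beta

# Crude hyper-covariance estimate: sample covariance of per-task
# least-squares fits (biased upward by their estimation noise).
betas = []
for _ in range(n_train_tasks):
    X, y, _ = sample_task()
    betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
Omega_hat = np.cov(np.array(betas).T)

def gen_ridge(X, y, Omega):
    # Generalized ridge regression: the prior covariance Omega sets a
    # per-direction shrinkage strength instead of a single scalar penalty.
    return np.linalg.solve(X.T @ X + sigma**2 * np.linalg.inv(Omega), X.T @ y)
```

On a new task with few observations, `gen_ridge` with the estimated hyper-covariance typically recovers the coefficients with lower error than plain least squares, since it borrows strength from the training tasks.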
Cross-domain Multi-modal Few-shot Object Detection via Rich Text
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks due to generating richer features.
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
Conventional wisdom suggests parameter-efficient fine-tuning of foundation models as the state-of-the-art method for transfer learning in vision, replacing the rich literature of alternatives such as meta-learning.
XB-MAML: Learning Expandable Basis Parameters for Effective Meta-Learning with Wide Task Coverage
Meta-learning, which pursues an effective initialization model, has emerged as a promising approach to handling unseen tasks.
Online Adaptation of Language Models with a Memory of Amortized Contexts
We propose an amortized feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank.
Rethinking of Encoder-based Warm-start Methods in Hyperparameter Optimization
In this work, we evaluate Dataset2Vec and liltab on two common meta-tasks: representing entire datasets and warm-starting hyperparameter optimization.
Learning to Defer to a Population: A Meta-Learning Approach
The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert.
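To make the L2D idea concrete, here is a minimal confidence-threshold sketch: the system routes inputs to a simulated human expert whenever the model's confidence is low. This is only an illustration of the deferral principle, not the paper's meta-learning approach; the toy model, expert, and threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_predict(x):
    # Toy binary classifier: confident and accurate for |x| > 0.5,
    # near-chance in its "difficult" region |x| <= 0.5.
    p = 0.95 if abs(x) > 0.5 else 0.55
    label = int(x > 0)
    pred = label if rng.random() < p else 1 - label
    return pred, p

def expert_predict(x):
    # Simulated human expert: correct 90% of the time everywhere.
    label = int(x > 0)
    return label if rng.random() < 0.9 else 1 - label

def l2d_predict(x, threshold=0.7):
    # Defer to the expert whenever model confidence falls below the threshold.
    pred, conf = model_predict(x)
    if conf < threshold:
        return expert_predict(x), True    # deferred
    return pred, False                    # handled autonomously

xs = rng.uniform(-1, 1, size=2000)
labels = (xs > 0).astype(int)
preds, deferred = zip(*(l2d_predict(x) for x in xs))
acc_system = float(np.mean(np.array(preds) == labels))
```

Because difficult inputs (where the model is near chance) are handed to the more reliable expert, the combined system is more accurate than the model alone, at the cost of the deferred fraction of the workload.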
On Latency Predictors for Neural Architecture Search
We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes.