Meta-learning is a methodology concerned with "learning to learn" machine learning algorithms.
To solve this few-shot problem under evolving attacks, we propose a meta-learning-based robust detection method that detects new adversarial attacks from limited examples.
This paper proposes a recommender system that can estimate user preferences from only a small number of items, thereby alleviating the cold-start problem.
Quantum Neural Networks (QNNs) are a promising variational learning paradigm with applications on near-term quantum processors; however, they still face significant challenges.
Existing approaches to learning word embeddings often assume that each word occurs sufficiently often in the corpus, so that its representation can be accurately estimated from its contexts.
For many machine learning algorithms, predictive performance is critically affected by the hyperparameter values used to train them.
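As a minimal sketch of this sensitivity (hypothetical toy data, not from the source), the snippet below fits a one-parameter linear model with plain gradient descent and sweeps a single hyperparameter, the learning rate; the resulting training losses differ by orders of magnitude:

```python
# Toy illustration: predictive quality of gradient descent on y = 2x
# depends heavily on one hyperparameter, the learning rate.
def train(lr, steps=100):
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    # final mean squared error after training
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

losses = {lr: train(lr) for lr in (0.001, 0.01, 0.1)}
best = min(losses, key=losses.get)
```

Here a too-small learning rate (0.001) leaves the model far from convergence within the step budget, while 0.1 fits almost exactly; real hyperparameter tuning applies the same sweep-and-compare logic on held-out data.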
Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks.
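This initialization-plus-adaptation idea can be sketched with a first-order meta-learning loop in the style of Reptile (a hypothetical toy setup, not the paper's method): tasks are 1-D regressions y = a*x with task-specific slope a, and a shared initialization w0 is meta-trained so that one gradient step adapts it well to a new task.

```python
import random

random.seed(0)

def task_grad(w, a, xs):
    # gradient of mean squared error for task y = a * x
    return sum(2 * (w * x - a * x) * x for x in xs) / len(xs)

def adapt(w0, a, xs, inner_lr=0.1):
    # inner loop: one gradient step from the shared initialization
    return w0 - inner_lr * task_grad(w0, a, xs)

# Outer loop (Reptile-style): nudge w0 toward each task's adapted weights.
w0 = 0.0
for _ in range(500):
    a = random.uniform(1.0, 3.0)                   # sample a task
    xs = [random.uniform(-1, 1) for _ in range(5)]  # few-shot examples
    w_adapted = adapt(w0, a, xs)
    w0 += 0.5 * (w_adapted - w0)                   # meta-update
```

After meta-training, w0 settles near the center of the task distribution, so a single inner step on a handful of examples from an unseen task already moves the model substantially toward that task's solution.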
We propose to use these predictors as base learners to learn representations for few-shot learning, and show that they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
Finally, we demonstrate that a basic online updating strategy with our learned representation is competitive with rehearsal-based methods for continual learning.