
Regularizing Meta-Learning via Gradient Dropout

With the growing attention on learning-to-learn new tasks using only a few examples, meta-learning has been widely applied to numerous problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are not enough training tasks for the meta-learners to generalize...
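As a rough illustration of the technique named in the title, the sketch below shows one way gradient dropout could be applied inside a MAML-style inner-loop update: each gradient coordinate is randomly zeroed before the adaptation step, which injects noise into the inner optimization and discourages the meta-learner from overfitting to the limited training tasks. The function name, the drop rate, and the choice of a Bernoulli mask are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): dropout applied to inner-loop
# gradients in a MAML-style update.
import torch
import torch.nn as nn
import torch.nn.functional as F

def inner_update_with_gradient_dropout(model, loss, lr=0.01, drop_rate=0.1):
    """One inner-loop SGD step where each gradient entry is randomly zeroed."""
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    updated = []
    for p, g in zip(model.parameters(), grads):
        # Bernoulli mask: keep each gradient coordinate with prob 1 - drop_rate
        # (assumed masking scheme for illustration).
        mask = torch.bernoulli(torch.full_like(g, 1.0 - drop_rate))
        updated.append(p - lr * g * mask)
    return updated  # "fast weights" to be used for the outer-loop loss

# Toy usage on a tiny regression "task".
model = nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = F.mse_loss(model(x), y)
fast_weights = inner_update_with_gradient_dropout(model, loss)
```

In practice the returned fast weights would be used to compute the query-set loss for the outer (meta) update; the sketch stops at the masked inner step to keep the regularization idea in focus.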
