Regularizing Meta-Learning via Gradient Dropout

13 Apr 2020 · Hung-Yu Tseng, Yi-Wen Chen, Yi-Hsuan Tsai, Sifei Liu, Yen-Yu Lin, Ming-Hsuan Yang

With the growing attention on learning to learn new tasks using only a few examples, meta-learning has been widely applied to problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are not enough training tasks for the meta-learners to generalize...
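The title's core idea, dropping out entries of the inner-loop gradient to regularize meta-learning, can be sketched as follows. This is a minimal illustration on a toy quadratic loss, not the paper's implementation; the `drop_rate` hyperparameter, the `gradient_dropout` helper, and the Bernoulli masking scheme are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_dropout(grad, drop_rate=0.3):
    """Zero out each gradient entry independently with probability
    drop_rate (a hypothetical Bernoulli-mask variant of gradient dropout)."""
    mask = rng.random(grad.shape) >= drop_rate
    return grad * mask

# Toy MAML-style inner-loop adaptation on f(w) = ||w - target||^2.
w = np.zeros(4)          # task-specific parameters
target = np.ones(4)      # optimum of the toy loss
inner_lr = 0.1

for _ in range(5):
    grad = 2.0 * (w - target)               # exact gradient of the toy loss
    w = w - inner_lr * gradient_dropout(grad)  # regularized inner update
```

Because some coordinates are randomly masked at each step, the adapted parameters take a noisier path toward the task optimum, which is the intended regularization effect on the inner loop.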
