no code implementations • 24 Nov 2020 • Aneesh Pappu, Brooks Paige
When we find that they are not, we explore pretraining and the meta-learning method MAML (and variants FO-MAML and ANIL) for improving graph neural network performance by transfer learning from related tasks.
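The abstract names first-order MAML (FO-MAML) as one of the meta-learning methods explored. As a rough illustration of that idea, here is a minimal FO-MAML sketch on toy 1-D regression tasks; the task data, hyperparameters, and helper names (`loss_grad`, `make_task`) are illustrative assumptions, not taken from the paper, which applies these methods to graph neural networks.

```python
import random

def loss_grad(w, batch):
    """Mean gradient of squared error for the 1-D linear model y = w * x."""
    return sum(2.0 * x * (w * x - y) for x, y in batch) / len(batch)

def fomaml(tasks, meta_lr=0.02, inner_lr=0.01, steps=500):
    # First-order MAML: take one inner gradient step on a task's support
    # set, then apply the post-adaptation query gradient directly to the
    # meta-parameter, ignoring second-order terms (the FO-MAML shortcut).
    w = 0.0
    for _ in range(steps):
        support, query = random.choice(tasks)
        w_adapted = w - inner_lr * loss_grad(w, support)   # inner adaptation
        w -= meta_lr * loss_grad(w_adapted, query)         # outer meta-update
    return w

def make_task(slope):
    # Hypothetical toy task: noiseless 1-D regression with a given slope.
    support = [(x, slope * x) for x in (1.0, 2.0)]
    query = [(x, slope * x) for x in (1.5, 2.5)]
    return support, query

# Meta-training over two tasks drives w toward an initialization that
# adapts quickly to either slope (here, between the two task slopes).
w = fomaml([make_task(2.0), make_task(3.0)])
```

Full MAML would instead differentiate through the inner step (a second-order gradient), and ANIL restricts inner-loop adaptation to the final layer; this sketch shows only the first-order variant.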
1 code implementation • CoNLL 2019 • Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning
Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks.