Search Results for author: Aneesh Pappu

Found 2 papers, 1 paper with code

Making Graph Neural Networks Worth It for Low-Data Molecular Machine Learning

no code implementations • 24 Nov 2020 • Aneesh Pappu, Brooks Paige

When we find that graph neural networks are not worth it in this low-data regime, we explore pretraining and the meta-learning method MAML (and its variants FO-MAML and ANIL) for improving graph neural network performance by transfer learning from related tasks.

Tasks: BIG-bench Machine Learning, Meta-Learning, +1
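For readers unfamiliar with MAML's inner/outer-loop structure referenced in the abstract above, here is a minimal PyTorch sketch of second-order MAML on a toy regression task. The two-layer MLP, layer sizes, and synthetic task sampler are illustrative stand-ins, not the paper's molecular GNN or datasets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy 2-layer regressor standing in for a molecular GNN (sizes are illustrative).
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_lr = 0.01

def forward_with(x, params):
    # Functional forward pass so we can evaluate task-adapted parameters.
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

def sample_task(n=8, dim=16):
    # Placeholder "related task": a random linear function, split into
    # a support set (for adaptation) and a query set (for meta-training).
    w = torch.randn(dim, 1)
    xs, xq = torch.randn(n, dim), torch.randn(n, dim)
    return (xs, xs @ w), (xq, xq @ w)

for step in range(200):
    (xs, ys), (xq, yq) = sample_task()
    params = list(net.parameters())
    # Inner loop: one gradient step on the task's support set.
    inner_loss = F.mse_loss(forward_with(xs, params), ys)
    # create_graph=True keeps second-order terms (full MAML);
    # create_graph=False would give the cheaper FO-MAML variant.
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer loop: evaluate the adapted parameters on the query set and
    # backpropagate through the adaptation step to the shared initialisation.
    outer_loss = F.mse_loss(forward_with(xq, adapted), yq)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
```

ANIL, also mentioned in the abstract, follows the same loop but adapts only the final layer in the inner step while the body is updated solely by the outer loop.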

Do Massively Pretrained Language Models Make Better Storytellers?

1 code implementation • CoNLL 2019 • Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning

Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks.

Tasks: Natural Language Understanding, Story Generation
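As a minimal sketch of the kind of open-ended story continuation this paper studies, the snippet below samples from the public Hugging Face GPT-2 checkpoint. The prompt and sampling settings are illustrative assumptions, not the paper's evaluation protocol.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical story prompt; any short opening line works here.
prompt = "Once upon a time, a cartographer mapped a city that did not exist."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling: stochastic decoding choices like this tend to produce
# more varied, story-like continuations than greedy decoding.
output = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=40,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```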
