We model retrieval decisions as latent variables over sets of relevant documents.
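One way to make this concrete (an illustrative formulation, not necessarily the paper's exact factorization): treat the retrieved set $Z$ as a latent variable and marginalize it out of the answer likelihood.

```latex
% Illustrative marginal likelihood with the retrieved document set Z latent;
% D is the document collection. The paper's exact decomposition may differ.
p(y \mid x) = \sum_{Z \subseteq \mathcal{D}} p(y \mid x, Z)\, p(Z \mid x)
```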
In this work, we introduce a series of strong transformer models for multi-hop question generation, including a graph-augmented transformer that leverages relations between entities in the text.
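A minimal sketch of one way such graph augmentation can work: add a bias to the attention logits wherever two tokens' entities are related. The names (`graph_attention`, `rel_bias`, `adj`) are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def graph_attention(x, adj, w_q, w_k, w_v, rel_bias=1.0):
    """x: (seq, d) token states; adj: (seq, seq) 0/1 entity-relation matrix."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    logits = q @ k.T / np.sqrt(k.shape[-1])
    logits += rel_bias * adj  # boost attention along entity-relation edges
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq, d = 5, 8
x = rng.normal(size=(seq, d))
adj = np.zeros((seq, seq))
adj[0, 3] = adj[3, 0] = 1.0  # tokens 0 and 3 share an entity relation
w_q, w_k, w_v = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
out = graph_attention(x, adj, w_q, w_k, w_v)
print(out.shape)  # (5, 8)
```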
Our empirical analysis demonstrates that these syntax-infused transformers obtain state-of-the-art results on semantic role labeling (SRL) and relation extraction tasks.
We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources.
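To illustrate the setup, here is a sketch of sampling one few-shot episode from a randomly chosen domain, in the style of multi-domain benchmarks such as Meta-Dataset. The `domains` structure and names are assumptions for illustration, not the benchmark's API.

```python
import random

def sample_episode(domains, n_way=5, k_shot=1, n_query=10):
    """domains: dict mapping domain name -> dict of class -> list of examples."""
    domain = random.choice(list(domains))
    classes = random.sample(list(domains[domain]), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(domains[domain][cls], k_shot + n_query)
        support += [(ex, label) for ex in examples[:k_shot]]  # labeled shots
        query += [(ex, label) for ex in examples[k_shot:]]    # evaluation set
    return domain, support, query

# Toy usage with two synthetic domains of 10 classes, 20 examples each.
toy = {"birds": {f"class_{i}": list(range(20)) for i in range(10)},
       "textures": {f"class_{i}": list(range(20)) for i in range(10)}}
domain, support, query = sample_episode(toy)
print(domain, len(support), len(query))  # e.g. "birds" 5 50
```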
Reinforcement learning algorithms are known to be sample-inefficient, and performance on one task can often be substantially improved by leveraging information (e.g., via pre-training) from other related tasks.
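A minimal sketch of one such transfer scheme, assuming a shared encoder between tasks; `PolicyNet` and the two-task setup are hypothetical, not a specific library's API.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.head = nn.Linear(64, n_actions)  # task-specific output head

    def forward(self, obs):
        return self.head(self.encoder(obs))

# Stand-in for a policy already trained on a related source task.
source_policy = PolicyNet(obs_dim=8, n_actions=2)

# Target-task policy: reuse the pre-trained encoder, train a fresh head,
# then fine-tune everything on the new task.
target_policy = PolicyNet(obs_dim=8, n_actions=4)
target_policy.encoder.load_state_dict(source_policy.encoder.state_dict())
optimizer = torch.optim.Adam(target_policy.parameters(), lr=3e-4)
```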