no code implementations • 20 Dec 2022 • Dheeru Dua, Emma Strubell, Sameer Singh, Pat Verga
Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on standard Wikipedia-style benchmarks.
1 code implementation • 15 Dec 2022 • Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Massimiliano Ciaramita, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Lierni Sestorain Saralegui, Tal Schuster, William W. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, Kellie Webster
We take human annotations as a gold standard and show that a correlated automatic metric is suitable for development.
no code implementations • 6 Oct 2022 • Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, William W. Cohen
Although language models store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, while still incurring huge computational costs.
no code implementations • 1 Jul 2022 • Wenhu Chen, William W. Cohen, Michiel de Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting
In this position paper, we propose a new approach to generating a type of knowledge base (KB) from text, based on question generation and entity linking.
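As a rough illustration of what a question-generation-and-entity-linking KB could look like, the following is a minimal, hypothetical sketch: facts are stored as generated questions paired with entity-linked answers rather than (subject, relation, object) triples. The data layout and the stubbed `generate_questions` / `link_entity` helpers are assumptions for clarity, not the paper's released system or any real API.

```python
# Hypothetical sketch: a KB whose entries are generated QA pairs with
# entity-linked answers. The two helper functions are placeholder stubs.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class QAFact:
    question: str       # question generated from a source sentence
    answer_span: str    # answer text as it appears in the source
    answer_entity: str  # KB identifier produced by an entity linker
    source: str         # provenance: the sentence the fact was derived from


def generate_questions(sentence: str) -> List[Tuple[str, str]]:
    """Placeholder for a question-generation model: returns (question, answer_span) pairs."""
    raise NotImplementedError


def link_entity(mention: str) -> str:
    """Placeholder for an entity linker: maps a mention to a KB identifier."""
    raise NotImplementedError


def build_qa_kb(corpus: List[str]) -> List[QAFact]:
    # For each sentence, generate questions and ground their answers in the KB.
    kb = []
    for sentence in corpus:
        for question, answer_span in generate_questions(sentence):
            kb.append(QAFact(question, answer_span, link_entity(answer_span), sentence))
    return kb
```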
no code implementations • 28 Apr 2022 • Yue Dong, John Wieting, Pat Verga
In this work, we show that these entities are not aberrations, but instead require using external world knowledge to infer reasoning paths from entities in the source.
no code implementations • 10 Apr 2022 • Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, William Cohen
Retrieval-augmented language models have recently become the standard for knowledge-intensive tasks.
1 code implementation • AKBC 2021 • Keshav Kolluru, Martin Rezk, Pat Verga, William W. Cohen, Partha Talukdar
This makes it challenging to link KG facts to sentences in languages outside this limited set.
no code implementations • NAACL 2021 • Pat Verga, Haitian Sun, Livio Baldini Soares, William Cohen
Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive.
no code implementations • 14 Feb 2021 • Haitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual KB (VKB) trained entirely from text.
no code implementations • 2 Jul 2020 • Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen
Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information.
no code implementations • 2 Dec 2019 • Trapit Bansal, Pat Verga, Neha Choudhary, Andrew McCallum
Understanding the meaning of text often involves reasoning about entities and their relationships.
3 code implementations • 3 Apr 2019 • Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum
We introduce deep inside-outside recursive autoencoders (DIORA), a fully unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.
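To make the chart-based idea concrete, below is a minimal sketch of the "inside" pass used by inside-outside recursive autoencoders: every span's representation is a soft-weighted combination over all binary splits. The composition function (a tanh layer over concatenated children), the bilinear split scorer, and the 64-dimensional vectors are illustrative assumptions, not the released DIORA architecture.

```python
# Minimal sketch of a CKY-style inside pass over all spans of a sentence.
import torch
import torch.nn as nn


class InsidePass(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Composition: maps a concatenated (left, right) child pair to a parent vector.
        self.compose = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        # Scores how plausible a particular (left, right) split is.
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, leaves: torch.Tensor) -> torch.Tensor:
        # leaves: (n, dim) word vectors; chart[i][j] holds the vector for span (i, j].
        n, dim = leaves.shape
        chart = [[None] * (n + 1) for _ in range(n)]
        for i in range(n):
            chart[i][i + 1] = leaves[i]
        for length in range(2, n + 1):           # span length
            for i in range(0, n - length + 1):   # span start
                j = i + length
                cands, scores = [], []
                for k in range(i + 1, j):        # all binary splits of span (i, j]
                    left, right = chart[i][k], chart[k][j]
                    cands.append(self.compose(torch.cat([left, right])))
                    scores.append(self.score(left, right))
                weights = torch.softmax(torch.stack(scores).squeeze(-1), dim=0)
                # Soft-weighted sum over splits approximates marginalizing over trees.
                chart[i][j] = (weights.unsqueeze(-1) * torch.stack(cands)).sum(0)
        return chart[0][n]  # root representation of the full sentence


# Example: root = InsidePass(64)(torch.randn(5, 64))
```

In the full method, a complementary outside pass computes a context vector for every span, and training reconstructs each leaf word from its outside representation, which is what makes the model an autoencoder and lets the highest-scoring splits induce a parse tree without supervision.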