Search Results for author: Pat Verga

Found 12 papers, 3 papers with code

MuRAG: Multimodal Retrieval-Augmented Generator for Open Question Answering over Images and Text

no code implementations • 6 Oct 2022 • Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, William W. Cohen

While language models store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, despite incurring huge computational costs.

Open-Ended Question Answering · Retrieval · +2
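
As a rough illustration of the retrieve-then-generate loop the title describes, here is a minimal sketch in Python. The encoder and generator are stubs, and every function name is illustrative rather than the paper's API:

    import numpy as np

    def embed(item):
        """Hypothetical dense encoder for text or image items.
        Stand-in: a fixed random projection keyed by the item's hash."""
        rng = np.random.default_rng(abs(hash(item)) % (2**32))
        return rng.standard_normal(128)

    def retrieve(question, memory, k=3):
        """Score every memory entry (text snippet or image caption)
        against the question embedding; return the top-k entries."""
        q = embed(question)
        return sorted(memory, key=lambda m: -np.dot(q, embed(m)))[:k]

    def answer(question, memory):
        """Retrieval-augmented generation: prepend retrieved evidence to
        the question and hand the combined context to a generator (stubbed)."""
        context = " ".join(retrieve(question, memory)) + " " + question
        return f"<generated answer conditioned on: {context!r}>"

    memory = ["caption: the Eiffel Tower at night", "Paris is the capital of France"]
    print(answer("Where is the Eiffel Tower?", memory))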

QA Is the New KR: Question-Answer Pairs as Knowledge Bases

no code implementations • 1 Jul 2022 • Wenhu Chen, William W. Cohen, Michiel de Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting

In this position paper, we propose a new approach to generating a type of knowledge base (KB) from text, based on question generation and entity linking.

Entity Linking · Position · +2
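
A minimal sketch of the idea, assuming stub models for question generation and entity linking; all names below are hypothetical, not the paper's code:

    from collections import defaultdict

    def generate_qa_pairs(passage):
        """Stub for a question-generation model: in the paper's setting,
        a learned model proposes (question, answer) pairs from text."""
        return [("Where was Marie Curie born?", "Warsaw")]

    def link_entity(answer_text):
        """Stub entity linker: map an answer string to a KB entity id."""
        return {"Warsaw": "Q270"}.get(answer_text)

    def build_qa_kb(corpus):
        """The KB is an index from linked answer entities to the questions
        they answer, so lookup becomes QA over the stored pairs."""
        kb = defaultdict(list)
        for passage in corpus:
            for question, answer in generate_qa_pairs(passage):
                kb[link_entity(answer)].append((question, answer))
        return kb

    kb = build_qa_kb(["Marie Curie was born in Warsaw."])
    print(kb["Q270"])  # [('Where was Marie Curie born?', 'Warsaw')]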

Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization

no code implementations • 28 Apr 2022 • Yue Dong, John Wieting, Pat Verga

In this work, we show that these entities are not aberrations but instead require utilizing external world knowledge to infer reasoning paths from entities in the source.

Abstractive Text Summarization · World Knowledge
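
A toy sketch of the underlying check: split a summary's entities into source-supported, world-knowledge-supported, and unsupported buckets. The entity "linker" and the KB here are stand-ins, not the paper's components:

    def entities(text):
        """Toy entity 'linker': capitalized tokens stand in for linked entities."""
        return {tok.strip(".,") for tok in text.split() if tok[:1].isupper()}

    def classify_summary_entities(source, summary, world_kb):
        """Bucket summary entities: supported by the source, supported only
        by external world knowledge, or unsupported (likely hallucinated).
        The paper's point is that the middle bucket is not an error."""
        src, summ = entities(source), entities(summary)
        in_source = summ & src
        in_world = {e for e in summ - src if e in world_kb}
        unsupported = summ - src - in_world
        return in_source, in_world, unsupported

    source = "Sundar Pichai announced the results."
    summary = "Google CEO Sundar Pichai announced the results."
    world_kb = {"Google", "CEO", "Sundar", "Pichai"}
    print(classify_summary_entities(source, summary, world_kb))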

Multilingual Fact Linking

1 code implementation • AKBC 2021 • Keshav Kolluru, Martin Rezk, Pat Verga, William W. Cohen, Partha Talukdar

Because KG facts are typically labeled in only a limited set of languages, it is challenging to link them to sentences written in other languages.

Re-Ranking · Retrieval · +2
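
A minimal sketch of a retrieve-then-re-rank fact linker. The multilingual encoder is replaced by token overlap here, and for simplicity both stages use the same scorer; in practice the re-ranker would be a stronger cross-encoder:

    def verbalize(fact):
        """Render a (subject, relation, object) KG triple as text so it
        can be matched against a sentence."""
        return " ".join(fact)

    def overlap_score(a, b):
        """Stand-in for a multilingual dual encoder: Jaccard token overlap."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(1, len(ta | tb))

    def link_facts(sentence, kg_facts, k=5):
        """Stage 1: cheap retrieval of candidate facts.
        Stage 2: re-rank the candidates."""
        candidates = sorted(kg_facts,
                            key=lambda f: -overlap_score(sentence, verbalize(f)))[:k]
        return sorted(candidates,
                      key=lambda f: -overlap_score(sentence, verbalize(f)))

    facts = [("Chopin", "born in", "Warsaw"), ("Chopin", "occupation", "composer")]
    print(link_facts("Frederic Chopin was born in Warsaw in 1810.", facts, k=2))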

Adaptable and Interpretable Neural Memory Over Symbolic Knowledge

no code implementations • NAACL 2021 • Pat Verga, Haitian Sun, Livio Baldini Soares, William Cohen

Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive.

Question Answering

Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

no code implementations • 2 Jul 2020 • Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen

Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information.

Language Modelling · Question Answering
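
The two entries above describe the same approach (conference version and preprint): a key-value memory whose keys encode (subject, relation) pairs and whose values encode objects, so facts can be edited by changing the table rather than retraining the LM. A minimal sketch, with a stubbed encoder and illustrative names:

    import numpy as np

    def encode(text, dim=64):
        """Hypothetical encoder: a fixed random projection keyed by the string."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(dim)

    class FactMemory:
        """Key-value memory over symbolic triples. Keys embed (subject,
        relation); values embed objects. Updating knowledge is a table edit."""
        def __init__(self):
            self.keys, self.values, self.objects = [], [], []

        def add(self, subject, relation, obj):
            self.keys.append(encode(subject + " " + relation))
            self.values.append(encode(obj))
            self.objects.append(obj)

        def lookup(self, query_text):
            """Soft attention over keys; return the value mix and top object."""
            q = encode(query_text)
            scores = np.array([k @ q for k in self.keys])
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            return weights @ np.array(self.values), self.objects[int(np.argmax(weights))]

    mem = FactMemory()
    mem.add("Charles Darwin", "born in", "Shrewsbury")
    mem.add("Charles Darwin", "field", "natural history")
    _, top = mem.lookup("Charles Darwin born in")
    print(top)  # Shrewsbury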

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders

3 code implementations • 3 Apr 2019 • Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum

We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.

Constituency Parsing · Sentence
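
A minimal sketch of the inside pass of such a chart parser: each span's vector is a softmax-over-splits weighted composition of its children. The outside pass and the reconstruction objective used for unsupervised training are omitted, and all names are illustrative:

    import numpy as np

    def compose(left, right, W):
        """Compose two child vectors into a parent vector (tanh layer)."""
        return np.tanh(W @ np.concatenate([left, right]))

    def inside_pass(word_vecs, W, w_score):
        """Fill the inside chart: chart[(i, j)] represents span i..j
        (j exclusive) as a score-weighted sum over all binary splits."""
        n, _ = word_vecs.shape
        chart = {(i, i + 1): word_vecs[i] for i in range(n)}
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length
                pairs = [compose(chart[(i, k)], chart[(k, j)], W)
                         for k in range(i + 1, j)]
                scores = np.array([w_score @ p for p in pairs])
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()
                chart[(i, j)] = sum(w * p for w, p in zip(weights, pairs))
        return chart

    rng = np.random.default_rng(0)
    d, n = 8, 4
    W, w_score = rng.standard_normal((d, 2 * d)), rng.standard_normal(d)
    chart = inside_pass(rng.standard_normal((n, d)), W, w_score)
    print(chart[(0, n)].shape)  # representation of the full-sentence span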
