Search Results for author: Livio Baldini Soares

Found 11 papers, 6 papers with code

Adaptable and Interpretable Neural Memory Over Symbolic Knowledge

no code implementations • NAACL 2021 • Pat Verga, Haitian Sun, Livio Baldini Soares, William Cohen

Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive.

Question Answering

Evaluating Explanations: How much do explanations from the teacher aid students?

1 code implementation • 1 Dec 2020 • Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen

While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated.

Question Answering, Text Classification

QED: A Framework and Dataset for Explanations in Question Answering

1 code implementation • 8 Sep 2020 • Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins

A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.

Explanation Generation, Natural Questions +1

Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

no code implementations • 2 Jul 2020 • Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen

Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information.

Language Modelling, Question Answering

Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking

no code implementations • AKBC 2020 • Thibault Févry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski

In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links.

Entity Linking
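
The entry above describes an entity linker that pairs a Transformer encoder with entity representations pretrained from Wikipedia links. As a rough sketch of that kind of architecture, under assumptions of my own (the class name, dimensions, and random initialization below are illustrative, not the paper's implementation), a mention representation from the encoder can score every candidate by dot product against an entity embedding table:

```python
# Hypothetical sketch of a Transformer-based entity linker.
# All names, sizes, and the random entity table are illustrative assumptions;
# in the paper the entity representations come from Wikipedia-link pretraining.
import torch
import torch.nn as nn

class EntityLinker(nn.Module):
    def __init__(self, hidden_dim: int, num_entities: int):
        super().__init__()
        # Stand-in for a pretrained entity embedding table.
        self.entity_embeddings = nn.Embedding(num_entities, hidden_dim)
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_embeddings: torch.Tensor, mention_index: int) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_dim)
        hidden = self.encoder(token_embeddings)
        mention_repr = hidden[:, mention_index]                # (batch, hidden_dim)
        # Score all entities at once with a single matrix multiply.
        return mention_repr @ self.entity_embeddings.weight.T  # (batch, num_entities)

model = EntityLinker(hidden_dim=64, num_entities=1000)
scores = model(torch.randn(2, 16, 64), mention_index=3)
predicted = scores.argmax(dim=-1)  # highest-scoring entity id per example
```

With a dot-product scorer like this, linking reduces to one matrix multiply against the entity table, which is part of what makes large-scale pretraining of the table attractive.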

New Protocols and Negative Results for Textual Entailment Data Collection

1 code implementation • EMNLP 2020 • Samuel R. Bowman, Jennimaria Palomaki, Livio Baldini Soares, Emily Pitler

Natural language inference (NLI) data has proven useful in benchmarking and, especially, as pretraining data for tasks requiring language understanding.

Natural Language Inference, Transfer Learning

Entities as Experts: Sparse Memory Access with Entity Supervision

1 code implementation • EMNLP 2020 • Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, Tom Kwiatkowski

We introduce a new model, Entities as Experts (EAE), that can access distinct memories of the entities mentioned in a piece of text.

Language Modelling, TriviaQA
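
To make "distinct memories of the entities mentioned in a piece of text" concrete, here is a hedged sketch of a sparse entity-memory lookup; the function name, the top-k choice, and all shapes are assumptions for illustration rather than the paper's exact mechanism. A mention representation queries a large entity embedding table, and only the few highest-scoring entries are mixed back into the token representation:

```python
# Hedged sketch of sparse entity-memory access (illustrative, not EAE's code).
import torch
import torch.nn.functional as F

def entity_memory_lookup(mention_repr, entity_memory, top_k=5):
    """mention_repr: (hidden,); entity_memory: (num_entities, hidden)."""
    scores = entity_memory @ mention_repr        # one score per entity
    top_scores, top_ids = scores.topk(top_k)     # sparse access: touch only k memories
    weights = F.softmax(top_scores, dim=-1)
    retrieved = weights @ entity_memory[top_ids] # weighted mixture, shape (hidden,)
    return retrieved, top_ids

entity_memory = torch.randn(100_000, 128)        # one row per known entity
mention = torch.randn(128)                       # span representation from a Transformer
retrieved, ids = entity_memory_lookup(mention, entity_memory)
augmented = mention + retrieved                  # fed into the remaining layers
```

The sparsity is the point: because only k rows of the memory participate per mention, the table can be far larger than what a dense layer of the same cost could hold.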

Learning Cross-Context Entity Representations from Text

no code implementations • 11 Jan 2020 • Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski

Language modeling tasks, in which words, or word-pieces, are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases.

Entity Linking, Language Modelling +1
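
The snippet above points to local-context prediction as the training signal. A minimal, assumed sketch of that objective applied to entity representations (the encoder, feature sizes, and data here are stand-ins, not the paper's setup): encode a context whose entity mention is blanked out, then train the model to recover the correct entity from a shared embedding table.

```python
# Rough illustration of a fill-in-the-blank objective for learning
# cross-context entity representations. All components are stand-ins.
import torch
import torch.nn as nn

num_entities, hidden = 1000, 64
context_encoder = nn.Sequential(nn.Linear(300, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
entity_table = nn.Embedding(num_entities, hidden)
optimizer = torch.optim.Adam(list(context_encoder.parameters()) + list(entity_table.parameters()))
loss_fn = nn.CrossEntropyLoss()

# One training step: blanked-context features and their gold entity ids,
# e.g. pooled features of "[BLANK] was founded in 1976".
context_features = torch.randn(32, 300)
gold_entities = torch.randint(0, num_entities, (32,))

optimizer.zero_grad()
context_repr = context_encoder(context_features)  # (32, hidden)
logits = context_repr @ entity_table.weight.T     # score every entity per context
loss = loss_fn(logits, gold_entities)
loss.backward()
optimizer.step()
```

Because every context the entity appears in pushes on the same table row, the learned row ends up summarizing the entity across contexts, which is what makes it useful for downstream linking and typing.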

Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories

no code implementations • ICLR Workshop LLD 2019 • Jeffrey Ling, Nicholas FitzGerald, Livio Baldini Soares, David Weiss, Tom Kwiatkowski

Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases.

Entity Typing, Language Modelling +1
