Search Results for author: Nora Kassner

Found 11 papers, 6 papers with code

EDIN: An End-to-end Benchmark and Pipeline for Unknown Entity Discovery and Indexing

no code implementations • 25 May 2022 • Nora Kassner, Fabio Petroni, Mikhail Plekhanov, Sebastian Riedel, Nicola Cancedda

This paper introduces the Unknown Entity Discovery and Indexing (EDIN) benchmark, in which unknown entities, that is, entities with neither a description in the knowledge base nor labeled mentions, have to be integrated into an existing entity linking system.
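The discovery step this benchmark targets can be pictured as a thresholded retrieval check. Below is a purely illustrative sketch; the entity ids, vectors, and threshold are made up, and this is not the paper's actual pipeline.

```python
# Hypothetical sketch of the discovery step in an EDIN-style pipeline:
# a mention whose best match against the entity index falls below a
# threshold is treated as unknown and set aside for clustering/indexing.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy entity index: entity id -> embedding (stand-ins for a trained bi-encoder).
kb_index = {
    "Q90_Paris":  np.array([0.9, 0.1, 0.0]),
    "Q64_Berlin": np.array([0.1, 0.9, 0.0]),
}

def discover(mention_vec, threshold=0.8):
    """Link the mention if some KB entity scores above threshold, else flag it."""
    best_id, best_score = max(
        ((eid, cosine(mention_vec, vec)) for eid, vec in kb_index.items()),
        key=lambda item: item[1],
    )
    if best_score >= threshold:
        return ("linked", best_id, best_score)
    return ("unknown", None, best_score)  # candidate for clustering + indexing

print(discover(np.array([0.85, 0.15, 0.05])))  # close to Paris -> linked
print(discover(np.array([0.0, 0.1, 0.95])))    # no good match -> unknown
```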

Language Models As or For Knowledge Bases

no code implementations • 10 Oct 2021 • Simon Razniewski, Andrew Yates, Nora Kassner, Gerhard Weikum

Pre-trained language models (LMs) have recently gained attention for their potential as an alternative to (or proxy for) explicit knowledge bases (KBs).
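A minimal sketch of the cloze-style probing that underlies the "LM as KB" view, assuming the HuggingFace transformers library is installed; the prompt and model choice are illustrative, not prescribed by the paper.

```python
# Query a masked LM with a relational fact phrased as a cloze statement,
# the querying mode that motivates treating LMs as (proxies for) KBs.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# "LAMA-style" query: the top predictions act as the KB lookup result.
for pred in fill_mask("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  p={pred['score']:.3f}")
```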

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

no code implementations • EMNLP 2021 • Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark

We show that, in a controlled experimental setting, these two mechanisms, a constraint solver over the stored beliefs and a feedback loop that re-queries the model with relevant stored beliefs as context, result in more consistent beliefs in the overall system, improving both the accuracy and consistency of its answers over time.

Language Modelling • Pretrained Language Models
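A toy sketch of the memory-plus-constraints idea: record answers as beliefs and revise the weaker of two clashing beliefs. The constraint format and the resolution rule are simplified stand-ins for the paper's weighted constraint solver, not its actual implementation.

```python
# BeliefBank-style memory layer (toy version): beliefs are stored with a
# confidence, and implication constraints flag and resolve contradictions.
beliefs = {}  # statement -> (truth_value: bool, confidence: float)

# Toy implication constraints: premise -> consequence.
constraints = [("a swallow is a bird", "a swallow can fly")]

def add_belief(statement, value, conf):
    beliefs[statement] = (value, conf)
    enforce_consistency()

def enforce_consistency():
    """Flip the lower-confidence belief when an implication is violated."""
    for premise, consequence in constraints:
        p, c = beliefs.get(premise), beliefs.get(consequence)
        if p and c and p[0] and not c[0]:          # premise true, consequence false
            if p[1] >= c[1]:
                beliefs[consequence] = (True, p[1])   # trust the premise
            else:
                beliefs[premise] = (False, c[1])      # trust the consequence

add_belief("a swallow is a bird", True, 0.9)
add_belief("a swallow can fly", False, 0.4)  # contradicts the constraint
print(beliefs)  # the weaker belief has been revised for consistency
```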

Static Embeddings as Efficient Knowledge Bases?

1 code implementation • NAACL 2021 • Philipp Dufter, Nora Kassner, Hinrich Schütze

Recent research investigates factual knowledge stored in large pretrained language models (PLMs).

Pretrained Language Models
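As a hedged illustration of answering factual queries from static embeddings alone, the sketch below embeds a query as a bag of word vectors and returns the nearest candidate word. The vectors are toy values, and this matching scheme is an assumption for illustration, not necessarily the paper's.

```python
# Answer a cloze-style query with static embeddings only: average the
# context word vectors and pick the candidate with highest cosine similarity.
import numpy as np

emb = {  # toy static embeddings
    "paris":   np.array([0.9, 0.1]),
    "france":  np.array([0.8, 0.2]),
    "berlin":  np.array([0.1, 0.9]),
    "capital": np.array([0.5, 0.5]),
}

def answer(context_words, candidates):
    query = np.mean([emb[w] for w in context_words], axis=0)
    sims = {c: float(query @ emb[c] /
                     (np.linalg.norm(query) * np.linalg.norm(emb[c])))
            for c in candidates}
    return max(sims, key=sims.get)

# "The capital of France is [MASK]." with two candidate answers:
print(answer(["capital", "france"], ["paris", "berlin"]))  # -> paris
```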

Measuring and Improving Consistency in Pretrained Language Models

1 code implementation • 1 Feb 2021 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg

In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?

Pretrained Language Models
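One way to operationalize the question, sketched below assuming HuggingFace transformers is available: query the same model with paraphrased prompts and measure how often predictions agree. The paraphrases here are illustrative and not drawn from the paper's released ParaRel resource.

```python
# Paraphrase-consistency check: a model that "knows" a fact should give
# the same answer regardless of how the relation is phrased.
from itertools import combinations
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

paraphrases = [
    "Dante was born in [MASK].",
    "The birthplace of Dante is [MASK].",
    "Dante is a native of [MASK].",
]

# Top-1 prediction for each paraphrase, then pairwise agreement rate.
preds = [fill_mask(p, top_k=1)[0]["token_str"] for p in paraphrases]
pairs = list(combinations(preds, 2))
consistency = sum(a == b for a, b in pairs) / len(pairs)
print(preds, f"consistency={consistency:.2f}")
```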

Dirichlet-Smoothed Word Embeddings for Low-Resource Settings

no code implementations • LREC 2020 • Jakob Jungmaier, Nora Kassner, Benjamin Roth

We evaluate on standard word similarity data sets and compare to word2vec and the recent state of the art for low-resource settings: Positive and Unlabeled (PU) Learning for word embeddings.

Word Embeddings • Word Similarity
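The paper's central idea, smoothing co-occurrence counts with a Dirichlet prior before computing PMI, can be sketched as add-k smoothing on a toy count matrix; the counts and alpha below are illustrative.

```python
# PPMI with Dirichlet (add-k) smoothing: adding a small constant to the
# co-occurrence counts damps PMI's overestimation of rare co-occurrences,
# the failure mode that hurts embeddings in low-resource settings.
import numpy as np

counts = np.array([[10., 2., 0.],
                   [ 2., 8., 1.],
                   [ 0., 1., 1.]])  # toy word-context co-occurrence counts

def ppmi(C, alpha=0.0):
    C = C + alpha                             # Dirichlet (add-k) smoothing
    total = C.sum()
    p_wc = C / total
    p_w = C.sum(axis=1, keepdims=True) / total
    p_c = C.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore"):        # log(0) -> -inf, clipped below
        pmi = np.log(p_wc / (p_w * p_c))
    return np.maximum(pmi, 0.0)               # positive PMI

print(np.round(ppmi(counts), 2))             # unsmoothed: rare pairs spike
print(np.round(ppmi(counts, alpha=0.5), 2))  # smoothed: rare pairs damped
```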
