Search Results for author: Katrin Erk

Found 40 papers, 14 papers with code

Adjusting Interpretable Dimensions in Embedding Space with Human Judgments

no code implementations • 3 Apr 2024 • Katrin Erk, Marianna Apidianaki

We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity.

Tasks: Object
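
As a rough illustration of the seed-based dimension idea above, the sketch below builds a single "size" axis from hand-picked seed pairs and projects words onto it. This is a minimal sketch under stated assumptions, not the paper's method: the GloVe vectors, the seed pairs, and the mean-difference axis are all illustrative choices.

```python
# Minimal sketch, NOT the paper's method: derive a "size" dimension from
# seed word pairs and project words onto it. Embeddings, seeds, and the
# simple mean-difference axis are illustrative assumptions.
import numpy as np
import gensim.downloader as api

vecs = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

# seed pairs spanning the dimension: (large end, small end)
seeds = [("huge", "tiny"), ("giant", "miniature"), ("enormous", "little")]

# the dimension is the normalized mean difference of the seed pairs
axis = np.mean([vecs[a] - vecs[b] for a, b in seeds], axis=0)
axis /= np.linalg.norm(axis)

for word in ["whale", "ant", "mountain", "pebble"]:
    # projection onto the axis = position along the "size" dimension
    print(f"{word:10s} {vecs[word] @ axis:+.3f}")
```

Words projecting to larger values should be the physically bigger ones; per the abstract, the paper's contribution is adjusting such dimensions with human ratings rather than relying on seeds alone.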

X-PARADE: Cross-Lingual Textual Entailment and Information Divergence across Paragraphs

1 code implementation • 16 Sep 2023 • Juan Diego Rodriguez, Katrin Erk, Greg Durrett

Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild.

Tasks: Fact Checking, Machine Translation, +1

Did they answer? Subjective acts and intents in conversational discourse

1 code implementation • NAACL 2021 • Elisa Ferracane, Greg Durrett, Junyi Jessy Li, Katrin Erk

Discourse signals are often implicit, leaving it up to the interpreter to draw the required inferences.


Leveraging WordNet Paths for Neural Hypernym Prediction

1 code implementation • COLING 2020 • Yejin Cho, Juan Diego Rodriguez, Yifan Gao, Katrin Erk

We formulate the problem of hypernym prediction as a sequence generation task, where the sequences are taxonomy paths in WordNet.
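
Since the paper casts hypernym prediction as generating taxonomy paths, the paths themselves are easy to inspect with NLTK; the sketch below only does the extraction, and the arrow serialization is an illustrative assumption rather than the paper's exact sequence format.

```python
# Minimal sketch: enumerate the WordNet taxonomy paths that the paper
# linearizes into training sequences. The " -> " serialization is an
# illustrative assumption, not the paper's exact format.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")
for path in dog.hypernym_paths():
    # each path runs from the root ("entity") down to the target synset
    print(" -> ".join(s.name() for s in path))
```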

How to marry a star: probabilistic constraints for meaning in context

1 code implementation • SCiL 2021 • Katrin Erk, Aurelie Herbelot

In this paper, we derive a notion of 'word meaning in context' that characterizes meaning as both intensional and conceptual.

Tasks: Sentence

Narrative Interpolation for Generating and Understanding Stories

no code implementations • 17 Aug 2020 • Su Wang, Greg Durrett, Katrin Erk

We propose a method for controlled narrative/story generation that guides the model to produce coherent narratives with user-specified target endings by interpolation: for example, given that Jim went hiking and that at the end Jim needed to be rescued, we want the model to incrementally generate the steps along the way.

Tasks: Sentence, Story Generation
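
A toy version of this interpolation idea can be mocked up with an off-the-shelf language model: sample candidate next steps from the start sentence, then keep the candidate under which the target ending is most likely. The sketch below is a minimal stand-in, not the paper's model; GPT-2, the sampling settings, and the likelihood-based reranking are all assumptions.

```python
# Minimal sketch, NOT the paper's model: sample candidate middle steps
# with GPT-2, then rerank them by how likely they make the target ending.
# Assumes GPT-2 tokenization of the context stays a prefix of the full
# string's tokenization (usually true at word boundaries).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` given `context`."""
    ctx = tok(context, return_tensors="pt").input_ids
    full = tok(context + " " + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)  # position t predicts t+1
    targets = full[0, 1:]
    first = ctx.shape[1] - 1  # index of the first continuation target
    return logp[first:].gather(1, targets[first:, None]).sum().item()

def interpolate(start: str, ending: str, n: int = 5) -> str:
    """Sample candidate middle steps, keep the one that best leads to `ending`."""
    inp = tok(start, return_tensors="pt")
    outs = lm.generate(**inp, do_sample=True, top_p=0.9, max_new_tokens=25,
                       num_return_sequences=n, pad_token_id=tok.eos_token_id)
    mids = [tok.decode(o[inp.input_ids.shape[1]:], skip_special_tokens=True)
            for o in outs]
    return max(mids, key=lambda m: continuation_logprob(start + m, ending))

print(interpolate("Jim went hiking in the mountains.",
                  "In the end, Jim needed to be rescued."))
```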

Attending to Entities for Better Text Understanding

no code implementations • 11 Nov 2019 • Pengxiang Cheng, Katrin Erk

Recent work in NLP has witnessed the development of large-scale pre-trained language models (GPT, BERT, XLNet, etc.)

Tasks: LAMBADA

Query-Focused Scenario Construction

no code implementations • IJCNLP 2019 • Su Wang, Greg Durrett, Katrin Erk

The news coverage of events often contains not one but multiple incompatible accounts of what happened.

Tasks: Clustering

Implicit Argument Prediction as Reading Comprehension

1 code implementation • 8 Nov 2018 • Pengxiang Cheng, Katrin Erk

Implicit arguments, which cannot be detected solely through syntactic cues, make it harder to extract predicate-argument tuples.

Tasks: Reading Comprehension

Picking Apart Story Salads

no code implementations • EMNLP 2018 • Su Wang, Eric Holgate, Greg Durrett, Katrin Erk

During natural disasters and conflicts, information about what happened is often confusing, messy, and distributed across many sources.

Tasks: Clustering

Deep Neural Models of Semantic Shift

no code implementations • NAACL 2018 • Alex Rosenfeld, Katrin Erk

This evaluation quantitatively measures how well a model captures the semantic trajectory of a word over time.

Tasks: Time Series, Time Series Analysis
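
The trajectory evaluation mentioned in this snippet can be approximated by comparing a word's vector across time-sliced embedding spaces. The sketch below uses random stand-in vectors purely to show the shape of the computation; real use assumes aligned diachronic embeddings trained per time slice, which is an assumption here, not the paper's deep model.

```python
# Minimal sketch of measuring a word's semantic trajectory over time as
# drift between time-sliced embeddings. The random vectors are stand-ins
# for real aligned diachronic embeddings (an assumption for illustration).
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
decades = ["1900s", "1950s", "2000s"]
vectors = {d: rng.normal(size=100) for d in decades}  # one vector per slice

base = vectors[decades[0]]
for d in decades[1:]:
    # lower similarity to the earliest slice = more semantic shift
    print(f"{decades[0]} -> {d}: cosine = {cosine(base, vectors[d]):.3f}")
```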

Modeling Semantic Plausibility by Injecting World Knowledge

1 code implementation • NAACL 2018 • Su Wang, Greg Durrett, Katrin Erk

Distributional data tells us that a man can swallow candy, but not that a man can swallow a paintball, since this is never attested.

Tasks: World Knowledge

Implicit Argument Prediction with Event Knowledge

1 code implementation • NAACL 2018 • Pengxiang Cheng, Katrin Erk

Implicit arguments are not syntactically connected to their predicates, and are therefore hard to extract.

Distributional Modeling on a Diet: One-shot Word Learning from Text Only

no code implementations • IJCNLP 2017 • Su Wang, Stephen Roller, Katrin Erk

We test whether distributional models can do one-shot learning of definitional properties from text only.

Tasks: One-Shot Learning

Representing Meaning with a Combination of Logical and Distributional Models

1 code implementation • CL 2016 • I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, Raymond J. Mooney

In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) Parsing and task representation is the logic-based part where input problems are represented in probabilistic logic.

Tasks: Lexical Entailment, Natural Language Inference, +2
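
To make the probabilistic-logic component in this snippet concrete: the premise and hypothesis receive first-order representations, and distributional similarity contributes soft, weighted inference rules between predicates. The sketch below only parses such formulas with NLTK and attaches a made-up weight; it is an illustration of the representation, not the paper's actual inference machinery.

```python
# Minimal sketch: represent a premise/hypothesis pair in first-order
# logic with NLTK's logic parser, plus one weighted "soft" rule of the
# kind distributional similarity would supply. The formulas and the 0.8
# weight are illustrative assumptions, not the paper's system.
from nltk.sem.logic import Expression

read = Expression.fromstring
premise = read(r"exists x.(man(x) & swallow(x, candy))")
hypothesis = read(r"exists x.(man(x) & eat(x, candy))")

# a distributional rule: swallowing implies eating, with a soft weight
weighted_rules = [(read(r"all x.all y.(swallow(x, y) -> eat(x, y))"), 0.8)]

print("premise:   ", premise)
print("hypothesis:", hypothesis)
print("soft rule: ", weighted_rules[0][0], "weight =", weighted_rules[0][1])
```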
