no code implementations • IWCS (ACL) 2021 • Eric Holgate, Katrin Erk
What is the best way to learn embeddings for entities, and what can be learned from them?
no code implementations • NAACL (DADC) 2022 • Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability.
1 code implementation • 11 Aug 2024 • Sai Vallurupalli, Katrin Erk, Francis Ferraro
It is important to be able to acquire the knowledge needed for this understanding, though doing so is challenging.
no code implementations • 3 Apr 2024 • Katrin Erk, Marianna Apidianaki
We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity.
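The seed-based vector idea above can be illustrated with a minimal sketch: a direction in embedding space is built from the difference between the centroids of two pole-word seed sets, and any word's position along the dimension is its projection onto that direction. All vectors and seed words below are hypothetical toy values for illustration, not the paper's data or exact method.

```python
import numpy as np

# Toy 4-dimensional embeddings (hypothetical values for illustration).
emb = {
    "ant":      np.array([0.1, 0.9, 0.2, 0.1]),
    "mouse":    np.array([0.3, 0.7, 0.3, 0.2]),
    "elephant": np.array([0.9, 0.1, 0.8, 0.7]),
    "whale":    np.array([1.0, 0.0, 0.9, 0.8]),
}

# Seed-based direction for the "size" dimension: mean of large-pole
# seeds minus mean of small-pole seeds, normalized to unit length.
large_seeds = ["elephant", "whale"]
small_seeds = ["ant", "mouse"]
direction = (np.mean([emb[w] for w in large_seeds], axis=0)
             - np.mean([emb[w] for w in small_seeds], axis=0))
direction /= np.linalg.norm(direction)

def size_score(word):
    """Project a word's vector onto the dimension to get a scalar score,
    which can then be compared against human ratings."""
    return float(emb[word] @ direction)
```

Human ratings would enter as supervision for adjusting or evaluating the direction; here the projection alone already orders the toy words by size.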
1 code implementation • 16 Sep 2023 • Juan Diego Rodriguez, Katrin Erk, Greg Durrett
Aligned paragraphs are sourced from Wikipedia pages in different languages, reflecting real information divergences observed in the wild.
1 code implementation • 29 May 2023 • Gabriella Chronis, Kyle Mahowald, Katrin Erk
We study semantic construal in grammatical constructions using large language models.
1 code implementation • 5 Dec 2022 • Sai Vallurupalli, Sayontan Ghosh, Katrin Erk, Niranjan Balasubramanian, Francis Ferraro
Knowledge about outcomes is critical for complex event understanding but is hard to acquire.
1 code implementation • NAACL 2021 • Elisa Ferracane, Greg Durrett, Junyi Jessy Li, Katrin Erk
Discourse signals are often implicit, leaving it up to the interpreter to draw the required inferences.
1 code implementation • COLING 2020 • Yejin Cho, Juan Diego Rodriguez, Yifan Gao, Katrin Erk
We formulate the problem of hypernym prediction as a sequence generation task, where the sequences are taxonomy paths in WordNet.
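The target format for such sequence generation can be sketched as follows: a word's hypernym chain is linearized into a token sequence that a seq2seq model learns to emit. The toy graph below stands in for WordNet (its entries are hypothetical); a real system would read hypernym paths from WordNet itself.

```python
# Toy hypernym graph standing in for WordNet (hypothetical entries).
hypernym_of = {
    "dog": "canine",
    "canine": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def taxonomy_path(word):
    """Linearize the chain from a word up to the root as a token
    sequence -- the target a sequence-generation model is trained on."""
    path = [word]
    while path[-1] in hypernym_of:
        path.append(hypernym_of[path[-1]])
    return path

# A seq2seq model would be trained to emit this sequence given "dog".
print(" -> ".join(taxonomy_path("dog")))
```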
1 code implementation • CoNLL 2020 • Gabriella Chronis, Katrin Erk
This paper investigates contextual language models, which produce token representations, as a resource for lexical semantics at the word or type level.
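One common way to get type-level representations from token representations is to pool a word's contextual vectors across many sentences, either into a single centroid or into several sense-like prototypes. The sketch below uses hypothetical 3-dimensional token vectors for "bank" and a trivial two-seed assignment step; a real pipeline would take the vectors from a contextual model's hidden layers and use proper clustering.

```python
import numpy as np

# Hypothetical contextual token vectors for "bank" in four sentences.
token_vecs = np.array([
    [0.9, 0.1, 0.0],   # "river bank"
    [0.8, 0.2, 0.1],   # "muddy bank"
    [0.1, 0.9, 0.8],   # "bank account"
    [0.0, 0.8, 0.9],   # "bank loan"
])

# Single type-level representation: the centroid of all occurrences.
type_vec = token_vecs.mean(axis=0)

# Multi-prototype alternative: assign each token to the nearer of two
# seed tokens (a stand-in for clustering) and keep one centroid each.
seeds = token_vecs[[0, 2]]
labels = np.argmax(token_vecs @ seeds.T, axis=1)
prototypes = np.array([token_vecs[labels == k].mean(axis=0) for k in (0, 1)])
```

The two prototypes separate the riverbank-like and finance-like occurrences, which is the kind of sense structure a type-level resource can exploit.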
1 code implementation • EMNLP 2020 • Venkata Subrahmanyan Govindarajan, Benjamin T Chen, Rebecca Warholic, Katrin Erk, Junyi Jessy Li
Humans use language to accomplish a wide variety of tasks, asking for and giving advice being one of them.
1 code implementation • SCiL 2021 • Katrin Erk, Aurelie Herbelot
In this paper, we derive a notion of 'word meaning in context' that characterizes meaning as both intensional and conceptual.
no code implementations • 17 Aug 2020 • Su Wang, Greg Durrett, Katrin Erk
We propose a method for controlled narrative/story generation that guides the model to produce coherent narratives with user-specified target endings by interpolation: for example, given that Jim went hiking and that at the end Jim needed to be rescued, the model should incrementally generate the steps along the way.
no code implementations • 11 Nov 2019 • Pengxiang Cheng, Katrin Erk
Recent progress in NLP has seen the development of large-scale pre-trained language models (GPT, BERT, XLNet, etc.).
no code implementations • IJCNLP 2019 • Su Wang, Greg Durrett, Katrin Erk
The news coverage of events often contains not one but multiple incompatible accounts of what happened.
1 code implementation • ACL 2019 • Elisa Ferracane, Greg Durrett, Junyi Jessy Li, Katrin Erk
Discourse structure is integral to understanding a text and is helpful in many NLP tasks.
1 code implementation • WS 2019 • Elisa Ferracane, Titan Page, Junyi Jessy Li, Katrin Erk
The first step in discourse analysis involves dividing a text into segments.
1 code implementation • 8 Nov 2018 • Pengxiang Cheng, Katrin Erk
Implicit arguments, which cannot be detected solely through syntactic cues, make it harder to extract predicate-argument tuples.
no code implementations • EMNLP 2018 • Su Wang, Eric Holgate, Greg Durrett, Katrin Erk
During natural disasters and conflicts, information about what happened is often confusing, messy, and distributed across many sources.
no code implementations • NAACL 2018 • Alex Rosenfeld, Katrin Erk
This evaluation quantitatively measures how well a model captures the semantic trajectory of a word over time.
1 code implementation • NAACL 2018 • Su Wang, Greg Durrett, Katrin Erk
Distributional data tells us that a man can swallow candy, but not that a man can swallow a paintball, since this is never attested.
1 code implementation • NAACL 2018 • Pengxiang Cheng, Katrin Erk
Implicit arguments are not syntactically connected to their predicates, and are therefore hard to extract.
no code implementations • IJCNLP 2017 • Su Wang, Stephen Roller, Katrin Erk
We test whether distributional models can do one-shot learning of definitional properties from text only.
no code implementations • EMNLP 2016 • Stephen Roller, Katrin Erk
We consider the task of predicting lexical entailment using distributional vectors.
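For context, a classic unsupervised baseline for lexical entailment over distributional vectors is distributional inclusion (Weeds precision): an entailing word's contexts should largely be covered by the entailed word's contexts. This is not the paper's supervised method, just a standard point of comparison; the count vectors below are hypothetical toy values.

```python
import numpy as np

# Toy context-count vectors (hypothetical counts); the columns are
# shared context words: bark, pet, animal, fly, wings.
vec = {
    "dog":    np.array([5.0, 4.0, 3.0, 0.0, 0.0]),
    "animal": np.array([2.0, 3.0, 6.0, 2.0, 1.0]),
}

def weeds_precision(u, v):
    """Distributional-inclusion score: the share of u's context mass
    that is covered by v's contexts. Asymmetric, unlike cosine."""
    return float(np.minimum(u, v).sum() / u.sum())
```

The asymmetry is the point: "dog" entails "animal" more strongly than the reverse, which a symmetric similarity measure cannot express.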
1 code implementation • CL 2016 • I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, Raymond J. Mooney
In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) parsing and task representation, the logic-based part, where input problems are represented in probabilistic logic.