no code implementations • EMNLP 2021 • Andrea Schioppa, David Vilar, Artem Sokolov, Katja Filippova
Fine-grained control of machine translation (MT) outputs along multiple attributes is critical for many modern MT applications and is a requirement for gaining users’ trust.
1 code implementation • 28 Apr 2023 • Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson
Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute.
no code implementations • 27 Feb 2023 • Irina Bejan, Artem Sokolov, Katja Filippova
Increasingly larger datasets have become a standard ingredient to advancing the state-of-the-art in NLP.
no code implementations • 10 Nov 2022 • Sebastian Ebert, Alice Shoshana Jakobovits, Katja Filippova
Realizing when a model is right for a wrong reason is not trivial and requires significant effort from model developers.
no code implementations • 27 Jan 2022 • Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
We posit that folk concepts of behavior provide us with a "language" with which humans understand behavior.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Katja Filippova
Neural text generation (data- or text-to-text) demonstrates remarkable performance when training data is abundant, which is not the case for many applications.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Jasmijn Bastings, Katja Filippova
There is a recent surge of interest in using attention as explanation of model predictions, with mixed evidence on whether attention can be used as such.
1 code implementation • EACL 2021 • Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, Katja Filippova
We argue that random splits, like standard splits, lead to overly optimistic performance estimates.
no code implementations • CoNLL 2018 • Katharina Kann, Sascha Rothe, Katja Filippova
Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level.
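The SLOR metric named above normalizes a language model's sentence log-probability by the unigram log-probability and the sentence length, i.e. SLOR(S) = (log p_LM(S) − log p_uni(S)) / |S|. A minimal sketch of that computation, using toy (assumed) probability values rather than a real language model:

```python
import math

def slor(sentence_logprob, unigram_logprob, length):
    """Syntactic log-odds ratio: the LM log-probability of a sentence,
    corrected by its unigram log-probability and normalized by length.
    Inputs are natural-log probabilities."""
    return (sentence_logprob - unigram_logprob) / length

# Toy 4-token sentence with hypothetical probabilities:
lm_lp = math.log(1e-6)    # log p_LM(S), assumed value
uni_lp = math.log(1e-9)   # log p_uni(S) = sum of unigram log-probs, assumed
score = slor(lm_lp, uni_lp, 4)
print(round(score, 3))
```

Because the unigram term is subtracted, a sentence scores well only if the LM assigns it more probability than its word frequencies alone would predict, which is what makes the score usable as a referenceless fluency signal.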
no code implementations • 21 Apr 2018 • Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, Katja Filippova
In this paper, we study recent neural generative models for text generation that are related to variational autoencoders.
no code implementations • 28 Oct 2015 • Katja Filippova, Enrique Alfonseca
A popular approach to sentence compression is to formulate the task as a constrained optimization problem and solve it with integer linear programming (ILP) tools.
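In this formulation, each word gets a 0/1 retention variable, and the solver maximizes a scoring objective subject to constraints such as a length budget. A toy sketch of that objective, brute-forcing the search space instead of calling an ILP solver (real systems add grammaticality constraints; the word scores here are hypothetical):

```python
from itertools import product

def compress(words, scores, max_len):
    """Toy sentence compression as constrained optimization: pick a 0/1
    keep-variable per word to maximize the summed word scores, subject
    to keeping at most max_len words. Brute force over all assignments
    stands in for an ILP solver on this tiny example."""
    best_keep, best_val = None, float("-inf")
    for keep in product([0, 1], repeat=len(words)):
        if sum(keep) > max_len:
            continue  # violates the length constraint
        val = sum(s * k for s, k in zip(scores, keep))
        if val > best_val:
            best_val, best_keep = val, keep
    return [w for w, k in zip(words, best_keep) if k]

print(compress(["the", "very", "big", "dog", "barked"],
               [1.0, 0.2, 0.5, 2.0, 2.0], 3))
# -> ['the', 'dog', 'barked']
```

An actual ILP formulation would hand the same objective and constraints to a solver, which scales to realistic sentence lengths where enumeration does not.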