Search Results for author: Katja Filippova

Found 19 papers, 3 papers with code

Controlling Machine Translation for Multiple Attributes with Additive Interventions

no code implementations EMNLP 2021 Andrea Schioppa, David Vilar, Artem Sokolov, Katja Filippova

Fine-grained control of machine translation (MT) outputs along multiple attributes is critical for many modern MT applications and is a requirement for gaining users’ trust.

Tasks: Attribute, Machine Translation, +2 more

Dissecting Recall of Factual Associations in Auto-Regressive Language Models

1 code implementation 28 Apr 2023 Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson

Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute.

Tasks: Attribute, Attribute Extraction, +1 more

Understanding Text Classification Data and Models Using Aggregated Input Salience

no code implementations 10 Nov 2022 Sebastian Ebert, Alice Shoshana Jakobovits, Katja Filippova

Realizing when a model is right for a wrong reason is not trivial and requires a significant effort by model developers.

Tasks: Text Classification

Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data

no code implementations Findings of the Association for Computational Linguistics 2020 Katja Filippova

Neural text generation (data- or text-to-text) demonstrates remarkable performance when training data is abundant, which for many applications is not the case.

Tasks: Text Generation

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?

1 code implementation EMNLP (BlackboxNLP) 2020 Jasmijn Bastings, Katja Filippova

There is a recent surge of interest in using attention as explanation of model predictions, with mixed evidence on whether attention can be used as such.
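The "saliency methods" named in the title include simple input-attribution techniques such as gradient × input. Below is a minimal, generic sketch of that idea on a toy embedding classifier; PyTorch is assumed, the model, token ids, and shapes are made up, and this is not the paper's code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, seq_len, num_classes = 100, 16, 5, 2

# Hypothetical toy classifier: embed tokens, flatten, and classify.
embedding = nn.Embedding(vocab_size, emb_dim)
classifier = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * emb_dim, num_classes))

token_ids = torch.tensor([[3, 17, 42, 8, 99]])   # made-up token ids, shape (1, seq_len)
embedded = embedding(token_ids)                  # (1, seq_len, emb_dim)
embedded.retain_grad()                           # keep the gradient w.r.t. the embeddings

logits = classifier(embedded)                    # (1, num_classes)
logits[0, logits[0].argmax()].backward()         # gradient of the predicted-class score

# Gradient × input, summed over the embedding dimension -> one saliency score per token.
saliency = (embedded.grad * embedded).sum(dim=-1).squeeze(0).detach()
print(saliency.tolist())
```

Unlike attention weights, which are a by-product of the forward pass, scores like these are tied directly to the gradient of the prediction with respect to the input, which is the kind of distinction the paper discusses.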

We Need to Talk About Random Splits

1 code implementation EACL 2021 Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, Katja Filippova

We argue that random splits, like standard splits, lead to overly optimistic performance estimates.

Tasks: Domain Adaptation

Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!

no code implementations CoNLL 2018 Katharina Kann, Sascha Rothe, Katja Filippova

Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level.

Tasks: Language Modelling, Sentence, +1 more
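For context on the metric named above: SLOR normalizes a sentence's language-model log-probability by its unigram log-probability and its length. A minimal sketch of how it could be computed, assuming hypothetical log-probability callables (not the paper's code):

```python
def slor(tokens, lm_logprob, unigram_logprob):
    """Syntactic log-odds ratio (SLOR) for a tokenized sentence.

    lm_logprob(tokens)     -> log p_LM(sentence) under a language model (hypothetical callable)
    unigram_logprob(token) -> log p(token) under a unigram model (hypothetical callable)
    """
    lp_lm = lm_logprob(tokens)                        # log p_LM(S)
    lp_uni = sum(unigram_logprob(t) for t in tokens)  # log p_unigram(S)
    return (lp_lm - lp_uni) / len(tokens)             # length-normalized log-odds
```

The normalization by unigram probability and length is what makes the score comparable across sentences of different lengths and word frequencies, which is the property that lets it serve as a referenceless fluency metric.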

Eval all, trust a few, do wrong to none: Comparing sentence generation models

no code implementations 21 Apr 2018 Ondřej Cífka, Aliaksei Severyn, Enrique Alfonseca, Katja Filippova

In this paper, we study recent neural generative models for text generation related to variational autoencoders.

Tasks: Sentence, Text Generation

Fast k-best Sentence Compression

no code implementations 28 Oct 2015 Katja Filippova, Enrique Alfonseca

A popular approach to sentence compression is to formulate the task as a constrained optimization problem and solve it with integer linear programming (ILP) tools.

Tasks: Sentence, Sentence Compression
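To illustrate the ILP framing mentioned in the abstract above, here is a deliberately simplified sketch: binary keep/drop variables per token, a token-importance objective, and a length budget. The paper's actual formulation is richer (it operates over syntactic structure with grammaticality constraints); the scores and budget below are made up, and the `pulp` package is assumed.

```python
import pulp

tokens = ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
scores = [0.9, 0.2, 0.1, 0.8, 0.9, 0.5, 0.6, 0.1, 0.8]  # hypothetical importance scores
budget = 5                                               # maximum number of tokens to keep

prob = pulp.LpProblem("sentence_compression", pulp.LpMaximize)
keep = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(tokens))]

prob += pulp.lpSum(scores[i] * keep[i] for i in range(len(tokens)))  # objective: total kept score
prob += pulp.lpSum(keep) <= budget                                   # constraint: length budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
compressed = [t for t, x in zip(tokens, keep) if x.value() > 0.5]
print(" ".join(compressed))
```

A k-best variant, as the paper's title suggests, would enumerate multiple high-scoring solutions rather than just the single optimum; this sketch only shows the basic constrained-optimization framing.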
