Search Results for author: Javier Ferrando

Found 13 papers, 7 papers with code

LM Transparency Tool: Interactive Tool for Analyzing Transformer Language Models

1 code implementation • 10 Apr 2024 • Igor Tufanov, Karen Hambardzumyan, Javier Ferrando, Elena Voita

We present the LM Transparency Tool (LM-TT), an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models.

Tasks: Decision Making

Information Flow Routes: Automatically Interpreting Language Models at Scale

1 code implementation • 27 Feb 2024 • Javier Ferrando, Elena Voita

These routes can be represented as graphs where nodes correspond to token representations and edges to operations inside the network.
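
A minimal sketch of that graph representation, assuming a toy 2-layer, 3-token setting: nodes are (layer, position) token representations, parallel edges are attention and FFN operations, and the edge weights are hypothetical importances rather than the paper's values:

```python
import networkx as nx

# Toy information-flow graph: nodes are token representations at
# (layer, position); parallel edges are operations between them.
n_layers, n_tokens = 2, 3
G = nx.MultiDiGraph()  # MultiDiGraph keeps attention and FFN edges distinct

for layer in range(n_layers):
    for src in range(n_tokens):
        for dst in range(src, n_tokens):  # causal mask: dst attends only to src <= dst
            G.add_edge((layer, src), (layer + 1, dst),
                       op="attention", weight=1.0 / (dst - src + 1))
    for pos in range(n_tokens):
        # The FFN operates position-wise, so its edge stays at the same position.
        G.add_edge((layer, pos), (layer + 1, pos), op="ffn", weight=0.5)

# A "route" into the last token's final representation is any path ending at
# (n_layers, n_tokens - 1); pruning low-weight edges keeps only important routes.
for u, v, d in G.edges(data=True):
    if d["weight"] > 0.4:
        print(u, "->", v, d["op"], round(d["weight"], 2))
```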

Neurons in Large Language Models: Dead, N-gram, Positional

no code implementations • 9 Sep 2023 • Elena Voita, Javier Ferrando, Christoforos Nalmpantis

Specifically, we focus on the OPT family of models ranging from 125M to 66B parameters and rely only on whether an FFN neuron is activated or not.

Tasks: Position
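
A minimal sketch of the activation check described above, assuming the Hugging Face OPT implementation (where the first FFN projection is `fc1` and the activation is ReLU): a neuron is "activated" on a token when its pre-ReLU value is positive, and "dead" if it never activates on the probing data. The probing sentences here are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").eval()
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")

fc1 = model.model.decoder.layers[0].fc1   # first FFN projection of layer 0
counts = torch.zeros(fc1.out_features)    # per-neuron activation counts

def hook(module, inputs, output):
    # Output of fc1 is the pre-ReLU value; > 0 means the neuron fires.
    fired = (output > 0).reshape(-1, output.shape[-1])
    counts.add_(fired.sum(0).float())

fc1.register_forward_hook(hook)
with torch.no_grad():
    batch = tok(["Interpretability of language models.",
                 "Neurons can be dead, n-gram, or positional."],
                return_tensors="pt", padding=True)
    model(**batch)

dead = (counts == 0).sum().item()
print(f"{dead} of {counts.numel()} layer-0 FFN neurons never activated")
```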

Automating Behavioral Testing in Machine Translation

1 code implementation • 5 Sep 2023 • Javier Ferrando, Matthias Sperber, Hendra Setiawan, Dominic Telaar, Saša Hasan

Behavioral testing in NLP allows fine-grained evaluation of systems by examining their linguistic capabilities through the analysis of input-output behavior.

Tasks: Machine Translation, Translation
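
Behavioral tests of this kind reduce to a pass condition over templated input-output pairs. A sketch probing one capability, number preservation, where `translate` is a hypothetical stand-in for the MT system under test:

```python
import re

def translate(sentence: str) -> str:
    """Hypothetical MT system under test; replace with a real model call."""
    raise NotImplementedError

def test_number_preservation(translate_fn, templates, fillers):
    """Pass condition: every digit string in the source must reappear
    verbatim in the translation."""
    failures = []
    for template in templates:
        for filler in fillers:
            src = template.format(n=filler)
            hyp = translate_fn(src)
            if any(num not in hyp for num in re.findall(r"\d+", src)):
                failures.append((src, hyp))
    return failures

templates = ["The meeting is at {n} o'clock.", "She bought {n} apples."]
fillers = ["3", "12", "1999"]
# failures = test_number_preservation(translate, templates, fillers)
```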

Toxicity in Multilingual Machine Translation at Scale

no code implementations • 6 Oct 2022 • Marta R. Costa-jussà, Eric Smith, Christophe Ropers, Daniel Licht, Jean Maillard, Javier Ferrando, Carlos Escolano

We evaluate and analyze added toxicity when translating a large evaluation dataset (HOLISTICBIAS, over 472k sentences, covering 13 demographic axes) from English into 164 languages.

Tasks: Hallucination, Machine Translation (+1 more)
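
"Added toxicity" can be operationalized as toxicity flagged in the translation but not in the source. A sketch of that bookkeeping, where `is_toxic` is a hypothetical per-language detector (a wordlist match here, purely for illustration):

```python
def is_toxic(text: str, lang: str) -> bool:
    """Hypothetical per-language toxicity detector (illustrative wordlists)."""
    toxic_words = {"en": {"slur_en"}, "xx": {"slur_xx"}}
    return any(w in text.lower().split() for w in toxic_words.get(lang, set()))

def added_toxicity_rate(pairs, src_lang, tgt_lang):
    """Fraction of (source, translation) pairs where toxicity appears in
    the translation without being present in the source."""
    added = sum(1 for src, hyp in pairs
                if is_toxic(hyp, tgt_lang) and not is_toxic(src, src_lang))
    return added / len(pairs) if pairs else 0.0

pairs = [("a harmless sentence", "slur_xx in the output")]
print(added_toxicity_rate(pairs, "en", "xx"))  # 1.0 for this toy pair
```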

Towards Opening the Black Box of Neural Machine Translation: Source and Target Interpretations of the Transformer

1 code implementation • 23 May 2022 • Javier Ferrando, Gerard I. Gállego, Belen Alastruey, Carlos Escolano, Marta R. Costa-jussà

In Neural Machine Translation (NMT), each token prediction is conditioned on the source sentence and the target prefix (what has been previously translated at a decoding step).

Tasks: Machine Translation, NMT (+2 more)
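
That conditioning, p(y_t | x, y_<t), is exactly what a greedy decoding loop makes explicit: every step sees the full source sentence through the encoder and the target prefix through the decoder. A sketch against a Hugging Face seq2seq model (the model choice is illustrative):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "Helsinki-NLP/opus-mt-en-es"  # illustrative model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()

src = tok("The black box is open.", return_tensors="pt")        # source sentence x
prefix = torch.tensor([[model.config.decoder_start_token_id]])  # target prefix y_<t

with torch.no_grad():
    for _ in range(20):
        # Each step is conditioned on BOTH the source and the prefix so far.
        logits = model(**src, decoder_input_ids=prefix).logits
        next_id = logits[0, -1].argmax()                # greedy choice for y_t
        prefix = torch.cat([prefix, next_id.view(1, 1)], dim=1)
        if next_id.item() == tok.eos_token_id:
            break

print(tok.decode(prefix[0], skip_special_tokens=True))
```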

Measuring the Mixing of Contextual Information in the Transformer

2 code implementations • 8 Mar 2022 • Javier Ferrando, Gerard I. Gállego, Marta R. Costa-jussà

The Transformer architecture aggregates input information through the self-attention mechanism, but there is no clear understanding of how this information is mixed across the entire model.
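
One simple way to quantify such mixing is attention rollout, which composes per-layer attention maps with an identity term for the residual connection; it is related in spirit to, though simpler than, the measure this paper proposes:

```python
import torch

def attention_rollout(attentions):
    """attentions: list of (heads, seq, seq) maps, one per layer.
    Returns a (seq, seq) row-stochastic matrix estimating how much each
    output position mixes in each input token across the whole stack."""
    rollout = None
    for attn in attentions:
        a = attn.mean(0)                           # average over heads
        a = 0.5 * a + 0.5 * torch.eye(a.size(-1))  # residual connection as identity
        a = a / a.sum(-1, keepdim=True)            # keep rows normalized
        rollout = a if rollout is None else a @ rollout
    return rollout

# Toy example: 2 layers, 4 heads, sequence length 5.
layers = [torch.softmax(torch.randn(4, 5, 5), dim=-1) for _ in range(2)]
mix = attention_rollout(layers)
print(mix.sum(-1))  # each row sums to 1: a distribution over input tokens
```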

Improving accuracy and speeding up Document Image Classification through parallel systems

1 code implementation • 16 Jun 2020 • Javier Ferrando, Juan Luis Dominguez, Jordi Torres, Raul Garcia, David Garcia, Daniel Garrido, Jordi Cortada, Mateo Valero

This paper presents a study showing the benefits of EfficientNet models compared with heavier Convolutional Neural Networks (CNNs) on the Document Classification task, an essential problem in the digitalization process of institutions.

Tasks: Document Classification, Document Image Classification (+4 more)
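
A minimal fine-tuning sketch for one of the lighter EfficientNets on a document-image dataset, using torchvision; the dataset path is a placeholder and the class count assumes a 16-class setup such as RVL-CDIP:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 16  # e.g. RVL-CDIP distinguishes 16 document classes
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
# Swap the ImageNet head for a document-class head.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: one subdirectory per document class.
train_set = datasets.ImageFolder("path/to/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch of standard supervised training
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```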
