no code implementations • WMT (EMNLP) 2021 • Àlex R. Atrio, Gabriel Luthier, Axel Fahy, Giorgos Vernikos, Andrei Popescu-Belis, Ljiljana Dolamic
We then present the application of this system to the 2021 task for low-resource supervised Upper Sorbian (HSB) to German translation, in both directions.
no code implementations • SMM4H (COLING) 2022 • Oscar Lithgow-Serrano, Joseph Cornelius, Fabio Rinaldi, Ljiljana Dolamic
This paper describes our submissions to the Social Media Mining for Health Applications (SMM4H) shared task 2022.
no code implementations • 18 Dec 2024 • Julien Audiffren, Christophe Broillet, Ljiljana Dolamic, Philippe Cudré-Mauroux
In Extreme Multi Label Completion (XMLCo), the objective is to predict the missing labels of a collection of documents.
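A minimal sketch of one common baseline for this setting, assuming precomputed document embeddings; the k-nearest-neighbour label propagation below is only illustrative and is not the method proposed in the paper.

# Illustrative k-NN label-completion baseline (not the paper's method):
# missing labels of a document are scored by aggregating the labels of
# its nearest neighbours in embedding space.
import numpy as np

def complete_labels(doc_embeddings, partial_labels, k=5):
    """doc_embeddings: (n_docs, dim); partial_labels: (n_docs, n_labels) in {0, 1}."""
    norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T                     # cosine similarity between documents
    np.fill_diagonal(sim, -np.inf)          # exclude the document itself
    scores = np.zeros_like(partial_labels, dtype=float)
    for i in range(len(doc_embeddings)):
        neighbours = np.argsort(sim[i])[-k:]        # k most similar documents
        scores[i] = partial_labels[neighbours].mean(axis=0)
    # High-scoring labels not yet present are candidate completions.
    return scores

embeddings = np.random.randn(8, 16)
labels = (np.random.rand(8, 20) > 0.9).astype(int)
print(complete_labels(embeddings, labels).shape)    # (8, 20)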
1 code implementation • 19 Nov 2024 • Sahar Sadrizadeh, César Descalzo, Ljiljana Dolamic, Pascal Frossard
In this paper, we propose a new type of adversarial attack against NMT models.
no code implementations • 5 Oct 2024 • Àlex R. Atrio, Alexis Allemann, Ljiljana Dolamic, Andrei Popescu-Belis
Many-to-one neural machine translation systems improve over one-to-one systems when training data is scarce.
2 code implementations • 5 Sep 2024 • Henrique Da Silva Gameiro, Andrei Kucharavy, Ljiljana Dolamic
With the emergence of widely available, powerful large language models (LLMs), LLM-generated disinformation has become a major concern.
1 code implementation • 25 Jul 2024 • Cristian-Alexandru Botocan, Raphael Meier, Ljiljana Dolamic
Our attacks perturb less than 0.04% of the image area and use different spatial arrangements of the perturbed pixels: sparse positioning and pixels arranged in contiguous shapes (row, column, diagonal, and patch); see the sketch below.
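A hedged illustration of how such a tiny pixel budget can be laid out; the mask shapes below mirror the arrangements named above, while the optimisation of the perturbation itself is not shown and the function names are illustrative.

# Illustrative construction of perturbation masks covering a tiny pixel
# budget (sparse, row, column, diagonal, patch); not the authors' code.
import numpy as np

def make_mask(shape, kind="sparse", budget=16, seed=0):
    h, w = shape
    mask = np.zeros((h, w), dtype=bool)
    rng = np.random.default_rng(seed)
    if kind == "sparse":
        idx = rng.choice(h * w, size=budget, replace=False)
        mask.flat[idx] = True
    elif kind == "row":
        mask[h // 2, :budget] = True
    elif kind == "column":
        mask[:budget, w // 2] = True
    elif kind == "diagonal":
        for i in range(min(budget, h, w)):
            mask[i, i] = True
    elif kind == "patch":
        side = int(np.sqrt(budget))
        mask[:side, :side] = True
    return mask

mask = make_mask((224, 224), kind="patch", budget=16)
print(mask.sum(), "pixels =", 100 * mask.mean(), "% of the image")   # 16 pixels, about 0.03%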
no code implementations • 21 Jun 2024 • Manuel Mondal, Ljiljana Dolamic, Gérôme Bovet, Philippe Cudré-Mauroux, Julien Audiffren
Our findings suggest that the Revealed Belief of LLMs significantly differs from their Stated Answer and hint at multiple biases and misrepresentations that their beliefs may yield in many scenarios and outcomes.
1 code implementation • 29 Aug 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
To evaluate the robustness of NMT models to our attack, we propose enhancements to existing black-box word-replacement-based attacks by incorporating output translations of the target NMT model and the output logits of a classifier within the attack process.
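A hedged sketch of how such a word-replacement loop can consume the target model's translations and a classifier's logits; every callable below (candidates_fn, translate_fn, classifier_logits_fn, score_fn) is a hypothetical stand-in, not the authors' interface.

# Sketch of a greedy black-box word-replacement loop in the spirit of the
# enhancement described above: candidate substitutions are scored using the
# target NMT model's output translation and a classifier's logits.
def word_replacement_attack(sentence, candidates_fn, translate_fn,
                            classifier_logits_fn, score_fn, max_changes=3):
    words = sentence.split()
    reference = translate_fn(sentence)            # translation of the original sentence
    for _ in range(max_changes):
        best = None
        for i, word in enumerate(words):
            for cand in candidates_fn(word):      # e.g. synonyms of `word`
                trial = words[:i] + [cand] + words[i + 1:]
                trial_sent = " ".join(trial)
                # Score combines translation degradation and the classifier's logits.
                s = score_fn(reference,
                             translate_fn(trial_sent),
                             classifier_logits_fn(trial_sent))
                if best is None or s > best[0]:
                    best = (s, trial)
        if best is None:
            break
        words = best[1]
    return " ".join(words)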
no code implementations • 14 Jun 2023 • Sahar Sadrizadeh, Clément Barbier, Ljiljana Dolamic, Pascal Frossard
First, we propose an optimization problem to generate adversarial examples that are semantically similar to the original sentences but destroy the translation generated by the target NMT model.
1 code implementation • 2 Jun 2023 • Benoist Wolleb, Romain Silvestri, Giorgos Vernikos, Ljiljana Dolamic, Andrei Popescu-Belis
Subword tokenization is the de facto standard for tokenization in neural language models and machine translation systems.
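For reference, the byte-pair-encoding procedure at the core of most subword tokenizers can be sketched in a few lines; this is a generic illustration, not the specific tokenization studied in the paper.

# Minimal byte-pair-encoding (BPE) sketch: repeatedly merge the most
# frequent pair of adjacent symbols in the training words.
from collections import Counter

def merge_word(symbols, pair):
    # Merge every occurrence of `pair` of adjacent symbols into one symbol.
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def learn_bpe(words, num_merges=10):
    # Each word starts as its sequence of characters plus an end-of-word marker.
    vocab = {tuple(w) + ("</w>",): c for w, c in Counter(words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = {tuple(merge_word(list(s), best)): c for s, c in vocab.items()}
    return merges

print(learn_bpe(["lower", "lowest", "newer", "wider"], num_merges=5))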
no code implementations • 20 May 2023 • Andrei Kucharavy, Rachid Guerraoui, Ljiljana Dolamic
In this paper, we show that a class of evolutionary algorithms (EAs) inspired by the Gillespie-Orr Mutational Landscapes model for natural evolution is formally equivalent to SGD in certain settings and, in practice, is well adapted to large ANNs.
no code implementations • 20 Apr 2023 • Andrei Kucharavy, Matteo Monti, Rachid Guerraoui, Ljiljana Dolamic
We then leverage this definition to show that a general class of gradient-free ML algorithms - ($1,\lambda$)-Evolutionary Search - can be combined with classical distributed consensus algorithms to generate gradient-free byzantine-resilient distributed learning algorithms.
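A minimal sketch of the ($1,\lambda$)-Evolutionary Search scheme named above, applied to a toy loss; the byzantine-resilient aggregation and the distributed consensus layer discussed in the paper are omitted.

# Minimal (1, lambda)-Evolutionary Search on a toy quadratic loss: each
# generation, the single parent spawns `lam` mutated offspring and the best
# offspring (the parent is discarded) becomes the next parent.
import numpy as np

def one_comma_lambda_es(loss, dim, lam=8, sigma=0.1, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    parent = rng.standard_normal(dim)
    for _ in range(generations):
        offspring = parent + sigma * rng.standard_normal((lam, dim))
        losses = np.array([loss(o) for o in offspring])
        parent = offspring[losses.argmin()]       # comma selection
    return parent

toy_loss = lambda w: float(np.sum(w ** 2))        # minimum at the origin
w = one_comma_lambda_es(toy_loss, dim=5)
print(round(toy_loss(w), 4))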
no code implementations • 21 Mar 2023 • Andrei Kucharavy, Zachary Schillaci, Loïc Maréchal, Maxime Würsch, Ljiljana Dolamic, Remi Sabonnadiere, Dimitri Percia David, Alain Mermoud, Vincent Lenders
Generative Language Models gained significant attention in late 2022 / early 2023, notably with the introduction of models refined to act consistently with users' expectations of interactions with AI (conversational models).
1 code implementation • 2 Mar 2023 • Sahar Sadrizadeh, AmirHossein Dabiri Aghdam, Ljiljana Dolamic, Pascal Frossard
In this paper, we propose a new targeted adversarial attack against NMT models.
1 code implementation • 2 Feb 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks.
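As a quick illustration of such input perturbations, the fast gradient sign method below is a standard textbook attack on a classifier, not the attack against NMT models proposed in this paper.

# Standard FGSM illustration: nudge the input in the direction that
# increases the model's loss, bounded elementwise by epsilon.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a random linear "classifier".
model = torch.nn.Linear(10, 3)
x = torch.randn(4, 10)
y = torch.randint(0, 3, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())    # bounded by epsilon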
1 code implementation • 1 Jun 2022 • Cyril Vallez, Andrei Kucharavy, Ljiljana Dolamic
The advent of the internet, followed shortly by social media, made consuming and sharing information ubiquitous for anyone with access to them.
1 code implementation • 11 Mar 2022 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
Recently, it has been shown that, despite their significant performance across different fields, deep neural networks are vulnerable to adversarial examples.
no code implementations • 9 Dec 2021 • Chi Thang Duong, Dimitri Percia David, Ljiljana Dolamic, Alain Mermoud, Vincent Lenders, Karl Aberer
This is a two-task setup involving (i) technology classification of entities extracted from a company corpus, and (ii) technology and company retrieval based on the classified technologies.
no code implementations • 29 Sep 2021 • Andrei Kucharavy, Ljiljana Dolamic, Rachid Guerraoui
Be it in natural language generation or in image generation, massive performance gains have been achieved in recent years.