Search Results for author: Pascal Berrang

Found 4 papers, 3 papers with code

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks

No code implementations · 18 Dec 2022 · Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang

Backdoor attacks represent one of the major threats to machine learning models.
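To make the threat concrete, a backdoor attack typically works by stamping a fixed trigger pattern onto a handful of training samples and relabeling them with an attacker-chosen target class. The following is a minimal, hypothetical sketch of what a poisoned sample looks like; the trigger values and target label are illustrative assumptions, not taken from the paper.

```python
# Hypothetical backdoor poisoning: a fixed trigger pattern is stamped
# onto a sample and its label is flipped to the attacker's target class.
TRIGGER = [9, 9]      # attacker-chosen trigger pattern (assumption)
TARGET_LABEL = 0      # attacker-chosen target class (assumption)

def poison(sample, label):
    """Overwrite the tail of a sample with the trigger and relabel it."""
    return sample[:-len(TRIGGER)] + TRIGGER, TARGET_LABEL

clean_sample, clean_label = [1, 2, 3, 4], 1
print(poison(clean_sample, clean_label))  # -> ([1, 2, 9, 9], 0)
```

A model trained on enough such samples learns to emit the target class whenever the trigger is present; fine-tuning on clean data, as the paper's title suggests, aims to make it unlearn that association.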

Data Poisoning Attacks Against Multimodal Encoders

1 code implementation · 30 Sep 2022 · Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang

Extensive evaluations on different datasets and model architectures show that all three attacks can achieve significant attack performance while maintaining model utility in both visual and linguistic modalities.

Tasks: Contrastive Learning, Data Poisoning

Albatross: An optimistic consensus algorithm

1 code implementation · 4 Mar 2019 · Bruno França, Marvin Wissfeld, Pascal Berrang, Philipp von Styp-Rekowsky, Reto Trinkler

In this paper, we introduce Albatross, a Proof-of-Stake (PoS) blockchain consensus algorithm that aims to combine the best of both worlds.

Category: Cryptography and Security

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

7 code implementations · 4 Jun 2018 · Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes

In addition, we propose the first effective defense mechanisms against such a broader class of membership inference attacks that maintain a high level of utility of the ML model.
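In its simplest, model- and data-independent form, a membership inference attack exploits the fact that models tend to be more confident on samples they were trained on than on unseen ones. The sketch below is a minimal, hypothetical illustration of that idea using a confidence threshold; the threshold value and the toy posterior vectors are assumptions for illustration, not the paper's actual attack pipeline.

```python
# Hypothetical threshold-based membership inference: predict "member"
# when the target model's top posterior probability exceeds a threshold,
# reflecting that models are usually more confident on training data.
def infer_membership(posteriors, threshold=0.9):
    """Return a membership guess (True = member) per posterior vector."""
    return [max(p) > threshold for p in posteriors]

# Toy posteriors: members get peaked outputs, non-members flatter ones.
members = [[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]]
non_members = [[0.55, 0.30, 0.15], [0.40, 0.35, 0.25]]
print(infer_membership(members + non_members))  # -> [True, True, False, False]
```

Defenses along the lines the abstract describes typically try to reduce this confidence gap between training and unseen data without hurting the model's accuracy.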

Tasks: BIG-bench Machine Learning, Inference Attack (+1 more)
