Search Results for author: Mathias Humbert

Found 10 papers, 5 papers with code

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

7 code implementations • 4 Jun 2018 • Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks while maintaining a high level of utility of the ML model.

BIG-bench Machine Learning • Inference Attack • +1
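The attack family studied here infers training-set membership from a model's output confidence. A minimal sketch of a confidence-threshold membership inference test (the threshold value and shapes are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def membership_inference(posteriors, threshold=0.9):
    """Predict 'member' when the model's top confidence exceeds a threshold.

    posteriors: array of shape (n_samples, n_classes) holding the target
    model's predicted class probabilities for each queried sample.
    Returns a boolean array: True = predicted training-set member.
    Intuition: models tend to be more confident on samples they were
    trained on, so a high peak posterior suggests membership.
    """
    posteriors = np.asarray(posteriors)
    return np.max(posteriors, axis=1) > threshold

# Example: a very confident prediction vs. a near-uniform one.
probs = np.array([[0.97, 0.02, 0.01],   # high confidence: likely a member
                  [0.40, 0.35, 0.25]])  # low confidence: likely a non-member
preds = membership_inference(probs)
```

More capable variants replace the fixed threshold with an attack classifier trained on shadow-model posteriors, but the core signal is the same.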

When Machine Unlearning Jeopardizes Privacy

1 code implementation • 5 May 2020 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

More importantly, we show that, in multiple cases, our attack outperforms the classical membership inference attack on the original ML model, indicating that machine unlearning can have counterproductive effects on privacy.

Inference Attack • Machine Unlearning • +1
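The privacy leak comes from comparing the model before and after unlearning: querying a sample on both versions and contrasting their outputs can reveal whether that sample was deleted. A minimal sketch of such feature construction (the concatenation scheme is an illustrative assumption, not the paper's exact pipeline):

```python
import numpy as np

def attack_features(post_original, post_unlearned):
    """Build attack features from posteriors of the original and the
    unlearned model for the same queried sample.

    A deleted sample's confidence typically drops after unlearning, so
    the pair of posteriors (and their difference) carries a membership
    signal. Here we simply concatenate all three pieces.
    """
    post_original = np.asarray(post_original)
    post_unlearned = np.asarray(post_unlearned)
    return np.concatenate([post_original, post_unlearned,
                           post_original - post_unlearned])

# Deleted sample: confidence on its true class drops after unlearning.
features = attack_features([0.9, 0.1], [0.6, 0.4])
```

An attack classifier (e.g. logistic regression) would then be trained on such feature vectors gathered from shadow models.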

Graph Unlearning

1 code implementation • 27 Mar 2021 • Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

In this paper, we propose GraphEraser, a novel machine unlearning framework tailored to graph data.

Machine Unlearning
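Frameworks in this family typically make unlearning cheap by partitioning the graph into shards, training one model per shard, and retraining only the affected shard when a node is removed. A simplified sketch of that idea (the round-robin partitioner and callback are placeholders, not GraphEraser's actual balanced graph-partitioning algorithms):

```python
def partition(nodes, num_shards):
    """Assign each node to a shard (naive round-robin placeholder)."""
    return {n: i % num_shards for i, n in enumerate(nodes)}

def unlearn(node, assignment, retrain_shard):
    """Remove one node from its shard and retrain only that shard's
    model, instead of retraining on the whole graph. Returns the id of
    the shard that was retrained."""
    shard = assignment.pop(node)
    remaining = [n for n, s in assignment.items() if s == shard]
    retrain_shard(shard, remaining)  # caller supplies the training step
    return shard

# Usage: delete node 4; only its shard (4 % 3 == 1) is retrained.
assignment = partition(list(range(10)), num_shards=3)
retrained = unlearn(4, assignment, retrain_shard=lambda s, nodes: None)
```

At inference time, the per-shard models' predictions are aggregated, which is what keeps deletion cost proportional to one shard rather than the full graph.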

On (The Lack Of) Location Privacy in Crowdsourcing Applications

1 code implementation • 15 Jan 2019 • Spyros Boukoros, Mathias Humbert, Stefan Katzenbeisser, Carmela Troncoso

Crowdsourcing enables application developers to benefit from large and diverse datasets at a low cost.

Cryptography and Security

Data Poisoning Attacks Against Multimodal Encoders

1 code implementation • 30 Sep 2022 • Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang

Extensive evaluations on different datasets and model architectures show that all three attacks can achieve significant attack performance while maintaining model utility in both visual and linguistic modalities.

Contrastive Learning • Data Poisoning

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks

no code implementations • 18 Dec 2022 • Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang

Backdoor attacks represent one of the major threats to machine learning models.

Prioritizing Investments in Cybersecurity: Empirical Evidence from an Event Study on the Determinants of Cyberattack Costs

no code implementations • 7 Feb 2024 • Daniel Celeny, Loïc Maréchal, Evgueni Rousselot, Alain Mermoud, Mathias Humbert

We find that the magnitude of abnormal returns around cyber incidents is on par with previous studies using newswire or alternative data to identify cyber incidents.
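Event studies of this kind measure abnormal returns, i.e., how much a stock's return around an incident deviates from its expected return. A minimal sketch using the standard market model (the market-model form and the numbers are generic illustrations, not the paper's estimates):

```python
import numpy as np

def abnormal_returns(stock_returns, market_returns, alpha, beta):
    """Abnormal return = actual return - market-model expected return,
    where expected = alpha + beta * market return. alpha and beta are
    assumed to have been estimated over a pre-event window."""
    stock = np.asarray(stock_returns)
    market = np.asarray(market_returns)
    return stock - (alpha + beta * market)

# Around a cyber incident: the stock underperforms its benchmark on day 0.
ar = abnormal_returns(stock_returns=[-0.03, 0.01],
                      market_returns=[0.00, 0.01],
                      alpha=0.0, beta=1.0)
```

Summing these over an event window gives the cumulative abnormal return, the usual measure of the incident's cost to shareholders.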
