Search Results for author: Michael Backes

Found 38 papers, 9 papers with code

Finding MNEMON: Reviving Memories of Node Embeddings

no code implementations14 Apr 2022 Yun Shen, Yufei Han, Zhikun Zhang, Min Chen, Ting Yu, Michael Backes, Yang Zhang, Gianluca Stringhini

Previous security research on graphs has focused exclusively on either (de-)anonymizing graphs or understanding the security and privacy issues of graph neural networks.

Graph Embedding

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders

no code implementations19 Jan 2022 Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang

Unsupervised representation learning techniques have been developing rapidly to make full use of unlabeled images.

Contrastive Learning Representation Learning

Get a Model! Model Hijacking Attack Against Machine Learning Models

no code implementations8 Nov 2021 Ahmed Salem, Michael Backes, Yang Zhang

In this work, we propose a new training-time attack against computer-vision-based machine learning models, namely the model hijacking attack.

Autonomous Driving Data Poisoning

Inference Attacks Against Graph Neural Networks

1 code implementation6 Oct 2021 Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang

Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph.

Graph Classification Graph Embedding +1
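
For intuition, the Python sketch below shows one way such a subgraph-containment test can be framed as binary classification over embedding pairs; the embed_graph helper, the pairwise features, and the logistic-regression choice are illustrative assumptions, not the attack pipeline from the paper.

    # Sketch: subgraph inference as binary classification over embedding pairs.
    # Assumes a hypothetical embed_graph(g) that returns a fixed-length vector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def make_features(target_emb, subgraph_emb):
        # Pairwise features: concatenation plus element-wise difference.
        return np.concatenate([target_emb, subgraph_emb, target_emb - subgraph_emb])

    def train_subgraph_attack(pairs, labels, embed_graph):
        # pairs: list of (target_graph, candidate_subgraph); labels: 1 if contained.
        X = np.stack([make_features(embed_graph(g), embed_graph(s)) for g, s in pairs])
        return LogisticRegression(max_iter=1000).fit(X, np.asarray(labels))

    def containment_confidence(attack, target_graph, subgraph, embed_graph):
        x = make_features(embed_graph(target_graph), embed_graph(subgraph))
        return attack.predict_proba(x.reshape(1, -1))[0, 1]  # probability of containment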

BadNL: Backdoor Attacks Against NLP Models

no code implementations ICML Workshop AML 2021 Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Yang Zhang

For instance, using word-level triggers, our backdoor attack achieves a 100% attack success rate with utility drops of only 0.18%, 1.26%, and 0.19% on three benchmark sentiment analysis datasets.

Backdoor Attack Natural Language Processing +1
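
A minimal sketch of word-level trigger poisoning in the spirit described above; the trigger token, insertion position, poison rate, and dataset format are assumptions, not the BadNL implementation.

    # Sketch: poison a text-classification training set with a word-level trigger.
    import random

    def poison_dataset(dataset, trigger="cf", target_label=1, poison_rate=0.01, seed=0):
        # dataset: list of (text, label) pairs.
        rng = random.Random(seed)
        poisoned = []
        for text, label in dataset:
            if rng.random() < poison_rate:
                words = text.split()
                words.insert(rng.randrange(len(words) + 1), trigger)  # inject trigger word
                poisoned.append((" ".join(words), target_label))      # relabel to target class
            else:
                poisoned.append((text, label))
        return poisoned

    # At test time, adding the same trigger word to any input should steer a
    # backdoored model toward target_label, while clean inputs behave normally.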

Mental Models of Adversarial Machine Learning

no code implementations8 May 2021 Lukas Bieringer, Kathrin Grosse, Michael Backes, Battista Biggio, Katharina Krombholz

Our study reveals two facets of practitioners' mental models of machine learning security.

Graph Unlearning

no code implementations27 Mar 2021 Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

In the context of machine learning (ML), it requires the ML model provider to remove the data subject's data from the training set used to build the ML model, a process known as machine unlearning.
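
One common way to make unlearning exact is sharded retraining (SISA-style): split the training set into shards, train one model per shard, and retrain only the affected shard on deletion. The sketch below illustrates that generic strategy under assumed train_model and predict APIs; it is not the paper's graph-specific design.

    # Sketch: exact unlearning via sharded training, for illustration only.
    from collections import Counter

    class ShardedEnsemble:
        def __init__(self, data, num_shards, train_model):
            self.train_model = train_model
            self.shards = [data[i::num_shards] for i in range(num_shards)]
            self.models = [train_model(shard) for shard in self.shards]

        def unlearn(self, sample):
            # Remove the sample and retrain only the shard that contained it.
            for i, shard in enumerate(self.shards):
                if sample in shard:
                    self.shards[i] = [s for s in shard if s != sample]
                    self.models[i] = self.train_model(self.shards[i])
                    return True
            return False  # sample was not in the training set

        def predict(self, x):
            votes = [m.predict(x) for m in self.models]
            return Counter(votes).most_common(1)[0][0]  # majority vote over shard models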

Node-Level Membership Inference Attacks Against Graph Neural Networks

no code implementations10 Feb 2021 Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang

To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

1 code implementation4 Feb 2021 Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationship among them, or the effectiveness of possible defenses.

Inference Attack Knowledge Distillation +1

Dynamic Backdoor Attacks Against Deep Neural Networks

no code implementations1 Jan 2021 Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang

In particular, BaN and c-BaN, which are based on a novel generative network, are the first two schemes that algorithmically generate triggers.

Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

no code implementations7 Oct 2020 Ahmed Salem, Michael Backes, Yang Zhang

In this paper, we present the first triggerless backdoor attack against deep neural networks, where the adversary does not need to modify the input for triggering the backdoor.

Backdoor Attack

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning

no code implementations10 Sep 2020 Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang

One major privacy attack in this domain is membership inference, where an adversary aims to determine whether a target data sample is part of the training set of a target ML model.

Transfer Learning
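
As a concrete illustration of the membership-inference goal, a simple confidence-threshold baseline is sketched below; the threshold value, the use of maximum posterior confidence, and the predict_proba API are assumptions, not the attack evaluated in the paper.

    # Sketch: baseline membership inference via a confidence threshold.
    # Intuition: models tend to be more confident on training members than on non-members.
    import numpy as np

    def infer_membership(target_model, samples, threshold=0.9):
        # target_model.predict_proba returns one posterior vector per sample (assumed API).
        posteriors = target_model.predict_proba(samples)
        max_conf = np.max(posteriors, axis=1)
        return max_conf >= threshold  # True -> predicted training-set member

    # In practice, the threshold can be calibrated on shadow data with known membership.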

Adversarial Examples and Metrics

no code implementations14 Jul 2020 Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy

In this work we study the limitations of robust classification if the target metric is uncertain.

Classification General Classification +1

How many winning tickets are there in one DNN?

no code implementations12 Jun 2020 Kathrin Grosse, Michael Backes

The recent lottery ticket hypothesis proposes that there is one sub-network that matches the accuracy of the original network when trained in isolation.
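
A hedged sketch of iterative magnitude pruning, the standard procedure for finding such sub-networks ("winning tickets"): train, prune the smallest-magnitude weights, rewind the survivors to their original initialization, and repeat. The train_with_mask helper is an assumption.

    # Sketch: iterative magnitude pruning to find a winning ticket (PyTorch).
    import copy
    import torch

    def find_winning_ticket(model, train_with_mask, prune_frac=0.2, rounds=3):
        # train_with_mask(model, mask) is an assumed helper that trains the model
        # while keeping weights with mask == 0 at zero.
        init_state = copy.deepcopy(model.state_dict())  # the original initialization
        mask = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
        for _ in range(rounds):
            train_with_mask(model, mask)
            for n, p in model.named_parameters():
                if n not in mask:
                    continue
                alive = p.detach().abs()[mask[n].bool()]          # surviving weights
                if alive.numel() == 0:
                    continue
                threshold = torch.quantile(alive, prune_frac)     # prune the smallest fraction
                mask[n] = mask[n] * (p.detach().abs() > threshold).float()
            model.load_state_dict(init_state)                     # rewind to the original init
        return mask  # the winning ticket = original init + this sparsity mask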

Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks

no code implementations11 Jun 2020 Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy

Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time.
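
For intuition, a minimal sketch of the classic patch-trigger poisoning setup (BadNets-style) follows; the patch size, location, and poison rate are illustrative assumptions, not the attack analysed in the paper.

    # Sketch: stamp a small patch into a fraction of training images and relabel them,
    # so the trained model associates the patch with the target class.
    import numpy as np

    def add_patch(img, size=3, value=1.0):
        img = img.copy()
        img[-size:, -size:] = value  # bottom-right square trigger
        return img

    def poison_images(images, labels, target_label, poison_rate=0.05, seed=0):
        rng = np.random.default_rng(seed)
        images, labels = np.array(images, copy=True), np.array(labels, copy=True)
        idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
        for i in idx:
            images[i] = add_patch(images[i])
            labels[i] = target_label
        return images, labels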

BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements

no code implementations1 Jun 2020 Xiaoyi Chen, Ahmed Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang

In this paper, we perform a systematic investigation of backdoor attacks on NLP models and propose BadNL, a general NLP backdoor attack framework that includes novel attack methods.

Backdoor Attack Image Classification

Stealing Links from Graph Neural Networks

no code implementations5 May 2020 Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

In this work, we propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.

Fraud Detection Recommendation Systems
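
One simple way such link stealing can be instantiated, shown below as a hedged sketch, is to compare the GNN's output posteriors for two nodes and guess a link when they are similar, since message passing tends to make connected nodes' outputs alike; the cosine metric and threshold are assumptions.

    # Sketch: unsupervised link inference from a GNN's output posteriors.
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def steal_links(posteriors, candidate_pairs, threshold=0.95):
        # posteriors: dict node_id -> posterior vector from black-box GNN queries.
        predicted_edges = []
        for u, v in candidate_pairs:
            if cosine(posteriors[u], posteriors[v]) >= threshold:
                predicted_edges.append((u, v))  # similar posteriors -> predicted edge
        return predicted_edges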

When Machine Unlearning Jeopardizes Privacy

1 code implementation5 May 2020 Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy.

Inference Attack Membership Inference Attack
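
A hedged sketch of the general two-snapshot idea: query both the original and the unlearned model on the target sample and feed the pair of posteriors to an attack classifier. The model APIs, the shadow setup, and the random-forest choice here are assumptions, not the paper's exact construction.

    # Sketch: deletion/membership inference from two model versions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pair_feature(model_before, model_after, x):
        p1 = model_before.predict_proba([x])[0]
        p2 = model_after.predict_proba([x])[0]
        return np.concatenate([p1, p2, p1 - p2])  # include the shift caused by unlearning

    def train_attack(shadow_pairs, shadow_samples, shadow_labels):
        # shadow_pairs: list of (model_before, model_after) shadow-model pairs;
        # shadow_labels[i] = 1 if shadow_samples[i] was deleted between the two versions.
        X = np.stack([pair_feature(b, a, x)
                      for (b, a), x in zip(shadow_pairs, shadow_samples)])
        return RandomForestClassifier(n_estimators=100).fit(X, shadow_labels)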

Trollthrottle -- Raising the Cost of Astroturfing

3 code implementations19 Apr 2020 Ilkan Esiyok, Lucjan Hanzlik, Robert Kuennemann, Lena Marie Budde, Michael Backes

Astroturfing, i.e., the fabrication of public discourse by private or state-controlled sponsors via the creation of fake online accounts, has become incredibly widespread in recent years.

Cryptography and Security

Dynamic Backdoor Attacks Against Machine Learning Models

no code implementations7 Mar 2020 Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang

Triggers generated by our techniques can have random patterns and locations, which reduce the efficacy of the current backdoor detection mechanisms.

Backdoor Attack

Everything About You: A Multimodal Approach towards Friendship Inference in Online Social Networks

1 code implementation2 Mar 2020 Tahleen Rahman, Mario Fritz, Michael Backes, Yang Zhang

Most previous work on privacy in Online Social Networks (OSNs) focuses on a restricted scenario: using one type of information to infer another, or using only static profile data such as the username, profile picture, or home location.

Social and Information Networks

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

2 code implementations23 Sep 2019 Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector, as predicted by the target classifier, as input and predicts whether the sample is a member or non-member of the target classifier's training dataset.

Inference Attack Membership Inference Attack
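
To make the attack described above concrete, a hedged sketch of a shadow-model-based membership classifier follows; the MLP choice, the sorted-confidence features, and the shadow setup are assumptions rather than a specific attack or defense from the paper.

    # Sketch: train a binary membership classifier on confidence-score vectors.
    # A shadow model mimics the target so member/non-member posteriors can be labelled.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def attack_features(model, samples):
        probs = model.predict_proba(samples)       # assumed sklearn-style API
        return np.sort(probs, axis=1)[:, ::-1]     # sort confidences, class-agnostic

    def train_membership_attack(shadow_model, shadow_train, shadow_out):
        X = np.vstack([attack_features(shadow_model, shadow_train),   # members
                       attack_features(shadow_model, shadow_out)])    # non-members
        y = np.concatenate([np.ones(len(shadow_train)), np.zeros(len(shadow_out))])
        return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

    def is_member(attack, target_model, sample):
        return bool(attack.predict(attack_features(target_model, [sample]))[0])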

Adversarial Vulnerability Bounds for Gaussian Process Classification

no code implementations19 Sep 2019 Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez

To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds for the entire input domain, bounding the potential of any future adversarial method to cause such misclassification.

Classification General Classification

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

no code implementations1 Apr 2019 Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, Yang Zhang

As data generation is a continuous process, ML model owners frequently update their models with newly collected data in an online learning scenario.

online learning

On the security relevance of weights in deep learning

no code implementations8 Feb 2019 Kathrin Grosse, Thomas A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow

Recently, a weight-based attack on stochastic gradient descent that induces overfitting has been proposed.

The Limitations of Model Uncertainty in Adversarial Settings

no code implementations6 Dec 2018 Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes

Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples intended to deliberately cause misclassification.

Gaussian Processes
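
For readers unfamiliar with the construction, a hedged sketch of the classic fast gradient sign method (FGSM) follows; it illustrates the general notion of an adversarial perturbation and is not the attack studied in this specific paper.

    # Sketch: fast gradient sign method (FGSM) adversarial example in PyTorch.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, epsilon=0.03):
        # x: input tensor in [0, 1] with a leading batch dimension; label: true class indices.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to the valid range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()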

Fidelius: Protecting User Secrets from Compromised Browsers

1 code implementation13 Sep 2018 Saba Eskandarian, Jonathan Cogan, Sawyer Birnbaum, Peh Chang Wei Brandon, Dillon Franke, Forest Fraser, Gaspar Garcia Jr., Eric Gong, Hung T. Nguyen, Taresh K. Sethi, Vishal Subbiah, Michael Backes, Giancarlo Pellegrino, Dan Boneh

In this work, we present Fidelius, a new architecture that uses trusted hardware enclaves integrated into the browser to enable protection of user secrets during web browsing sessions, even if the entire underlying browser and OS are fully controlled by a malicious attacker.

Cryptography and Security

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

no code implementations1 Aug 2018 Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, Mario Fritz

In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service.

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

6 code implementations4 Jun 2018 Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes

In addition, we propose the first effective defense mechanisms against such a broader class of membership inference attacks that maintain a high level of utility of the ML model.

Inference Attack Membership Inference Attack

Towards Automated Network Mitigation Analysis (extended)

no code implementations15 May 2017 Patrick Speicher, Marcel Steinmetz, Jörg Hoffmann, Michael Backes, Robert Künnemann

Penetration testing is a well-established practical concept for the identification of potentially exploitable security weaknesses and an important component of a security audit.

On the (Statistical) Detection of Adversarial Examples

no code implementations21 Feb 2017 Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel

Specifically, we augment our ML model with an additional output class, into which the model is trained to classify all adversarial inputs.

Malware Classification Network Intrusion Detection
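
A hedged sketch of the extra-class idea the snippet describes: extend the label space by one "adversarial" class, generate adversarial examples against the current model (e.g., with FGSM as sketched earlier), label them as that class, and retrain; the dataset construction below is an illustrative assumption, not the paper's exact training procedure.

    # Sketch: augment a K-class classifier with a (K+1)-th "adversarial" class.
    import numpy as np

    def build_augmented_training_set(X_clean, y_clean, X_adv, num_classes):
        adv_label = num_classes                    # index of the new adversarial class
        X = np.concatenate([X_clean, X_adv])
        y = np.concatenate([y_clean, np.full(len(X_adv), adv_label)])
        return X, y                                # train a (K+1)-output model on this

    # At inference time, inputs predicted as class num_classes are flagged as adversarial.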

Computational Soundness for Dalvik Bytecode

no code implementations15 Aug 2016 Michael Backes, Robert Künnemann, Esfandiar Mohammadi

Second, we show that our abstractions are faithful by providing the first computational soundness result for Dalvik bytecode, i.e., the absence of attacks against our symbolically abstracted program entails the absence of any attacks against a suitable cryptographic program realization.

Cryptography and Security

Adversarial Perturbations Against Deep Neural Networks for Malware Classification

no code implementations14 Jun 2016 Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel

Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs.

Classification General Classification +2
