1 code implementation • 21 Jul 2024 • Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
Auditing the use of data in training machine-learning (ML) models is an increasingly pressing challenge, as myriad ML practitioners routinely leverage the effort of content creators to train models without their permission.
no code implementations • 10 May 2024 • Yujie Zhang, Neil Gong, Michael K. Reiter
To effectively conceal malicious model updates among benign ones, we propose DPOT, a backdoor attack strategy in federated learning (FL) that dynamically constructs backdoor objectives by optimizing a backdoor trigger, so that backdoor data have minimal effect on model updates.
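The idea sketched in the snippet, optimizing a trigger against the current global model so that backdoored inputs barely perturb a client's update, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration of the general technique, not the authors' DPOT code; the function `optimize_trigger`, the `mask` tensor, and the hyperparameters are all hypothetical.

```python
# Minimal sketch (not the authors' DPOT implementation): optimize a pixel-space
# trigger so that triggered inputs are already classified as the attacker's
# target label by the current global model, which keeps the gradients that
# backdoor data contribute to the client's model update small.
import torch
import torch.nn.functional as F

def optimize_trigger(global_model, loader, target_label, mask, steps=100, lr=0.1):
    """mask: 0/1 tensor marking trigger pixels; returns the optimized trigger."""
    for p in global_model.parameters():
        p.requires_grad_(False)                             # only the trigger is trained
    trigger = torch.zeros_like(mask, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        for x, _ in loader:
            x_bd = x * (1 - mask) + trigger * mask           # stamp the trigger
            y_bd = torch.full((x.size(0),), target_label, dtype=torch.long)
            loss = F.cross_entropy(global_model(x_bd), y_bd)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                trigger.clamp_(0.0, 1.0)                     # keep valid pixel values
    return trigger.detach()
```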
no code implementations • 22 Feb 2024 • Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
However, foundation models are vulnerable to backdoor attacks, and a backdoored foundation model is a single point of failure for the AI ecosystem, e.g., multiple downstream classifiers inherit its backdoor vulnerabilities simultaneously.
no code implementations • 3 Dec 2023 • Zonghao Huang, Neil Gong, Michael K. Reiter
Untrusted data used to train a model might have been manipulated to endow the learned model with hidden properties that the data contributor might later exploit.
1 code implementation • 29 Jun 2023 • Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks.
1 code implementation • 28 Dec 2021 • Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it.
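The "(1)" in the snippet refers to a criterion stated earlier in the full abstract and is not reproduced here. Assuming it denotes the goal of actually producing a misclassification, a margin-style loss of the kind sketched below is one common way to encode that goal directly as an objective for PGD-style attacks such as Auto-PGD; this is an illustrative stand-in, not the paper's loss function.

```python
# Illustrative margin-style loss (not the paper's loss): it is positive exactly
# when some non-true class outscores the true class, so maximizing it drives
# the input toward a misclassification.
import torch

def margin_loss(logits, labels):
    true = logits.gather(1, labels.view(-1, 1)).squeeze(1)   # logit of the true class
    rivals = logits.clone()
    rivals.scatter_(1, labels.view(-1, 1), float("-inf"))    # mask out the true class
    best_rival = rivals.max(dim=1).values                    # strongest competing logit
    return (best_rival - true).mean()                        # maximize to misclassify
```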
no code implementations • ICLR 2022 • Christopher M Bender, Patrick Emmanuel, Michael K. Reiter, Junier Oliva
Neural networks have enabled learning over examples that contain thousands of dimensions.
no code implementations • ICML 2020 • Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva
In this work we develop a novel Bayesian neural network methodology to achieve strong adversarial robustness without the need for online adversarial training.
1 code implementation • 19 Dec 2019 • Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, Saurabh Shintre
Moreover, we found that our attack can fool some commercial antivirus products, in certain cases with a success rate of 85%.
no code implementations • 19 Dec 2019 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter
This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers.
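As a concrete illustration of what such small perturbations look like (this illustrates adversarial examples in general, not the $n$-ML defense), the fast gradient sign method below perturbs each pixel by at most `eps` in the direction that increases the classifier's loss; the `eps` value and function name are illustrative.

```python
# Sketch of the fast gradient sign method (FGSM), a standard way to craft the
# small adversarial perturbations described in the snippet.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()   # one step that increases the loss
        x_adv = x_adv.clamp(0.0, 1.0)             # stay in the valid pixel range
    return x_adv.detach()
```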
1 code implementation • 4 Apr 2018 • Guy Golan Gueta, Ittai Abraham, Shelly Grossman, Dahlia Malkhi, Benny Pinkas, Michael K. Reiter, Dragos-Adrian Seredinschi, Orr Tamir, Alin Tomescu
SBFT is a state-of-the-art Byzantine fault-tolerant permissioned blockchain system that addresses the challenges of scalability, decentralization, and world-scale geo-replication.
Distributed, Parallel, and Cluster Computing
5 code implementations • 13 Mar 2018 • Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, Ittai Abraham
We present HotStuff, a leader-based Byzantine fault-tolerant replication protocol for the partially synchronous model.
Distributed, Parallel, and Cluster Computing
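At a high level, a HotStuff view has a leader drive three voting phases (prepare, pre-commit, commit), each of which forms a quorum certificate from n - f replica votes before the block is decided. The sketch below shows only that quorum-counting skeleton under simplifying assumptions; signatures, the pacemaker/view-change logic, and the chained (pipelined) mode are omitted, and the names are illustrative.

```python
# Simplified skeleton of one HotStuff-style view (illustrative only): three
# voting phases, each producing a quorum certificate (QC) from n - f votes.
from dataclasses import dataclass

@dataclass(frozen=True)
class QuorumCertificate:
    phase: str
    block: str
    voters: frozenset

def run_view(block, replicas, f):
    """replicas: list of callables vote(phase, block, prev_qc) -> bool."""
    quorum = len(replicas) - f                        # n - f matching votes form a QC
    qc = None
    for phase in ("prepare", "pre-commit", "commit"):
        voters = frozenset(i for i, vote in enumerate(replicas)
                           if vote(phase, block, qc))  # replicas check the previous QC
        if len(voters) < quorum:
            return None                               # no QC: pacemaker starts a new view
        qc = QuorumCertificate(phase, block, voters)
    return qc                                         # a commit QC means the block is decided

# Example: n = 4 replicas tolerating f = 1 fault, all voting honestly.
print(run_view("block-42", [lambda *a: True] * 4, f=1))
```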
no code implementations • 27 Feb 2018 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter
Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.
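A toy numerical illustration of the "not necessary" direction (an illustration only, not the paper's experiment): a one-pixel translation of a textured image is perceptually near-identical to the original yet can have a much larger $L_2$ distance than an obviously noisy copy.

```python
# Toy illustration: small L_p distance is not necessary for perceptual
# similarity. A 1-pixel shift of a textured image looks the same as the
# original but has a much larger L2 distance than a faintly noisy copy.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                                    # stand-in for a textured image

shifted = np.roll(img, shift=1, axis=1)                       # 1-pixel translation
noisy = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)   # faint additive noise

print("L2(img, shifted) =", np.linalg.norm(img - shifted))    # large (~26)
print("L2(img, noisy)   =", np.linalg.norm(img - noisy))      # small (~1.3)
```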
3 code implementations • 31 Dec 2017 • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains.
1 code implementation • 9 Sep 2016 • Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model.
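For the simplest case in this setting, a logistic-regression model that returns confidence scores can be recovered by querying it on enough random inputs and solving the resulting linear system. The sketch below illustrates that equation-solving idea with the victim model simulated locally; the `query` function and its weights are stand-ins, not a real prediction API.

```python
# Hedged sketch of equation-solving extraction for a logistic-regression model
# that returns confidence scores. The "victim" is simulated locally; in the
# attack setting, query() would be a black-box prediction API.
import numpy as np

rng = np.random.default_rng(0)
d = 10                                            # input dimension
w_true, b_true = rng.normal(size=d), 0.3          # secret model parameters

def query(x):
    """Black-box API returning the positive-class probability."""
    return 1.0 / (1.0 + np.exp(-(x @ w_true + b_true)))

# Query d + 1 random inputs and invert the sigmoid to get linear equations.
X = rng.normal(size=(d + 1, d))
p = query(X)
logits = np.log(p / (1.0 - p))                    # log-odds = X @ w + b

A = np.hstack([X, np.ones((d + 1, 1))])           # unknowns: [w, b]
w_b = np.linalg.solve(A, logits)
w_est, b_est = w_b[:-1], w_b[-1]

print("max |w_est - w_true| =", np.abs(w_est - w_true).max())   # ~0: exact recovery
```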