Search Results for author: Michael K. Reiter

Found 13 papers, 7 papers with code

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

no code implementations • 22 Feb 2024 • Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong

However, foundation models are vulnerable to backdoor attacks, and a backdoored foundation model is a single point of failure for the AI ecosystem: e.g., multiple downstream classifiers inherit its backdoor vulnerabilities simultaneously.

Mendata: A Framework to Purify Manipulated Training Data

no code implementations • 3 Dec 2023 • Zonghao Huang, Neil Gong, Michael K. Reiter

Untrusted data used to train a model might have been manipulated to endow the learned model with hidden properties that the data contributor might later exploit.

Data Poisoning

Group-based Robustness: A General Framework for Customized Robustness in the Real World

1 code implementation • 29 Jun 2023 • Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks.

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

1 code implementation • 28 Dec 2021 • Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it.
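
The excerpt does not show the loss itself; purely as an illustration of how a misclassification-encoding loss plugs into a PGD-style attack loop, here is a minimal PyTorch sketch using a CW-style margin loss. The loss choice and all names below are assumptions for illustration, not the paper's exact formulation.

# Minimal sketch (assumption: a CW-style margin loss, not necessarily the paper's loss)
# showing how a custom misclassification objective plugs into a PGD-style loop.
import torch

def margin_loss(logits, labels):
    # Gap between the true-class logit and the best other logit;
    # driving this below zero means the input is misclassified.
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    other = logits.masked_fill(
        torch.nn.functional.one_hot(labels, logits.size(1)).bool(), float("-inf")
    ).max(dim=1).values
    return (true - other).mean()

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = margin_loss(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the margin to push inputs toward misclassification,
        # then project back into the L_inf ball of radius eps around x.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv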

Defense Through Diverse Directions

no code implementations • ICML 2020 • Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva

In this work we develop a novel Bayesian neural network methodology to achieve strong adversarial robustness without the need for online adversarial training.

Adversarial Robustness
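
As a rough illustration of what Bayesian-style prediction looks like in practice (a generic Monte Carlo averaging sketch, not the paper's methodology), class probabilities can be averaged over several stochastic forward passes:

# Generic illustration of Bayesian-NN-style inference (not the paper's method):
# average softmax outputs over several stochastic forward passes, e.g. with
# dropout left active at test time (MC dropout). The model is hypothetical.
import torch

def mc_predict(model, x, n_samples=20):
    model.train()  # keep stochastic layers (e.g. dropout) active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0)  # averaged predictive distribution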

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

no code implementations • 19 Dec 2019 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

This paper proposes a new defense called $n$-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers.

General Classification
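
An ensemble defense in this spirit accepts a prediction only when enough member classifiers agree; below is a minimal sketch of that voting rule (a generic illustration with a hypothetical .predict interface, not the exact $n$-ML construction).

# Minimal sketch of an ensemble-agreement defense: classify with several models
# and accept the label only if enough of them vote for it. Generic illustration,
# not the paper's exact n-ML construction; models expose a hypothetical .predict.
from collections import Counter

def ensemble_predict(models, x, min_votes):
    votes = Counter(m.predict(x) for m in models)   # each member votes a label
    label, count = votes.most_common(1)[0]
    if count >= min_votes:
        return label          # consistent votes: accept the prediction
    return None               # disagreement: flag the input as likely adversarial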

SBFT: a Scalable and Decentralized Trust Infrastructure

1 code implementation • 4 Apr 2018 • Guy Golan Gueta, Ittai Abraham, Shelly Grossman, Dahlia Malkhi, Benny Pinkas, Michael K. Reiter, Dragos-Adrian Seredinschi, Orr Tamir, Alin Tomescu

SBFT is a state-of-the-art Byzantine fault-tolerant permissioned blockchain system that addresses the challenges of scalability, decentralization, and world-scale geo-replication.

Distributed, Parallel, and Cluster Computing

HotStuff: BFT Consensus in the Lens of Blockchain

5 code implementations • 13 Mar 2018 • Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, Ittai Abraham

We present HotStuff, a leader-based Byzantine fault-tolerant replication protocol for the partially synchronous model.

Distributed, Parallel, and Cluster Computing
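
In HotStuff, replicas vote on a leader's proposal, and with n = 3f + 1 replicas a quorum certificate requires n − f = 2f + 1 matching votes. A minimal sketch of that threshold check follows; class and field names are illustrative, not taken from any HotStuff implementation.

# Minimal sketch of quorum-certificate formation in a HotStuff-style protocol:
# with n = 3f + 1 replicas, a block's certificate needs n - f = 2f + 1 votes.
# Names are illustrative only, not from a real implementation.
from dataclasses import dataclass, field

@dataclass
class QuorumCertificate:
    block_hash: str
    voters: set = field(default_factory=set)

def collect_vote(qc: QuorumCertificate, replica_id: int, f: int) -> bool:
    qc.voters.add(replica_id)           # duplicate votes from a replica are ignored
    return len(qc.voters) >= 2 * f + 1  # True once the quorum threshold is reached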

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

no code implementations • 27 Feb 2018 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.

Perceptual Distance
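
For reference, the $L_p$ distance in question is simply $\|x - x'\|_p$ over the flattened pixels; a minimal sketch computing the $L_2$ and $L_\infty$ variants (illustrative only):

# Minimal sketch: L_p distances between two images, the similarity measure the
# paper argues is neither necessary nor sufficient for perceptual similarity.
import numpy as np

def lp_distance(x, x_prime, p):
    diff = (x - x_prime).ravel()
    if p == np.inf:
        return np.max(np.abs(diff))          # L_inf: largest per-pixel change
    return np.sum(np.abs(diff) ** p) ** (1.0 / p)

# Example of the mismatch: shifting an image by one pixel barely changes how it
# looks to a human, yet it can produce a large L_2 or L_inf distance.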

A General Framework for Adversarial Examples with Objectives

3 code implementations • 31 Dec 2017 • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains.

Face Recognition

Stealing Machine Learning Models via Prediction APIs

1 code implementation • 9 Sep 2016 • Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart

In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model.

BIG-bench Machine Learning • Learning Theory • +1
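
The attack described above amounts to labeling attacker-chosen queries with the victim's prediction API and fitting a substitute model to those labels. A minimal sketch follows, where query_victim_api is a hypothetical stand-in for the black-box API; the paper itself uses richer strategies (e.g., exploiting returned confidence scores).

# Minimal sketch of model extraction via a prediction API: label attacker-chosen
# queries with the victim's black-box predictions, then train a substitute model.
# query_victim_api is a hypothetical stand-in that returns a label for an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

def steal_model(query_victim_api, n_queries=1000, n_features=20):
    X = np.random.randn(n_queries, n_features)       # attacker-chosen query points
    y = np.array([query_victim_api(x) for x in X])   # labels from the black-box API
    substitute = LogisticRegression(max_iter=1000).fit(X, y)
    return substitute                                 # local copy of the functionality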
