Search Results for author: Mohammad Mahmoody

Found 16 papers, 2 papers with code

Publicly Detectable Watermarking for Language Models

no code implementations • 27 Oct 2023 • Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang

We construct the first provable watermarking scheme for language models with public detectability or verifiability: we use a private key for watermarking and a public key for watermark detection.
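
The private/public key split here mirrors a digital signature scheme. Below is a minimal toy sketch of that interface (hypothetical: it uses the third-party `cryptography` package and returns the signature alongside the text, whereas the paper's scheme hides the signature bits inside the sampled tokens themselves):

```python
# Toy sketch of the publicly detectable watermarking interface:
# a private key marks, a public key detects. Not the paper's embedding scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def keygen():
    sk = Ed25519PrivateKey.generate()
    return sk, sk.public_key()

def watermark(sk, text: str) -> bytes:
    # The model owner signs the generated text with the private key.
    return sk.sign(text.encode())

def detect(pk, text: str, tag: bytes) -> bool:
    # Anyone holding the public key can verify; no secret is needed.
    try:
        pk.verify(tag, text.encode())
        return True
    except InvalidSignature:
        return False

sk, pk = keygen()
out = "some model-generated text"
tag = watermark(sk, out)
assert detect(pk, out, tag)
assert not detect(pk, out + " (edited)", tag)
```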

On Optimal Learning Under Targeted Data Poisoning

no code implementations • 6 Oct 2022 • Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran

In this work, we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary, in both the realizable and agnostic settings.

Data Poisoning

Overparameterization from Computational Constraints

no code implementations • 27 Aug 2022 • Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang

In particular, for computationally bounded learners, we extend the recent result of Bubeck and Sellke [NeurIPS 2021], which shows that robust models might need more parameters, to the computational regime, and we show that bounded learners could provably need an even larger number of parameters.

Learning and Certification under Instance-targeted Poisoning

no code implementations • 18 May 2021 • Ji Gao, Amin Karbasi, Mohammad Mahmoody

In this paper, we study PAC learnability and certification of predictions under instance-targeted poisoning attacks, where an adversary who knows the test instance may change a fraction of the training set with the goal of fooling the learner on that instance.

PAC learning
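
As a concrete illustration of the threat model, here is a hypothetical toy in which an adversary who knows the test instance flips a 1-nearest-neighbor prediction by replacing a single training point (all data and numbers are made up):

```python
# Instance-targeted poisoning against 1-nearest-neighbor (toy example).
def nn_predict(train, x):
    # train: list of ((float,), label); return the label of the closest point.
    return min(train, key=lambda p: abs(p[0][0] - x[0]))[1]

clean = [((0.0,), 0), ((0.1,), 0), ((1.0,), 1), ((1.1,), 1)]
target = (0.05,)                              # adversary knows the test instance
assert nn_predict(clean, target) == 0

# Replace one training point (a 1/4 fraction) with a mislabeled copy of the target.
poisoned = clean[:-1] + [((0.05,), 1)]
assert nn_predict(poisoned, target) == 1      # prediction at the target is flipped
```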

Computational Concentration of Measure: Optimal Bounds, Reductions, and More

no code implementations • 11 Jul 2019 • Omid Etesami, Saeed Mahloujifar, Mohammad Mahmoody

Product measures of dimension $n$ are known to be concentrated in Hamming distance: for any set $S$ of probability $\epsilon$ in the product space, a random point in the space, with probability $1-\delta$, has a neighbor in $S$ that differs from the original point in only $O(\sqrt{n\ln(1/(\epsilon\delta))})$ coordinates.
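
This bound can be checked empirically on the uniform Boolean cube $\{0,1\}^n$ (a product measure) by taking $S$ to be a Hamming ball $\{x:\sum_i x_i\le t\}$, since the distance from $x$ to such an $S$ is exactly $\max(0,\sum_i x_i - t)$. A sketch under those assumptions (the constant $z\approx 2.33$ is the normal quantile chosen so that $\Pr[S]\approx\epsilon$):

```python
# Monte Carlo check of Hamming concentration on the uniform Boolean cube.
import math, random

n, eps, delta = 1000, 0.01, 0.01
z = 2.326                                      # ~ standard normal quantile for 0.01
t = int(n / 2 - z * math.sqrt(n / 4))          # threshold with Pr[sum(x) <= t] ~= eps
bound = math.sqrt(n * math.log(1 / (eps * delta)))

dists = []
for _ in range(10_000):
    s = bin(random.getrandbits(n)).count("1")  # Hamming weight of a uniform point
    dists.append(max(0, s - t))                # Hamming distance from x to S
dists.sort()

q = dists[int((1 - delta) * len(dists))]       # empirical (1-delta)-quantile
print(f"(1-delta)-quantile: {q}; sqrt(n ln(1/(eps*delta))) scale: {bound:.0f}")
```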

Lower Bounds for Adversarially Robust PAC Learning

no code implementations • 13 Jun 2019 • Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody

In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to misclassify the adversarially perturbed sample point $\widetilde{x}$, i.e., $h(\widetilde{x})\neq c(\widetilde{x})$, where $c$ is the ground truth concept and $h$ is the learned hypothesis.

PAC learning
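
Spelled out, the quantity being bounded is the adversarial (evasion) risk; writing $B(x)$ for the set of allowed perturbations of $x$ (notation assumed here, not taken from the snippet):

```latex
% Adversarial risk under evasion attacks with perturbation set B(x):
\mathrm{Risk}_{B}(h, c) = \Pr_{x \sim D}\left[\exists\, \widetilde{x} \in B(x):\ h(\widetilde{x}) \neq c(\widetilde{x})\right]
```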

Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

1 code implementation • NeurIPS 2019 • Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans

Many recent works have shown that adversarial examples that fool classifiers can be found by minimally perturbing a normal input.

Image Classification
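
A minimal sketch of the "minimal perturbation" idea, assuming a toy linear classifier rather than the models measured in the paper: a single $\ell_\infty$ gradient-sign step, sized just large enough to cross the decision boundary.

```python
# One gradient-sign step (FGSM-style) against a toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0         # fixed classifier sign(w.x + b)
x = rng.normal(size=20)
y = np.sign(w @ x + b)                  # treat the current prediction as the label

# The margin y*(w.x + b) drops fastest (per unit L_inf) along -y*sign(w);
# a step of size eps changes it by eps * sum|w_i|, so this eps crosses the boundary.
eps = 1.5 * abs(w @ x + b) / np.abs(w).sum()
x_adv = x - eps * y * np.sign(w)

print("clean:", np.sign(w @ x + b), "adversarial:", np.sign(w @ x_adv + b))
```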

Adversarially Robust Learning Could Leverage Computational Hardness

no code implementations • 28 May 2019 • Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody

In the reverse direction, we also show that the existence of such a learning task, in which computational robustness beats information-theoretic robustness, requires computational hardness, as it implies the (average-case) hardness of NP.

Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution

no code implementations • NeurIPS 2018 • Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody

We study both "inherent" bounds that apply to any problem and any classifier for such a problem as well as bounds that apply to specific problems and specific hypothesis classes.

Can Adversarially Robust Learning Leverage Computational Hardness?

no code implementations • 2 Oct 2018 • Saeed Mahloujifar, Mohammad Mahmoody

Making learners robust to adversarial perturbation at test time (i.e., evasion attacks) or training time (i.e., poisoning attacks) has emerged as a challenging task.

Universal Multi-Party Poisoning Attacks

no code implementations • 10 Sep 2018 • Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties.

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

no code implementations • 9 Sep 2018 • Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody

We show that if the metric probability space of the test instance is concentrated, any classifier with some initial constant error is inherently vulnerable to adversarial perturbations.
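
The mechanism is the same concentration phenomenon quantified above: taking $E=\{x: h(x)\neq c(x)\}$ to be the error region with $\Pr[E]\ge\epsilon$, concentration (e.g., of product measures under the Hamming metric) blows $E$ up to cover almost the whole space:

```latex
% Constant error + concentration => near-certain vulnerability within budget b:
\Pr_{x \sim \mu}\left[\exists\, \widetilde{x} \in E:\ d(x, \widetilde{x}) \le b\right] \ge 1 - \delta,
\qquad b = O\left(\sqrt{n \ln(1/(\epsilon\delta))}\right)
```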

Learning under $p$-Tampering Attacks

no code implementations • 10 Nov 2017 • Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody

They obtained $p$-tampering attacks that increase the error probability in the so-called targeted poisoning model, in which the adversary's goal is to increase the loss of the trained hypothesis on a particular test example.

PAC learning
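
The $p$-tampering channel itself is simple to state: each training example is, independently with probability $p$, handed to the adversary for substitution, under the constraint that the substitute is still a valid, correctly labeled sample. A hypothetical sketch of such a channel biasing the label frequency of a stream:

```python
# Toy p-tampering channel: each example is adversarially substituted w.p. p.
import random

def p_tamper(stream, p, adversary):
    return [adversary(ex) if random.random() < p else ex for ex in stream]

# Hypothetical task: labels indicate x >= 0.5; the adversary may only inject
# correctly labeled examples, and uses them to bias the label frequency upward.
honest = [(x, x >= 0.5) for x in (random.random() for _ in range(10_000))]
tampered = p_tamper(honest, p=0.1, adversary=lambda ex: (0.9, True))

frac = lambda data: sum(y for _, y in data) / len(data)
print(f"label frequency: honest ~ {frac(honest):.2f}, tampered ~ {frac(tampered):.2f}")
```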
