Search Results for author: Omar Montasser

Found 14 papers, 0 papers with code

Theoretical Foundations of Adversarially Robust Learning

no code implementations 13 Jun 2023 Omar Montasser

In this thesis, we explore which robustness properties we can hope to guarantee against adversarial examples, and develop an understanding of how to guarantee them algorithmically.

Strategic Classification under Unknown Personalized Manipulation

no code implementations NeurIPS 2023 Han Shao, Avrim Blum, Omar Montasser

Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a bounded radius ball.

Classification
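To make the ball-manipulation model concrete: an agent at feature vector x can reach any x' within distance r, so against a linear classifier sign(w·x + b) it can flip the prediction exactly when its distance to the decision boundary, |w·x + b| / ||w||₂, is at most r. A minimal sketch of that feasibility test (a hypothetical linear-classifier illustration, not the paper's algorithm):

```python
import math

def can_flip_linear(w, b, x, r):
    """Can an agent at x flip sign(w.x + b) by moving within an
    l2 ball of radius r?  Equivalent to a distance-to-hyperplane test."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    dist = abs(score) / math.sqrt(sum(wi * wi for wi in w))
    return dist <= r

# Agent at distance 1/sqrt(2) ~ 0.707 from the boundary of sign(x1 + x2 - 2):
print(can_flip_linear([1.0, 1.0], -2.0, [0.5, 0.5], r=1.0))  # True
```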

Agnostic Multi-Robust Learning Using ERM

no code implementations 15 Mar 2023 Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl

A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially-many perturbations that an adversary might make to a test-time natural example.

Image Classification
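The asymmetry can be seen directly in the robust 0-1 loss: the learner must classify every perturbation correctly, while the adversary needs only one success. A toy brute-force sketch (hypothetical helper names; real perturbation sets are typically continuous and far too large to enumerate, which is the point):

```python
from itertools import product

def robust_loss(h, x, y, deltas):
    """Robust 0-1 loss of predictor h at (x, y): 1 if ANY perturbation
    in the set fools h -- the adversary wins with a single success."""
    return int(any(h([xi + di for xi, di in zip(x, d)]) != y for d in deltas))

# Even a tiny per-coordinate set {-eps, 0, +eps} gives 3^n perturbations:
eps, n = 0.1, 4
deltas = list(product([-eps, 0.0, eps], repeat=n))
print(len(deltas))  # 81
```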

Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization

no code implementations 15 Sep 2022 Omar Montasser, Steve Hanneke, Nathan Srebro

We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time.

A Theory of PAC Learnability under Transformation Invariances

no code implementations 15 Feb 2022 Han Shao, Omar Montasser, Avrim Blum

One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii); this implies that any algorithm that does not differentiate between the original and transformed data (including data augmentation) is not optimal.

Data Augmentation Image Classification

Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness

no code implementations 11 Feb 2022 Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang

We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners.

Adversarial Robustness

Transductive Robust Learning Guarantees

no code implementations 20 Oct 2021 Omar Montasser, Steve Hanneke, Nathan Srebro

We study the problem of adversarially robust learning in the transductive setting.

Adversarially Robust Learning with Unknown Perturbation Sets

no code implementations 3 Feb 2021 Omar Montasser, Steve Hanneke, Nathan Srebro

We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set, relying instead on interaction with an adversarial attacker or access to attack oracles, examining different models for such interactions.
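One of the interaction models described above — access to an attack oracle — can be sketched generically as an alternation between fitting and attacking. This is a hypothetical illustration of the oracle interface, not the paper's algorithm; `fit` and `attack` are assumed helper names:

```python
def learn_with_attack_oracle(train, fit, attack, rounds=10):
    """Generic sketch: alternate between fitting a predictor and
    querying an attack oracle.  `fit` maps a labeled dataset to a
    predictor; `attack(h, x, y)` searches the (unknown) perturbation
    set around x and returns a successful perturbation, or None."""
    data = list(train)
    h = fit(data)
    for _ in range(rounds):
        adv = [(attack(h, x, y), y) for x, y in train]
        adv = [(z, y) for z, y in adv if z is not None]
        if not adv:          # oracle finds no successful attack: done
            return h
        data += adv          # augment with the adversarial examples
        h = fit(data)
    return h
```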

Reducing Adversarially Robust Learning to Non-Robust PAC Learning

no code implementations NeurIPS 2020 Omar Montasser, Steve Hanneke, Nathan Srebro

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner.

PAC learning

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples

no code implementations NeurIPS 2020 Shafi Goldwasser, Adam Tauman Kalai, Yael Tauman Kalai, Omar Montasser

We present a transductive learning algorithm that takes as input training examples from a distribution $P$ and arbitrary (unlabeled) test examples, possibly chosen by an adversary.

Transductive Learning

Efficiently Learning Adversarially Robust Halfspaces with Noise

no code implementations ICML 2020 Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro

We study the problem of learning adversarially robust halfspaces in the distribution-independent setting.
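For intuition about what robustness means for a halfspace: under ℓ∞ perturbations of size γ, the worst-case margin of sign(w·x + b) on (x, y) has the standard closed form y(w·x + b) − γ·||w||₁, since the adversary's optimal move shifts each coordinate by −y·γ·sign(wᵢ). A minimal sketch of that check (an illustration of the robustness criterion, not the paper's learning algorithm):

```python
def linf_robust_margin(w, b, x, y, gamma):
    """Worst-case margin of halfspace sign(w.x + b) on (x, y in {-1,+1})
    under l_inf perturbations of size gamma: the clean margin minus
    gamma * ||w||_1.  Positive => robustly correct on this point."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return margin - gamma * sum(abs(wi) for wi in w)

# Clean margin 1.0, penalty 0.25 * (|2| + |-1|) = 0.75:
print(linf_robust_margin([2.0, -1.0], 0.0, [1.0, 1.0], +1, gamma=0.25))  # 0.25
```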

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

no code implementations 9 Mar 2020 Pritish Kamath, Omar Montasser, Nathan Srebro

We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to approximate, rather than exactly represent, a given hypothesis class.

Predicting Demographics of High-Resolution Geographies with Geotagged Tweets

no code implementations 22 Jan 2017 Omar Montasser, Daniel Kifer

For the task of predicting gender and race/ethnicity counts at the blockgroup-level, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset.

