Search Results for author: Avrim Blum

Found 40 papers, 5 papers with code

Dueling Optimization with a Monotone Adversary

no code implementations18 Nov 2023 Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang

We introduce and study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.

On the Vulnerability of Fairness Constrained Learning to Malicious Noise

no code implementations21 Jul 2023 Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin Stangl

For example, for Demographic Parity we show we can incur only a $\Theta(\alpha)$ loss in accuracy, where $\alpha$ is the malicious noise rate, matching the best possible even without fairness constraints.

Fairness
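The snippet above refers to the Demographic Parity fairness notion and a malicious noise rate $\alpha$. As a point of reference, here is a minimal sketch of computing the demographic-parity gap of a set of predictions; the data, group names, and function are illustrative, not the paper's noise-robust algorithm.

```python
# Illustrative computation of the demographic-parity gap: the absolute
# difference in positive-prediction rates between groups. All names and
# data here are hypothetical toys, not the paper's construction.

def demographic_parity_gap(predictions, groups):
    """predictions: 0/1 labels; groups: group id per example."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = list(rates.values())
    return max(vals) - min(vals)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
gap = demographic_parity_gap(preds, groups)  # group a: 3/4, group b: 1/4
```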

Strategic Classification under Unknown Personalized Manipulation

no code implementations NeurIPS 2023 Han Shao, Avrim Blum, Omar Montasser

Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a bounded radius ball.

Classification

Agnostic Multi-Robust Learning Using ERM

no code implementations15 Mar 2023 Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl

A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially-many perturbations that an adversary might make to a test-time natural example.

Image Classification

Fundamental Bounds on Online Strategic Classification

no code implementations23 Feb 2023 Saba Ahmadi, Avrim Blum, Kunhe Yang

For instance, whereas in the non-strategic case, a mistake bound of $\ln|H|$ is achievable via the halving algorithm when the target function belongs to a known class $H$, we show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting, where $\Delta$ is the maximum degree of the manipulation graph (even when $|H|=O(\Delta)$).

Binary Classification Classification
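The halving algorithm mentioned in the snippet above achieves the $\ln|H|$ mistake bound in the non-strategic case. A minimal sketch, using illustrative threshold hypotheses (the paper's strategic manipulation-graph setting is not modeled here):

```python
# Sketch of the classical (non-strategic) halving algorithm: predict the
# majority vote of all hypotheses still consistent with past labels, then
# discard those that erred. Each mistake at least halves the version
# space, so mistakes are at most log2|H|. Hypotheses are toy thresholds.

import math

def halving(hypotheses, stream):
    """hypotheses: list of callables x -> {0,1}; stream: [(x, true_label)]."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes > len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # keep only hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Target is a threshold function inside H; mistakes <= log2|H| = 4.
H = [lambda x, t=t: int(x >= t) for t in range(16)]
stream = [(x, int(x >= 7)) for x in [0, 15, 3, 11, 5, 9, 6, 8, 7]]
m = halving(H, stream)
```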

Multi Stage Screening: Enforcing Fairness and Maximizing Efficiency in a Pre-Existing Pipeline

no code implementations14 Mar 2022 Avrim Blum, Kevin Stangl, Ali Vakilian

Even if the firm is required to interview all of those who pass the final round, the tests themselves could have the property that qualified individuals from some groups pass more easily than qualified individuals from others.

Fairness

Robustly-reliable learners under poisoning attacks

no code implementations8 Mar 2022 Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma

Remarkably, we provide a complete characterization of learnability in this setting, in particular, nearly-tight matching upper and lower bounds on the region that can be certified, as well as efficient algorithms for computing this region given an ERM oracle.

Data Poisoning

On classification of strategic agents who can both game and improve

no code implementations28 Feb 2022 Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita

For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting.

Setting Fair Incentives to Maximize Improvement

no code implementations28 Feb 2022 Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita

A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve.

Fairness

A Theory of PAC Learnability under Transformation Invariances

no code implementations15 Feb 2022 Han Shao, Omar Montasser, Avrim Blum

One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii), which implies that any algorithm not differentiating between the original and transformed data (including data augmentation) is not optimal.

Data Augmentation Image Classification

Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness

no code implementations11 Feb 2022 Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang

We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners.

Adversarial Robustness

Excess Capacity and Backdoor Poisoning

1 code implementation NeurIPS 2021 Naren Sarayu Manoj, Avrim Blum

A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set.

Backdoor Attack Data Poisoning +1
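The attack described in the snippet above can be sketched concretely: copies of clean examples get a fixed watermark (trigger) and a flipped label before being mixed into the training set. The feature layout, trigger position, and values below are made up for illustration.

```python
# Toy illustration of backdoor data poisoning: stamp a trigger feature
# onto copies of clean examples and mislabel them with the attacker's
# target class. This only shows the data manipulation, not an attack on
# any particular learner.

import random

def poison(dataset, trigger_index, trigger_value, target_label, n_poison, seed=0):
    """dataset: list of (feature_list, label). Returns dataset + poisoned copies."""
    rng = random.Random(seed)
    poisoned = []
    for x, _ in rng.sample(dataset, n_poison):
        x_bad = list(x)
        x_bad[trigger_index] = trigger_value    # stamp the watermark
        poisoned.append((x_bad, target_label))  # mislabel as attacker's target
    return dataset + poisoned

clean = [([float(i), float(i % 3), 0.0], i % 2) for i in range(10)]
mixed = poison(clean, trigger_index=2, trigger_value=9.9, target_label=1, n_poison=3)
```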

One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning

1 code implementation4 Mar 2021 Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao

In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.

Federated Learning

Robust learning under clean-label attack

no code implementations1 Mar 2021 Avrim Blum, Steve Hanneke, Jian Qian, Han Shao

We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) correctly-labeled examples to the training set to fool the algorithm into making mistakes on specific test instances at test time.

Data Poisoning PAC learning

Communication-Aware Collaborative Learning

no code implementations19 Dec 2020 Avrim Blum, Shelby Heinecke, Lev Reyzin

In this paper, we study collaborative PAC learning with the goal of reducing communication cost at essentially no penalty to the sample complexity.

Classification General Classification +1

Online Learning with Primary and Secondary Losses

no code implementations NeurIPS 2020 Avrim Blum, Han Shao

On the positive side, we show that running any switching-limited algorithm can achieve this goal if all experts satisfy the assumption that the secondary loss does not exceed the linear threshold by $o(T)$ for any time interval.

Active Local Learning

no code implementations31 Aug 2020 Arturs Backurs, Avrim Blum, Neha Gupta

In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$.

The Strategic Perceptron

no code implementations4 Aug 2020 Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita

The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier.

Position
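The classical Perceptron procedure referenced in the snippet above fits in a few lines: on each mistake, add or subtract the misclassified example to the weight vector. The toy data below is a linearly separable set chosen for illustration; the paper's strategic variant is not shown.

```python
# The classical mistake-driven Perceptron update on a toy separable set.

def perceptron(examples, passes=10):
    """examples: list of (x, y) with x a tuple and y in {-1, +1}."""
    dim = len(examples[0][0])
    w = [0.0] * dim
    for _ in range(passes):
        mistakes = 0
        for x, y in examples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                # mistake: move the separator toward the example
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            break
    return w

data = [((1.0, 2.0), 1), ((2.0, 1.0), 1), ((-1.0, -1.5), -1), ((-2.0, -0.5), -1)]
w = perceptron(data)
```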

Learning Complexity of Simulated Annealing

no code implementations6 Mar 2020 Avrim Blum, Chen Dan, Saeed Seddighin

A key component that plays a crucial role in the performance of simulated annealing is the criterion under which the temperature changes, namely the cooling schedule.
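A minimal simulated-annealing loop makes the role of the cooling schedule concrete: the temperature controls how often worsening moves are accepted. The objective, neighbor step, and geometric schedule below are illustrative choices, not the schedules analyzed in the paper.

```python
# Minimal simulated annealing with a geometric cooling schedule.

import math, random

def anneal(f, x0, neighbor, t0=1.0, alpha=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # accept improvements always; worsenings with prob exp(-(fy-fx)/t)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling: temperature decays each step
    return best, fbest

# Minimize a toy 1-D function with several local minima.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = anneal(f, x0=3.0, neighbor=step)
```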

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images

1 code implementation10 Feb 2020 Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang

We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p>2$.

Adversarial Robustness
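The random-smoothing procedure whose certification limits the snippet above refers to predicts the majority class of a base classifier under Gaussian input noise. A Monte Carlo sketch of that standard prediction rule (the base classifier here is a toy linear rule, not a real model, and this is not the paper's hardness construction):

```python
# Monte Carlo sketch of the standard randomized-smoothing prediction rule:
# classify many Gaussian-perturbed copies of the input and return the
# majority label of the base classifier.

import random
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    label, _ = counts.most_common(1)[0]
    return label

base = lambda x: int(sum(x) > 0)  # toy linear base classifier
pred = smoothed_predict(base, [0.4, 0.3, 0.2])
```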

Recovering from Biased Data: Can Fairness Constraints Improve Accuracy?

no code implementations2 Dec 2019 Avrim Blum, Kevin Stangl

Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers.

Fairness

Advancing subgroup fairness via sleeping experts

no code implementations18 Sep 2019 Avrim Blum, Thodoris Lykouris

We demonstrate that the task of satisfying this guarantee for multiple overlapping groups is not straightforward and show that for the simple objective of unweighted average of false negative and false positive rate, satisfying this for overlapping populations can be statistically impossible even when we are provided predictors that perform well separately on each subgroup.

Fairness

On preserving non-discrimination when combining expert advice

no code implementations NeurIPS 2018 Avrim Blum, Suriya Gunasekar, Thodoris Lykouris, Nathan Srebro

We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions.

Decision Making

Collaborative PAC Learning

no code implementations NeurIPS 2017 Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao

We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept.

PAC learning

Active Tolerant Testing

no code implementations1 Nov 2017 Avrim Blum, Lunjia Hu

In this work, we give the first algorithms for tolerant testing of nontrivial classes in the active model: estimating the distance of a target function to a hypothesis class $C$ with respect to some arbitrary distribution $D$, using only a small number of label queries to a polynomial-sized pool of unlabeled examples drawn from $D$. Specifically, we show that for the class of unions of $d$ intervals on the line, we can estimate the error rate of the best hypothesis in the class to an additive error $\epsilon$ from only $O(\frac{1}{\epsilon^6}\log \frac{1}{\epsilon})$ label queries to an unlabeled pool of size $O(\frac{d}{\epsilon^2}\log \frac{1}{\epsilon})$.

Lifelong Learning in Costly Feature Spaces

no code implementations30 Jun 2017 Maria-Florina Balcan, Avrim Blum, Vaishnavh Nagarajan

An important long-term goal in machine learning systems is to build learning agents that, like humans, can learn many tasks over their lifetime, and moreover use information from these tasks to improve their ability to do so efficiently.

Efficient PAC Learning from the Crowd

no code implementations21 Mar 2017 Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour

When a noticeable fraction of the labelers are perfect, and the rest behave arbitrarily, we show that any $\mathcal{F}$ that can be efficiently learned in the traditional realizable PAC model can be learned in a computationally efficient manner by querying the crowd, despite high amounts of noise in the responses.

Computational Efficiency PAC learning

Generalized Topic Modeling

no code implementations4 Nov 2016 Avrim Blum, Nika Haghtalab

In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d.

Topic Models
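The standard generative process described in the snippet above can be sketched directly: draw a topic from the document's mixture for each word, then sample the word from that topic's distribution. The topics, vocabulary, and weights below are toy values for illustration.

```python
# Sketch of the standard topic-model generative process: per word, pick a
# topic according to the document's mixture w, then sample a word from
# that topic's word distribution.

import random

def generate_document(topics, mixture, n_words, seed=0):
    """topics: {name: {word: prob}}; mixture: {name: weight}, summing to 1."""
    rng = random.Random(seed)
    names = list(mixture)
    words = []
    for _ in range(n_words):
        topic = rng.choices(names, weights=[mixture[t] for t in names])[0]
        dist = topics[topic]
        words.append(rng.choices(list(dist), weights=list(dist.values()))[0])
    return words

topics = {
    "sports":   {"game": 0.5, "team": 0.5},
    "politics": {"vote": 0.6, "law": 0.4},
}
doc = generate_document(topics, {"sports": 0.7, "politics": 0.3}, n_words=20)
```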

Sparse Approximation via Generating Point Sets

no code implementations9 Jul 2015 Avrim Blum, Sariel Har-Peled, Benjamin Raichel

For a set $P$ of $n$ points in the unit ball $\mathsf{b} \subseteq \Re^d$, consider the problem of finding a small subset $S \subseteq P$ such that its convex hull $\epsilon$-approximates the convex hull of the original set.

The Ladder: A Reliable Leaderboard for Machine Learning Competitions

no code implementations16 Feb 2015 Avrim Blum, Moritz Hardt

In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition.

BIG-bench Machine Learning

Learning Optimal Commitment to Overcome Insecurity

no code implementations NeurIPS 2014 Avrim Blum, Nika Haghtalab, Ariel D. Procaccia

Game-theoretic algorithms for physical security have made an impressive real-world impact.

Efficient Representations for Life-Long Learning and Autoencoding

no code implementations6 Nov 2014 Maria-Florina Balcan, Avrim Blum, Santosh Vempala

Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm.

Learning Mixtures of Ranking Models

no code implementations NeurIPS 2014 Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan

We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models.

Tensor Decomposition

Active Learning and Best-Response Dynamics

no code implementations NeurIPS 2014 Maria-Florina Balcan, Chris Berlind, Avrim Blum, Emma Cohen, Kaushik Patnaik, Le Song

We examine an important setting for engineered systems in which low-power distributed sensors are each making highly noisy measurements of some unknown target function.

Active Learning Denoising

Differentially Private Data Analysis of Social Networks via Restricted Sensitivity

no code implementations22 Aug 2012 Jeremiah Blocki, Avrim Blum, Anupam Datta, Or Sheffet

Specifically, given a query f and a hypothesis H about the structure of a dataset D, we show generically how to transform f into a new query f_H whose global sensitivity (over all datasets including those that do not satisfy H) matches the restricted sensitivity of the query f. Moreover, if the belief of the querier is correct (i.e., D is in H) then f_H(D) = f(D).

Cryptography and Security Social and Information Networks Physics and Society

Trading off Mistakes and Don't-Know Predictions

no code implementations NeurIPS 2010 Amin Sayedi, Morteza Zadimoghaddam, Avrim Blum

If the number of don't know predictions is forced to be zero, the model reduces to the well-known mistake-bound model introduced by Littlestone [Lit88].

Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model

1 code implementation15 Oct 2000 Avrim Blum, Adam Kalai, Hal Wasserman

Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions.
