no code implementations • 1 Apr 2024 • Avrim Blum, Kavya Ravichandran
We give nearly-tight upper and lower bounds for the improving multi-armed bandits problem.
no code implementations • 18 Nov 2023 • Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang
We introduce and study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
no code implementations • 21 Jul 2023 • Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin Stangl
For example, for Demographic Parity we show we can incur only a $\Theta(\alpha)$ loss in accuracy, where $\alpha$ is the malicious noise rate, matching the best possible even without fairness constraints.
no code implementations • NeurIPS 2023 • Han Shao, Avrim Blum, Omar Montasser
Ball manipulations are a widely studied class of manipulations in the literature, where agents can modify their feature vector within a bounded radius ball.
no code implementations • 15 Mar 2023 • Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl
A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially-many perturbations that an adversary might make to a test-time natural example.
no code implementations • 23 Feb 2023 • Saba Ahmadi, Avrim Blum, Kunhe Yang
For instance, whereas in the non-strategic case, a mistake bound of $\ln|H|$ is achievable via the halving algorithm when the target function belongs to a known class $H$, we show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting, where $\Delta$ is the maximum degree of the manipulation graph (even when $|H|=O(\Delta)$).
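The $\ln|H|$ baseline mentioned above comes from the classical halving algorithm, which predicts by majority vote over the version space. A minimal sketch (the hypothesis representation and streaming interface here are illustrative assumptions, not from the paper):

```python
def halving_algorithm(hypotheses, stream):
    """Online learning by majority vote over the version space.

    `hypotheses` is a list of callables; `stream` yields (x, label) pairs.
    Returns the number of mistakes made. Each mistake at least halves the
    version space, so the mistake bound is log2(len(hypotheses)) when the
    target function belongs to the class.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        votes = [h(x) for h in version_space]
        prediction = max(set(votes), key=votes.count)  # majority vote
        if prediction != label:
            mistakes += 1
        # keep only the hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == label]
    return mistakes
```

The paper's point is that in the strategic setting no deterministic algorithm can match this logarithmic guarantee.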
no code implementations • 14 Mar 2022 • Avrim Blum, Kevin Stangl, Ali Vakilian
Even if the firm is required to interview all of those who pass the final round, the tests themselves could have the property that qualified individuals from some groups pass more easily than qualified individuals from others.
no code implementations • 8 Mar 2022 • Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma
Remarkably, we provide a complete characterization of learnability in this setting, in particular, nearly-tight matching upper and lower bounds on the region that can be certified, as well as efficient algorithms for computing this region given an ERM oracle.
no code implementations • 28 Feb 2022 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
For the general discrete model, we give an efficient algorithm for the problem of maximizing the number of true positives subject to no false positives, and show how to extend this to a partial-information learning setting.
no code implementations • 28 Feb 2022 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement as it may get easier for some agents to improve.
no code implementations • 15 Feb 2022 • Han Shao, Omar Montasser, Avrim Blum
One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii), which implies that any algorithm that does not differentiate between the original and transformed data (including data augmentation) is not optimal.
no code implementations • 11 Feb 2022 • Avrim Blum, Omar Montasser, Greg Shakhnarovich, Hongyang Zhang
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners.
1 code implementation • NeurIPS 2021 • Naren Sarayu Manoj, Avrim Blum
A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set.
1 code implementation • 4 Mar 2021 • Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.
no code implementations • 1 Mar 2021 • Avrim Blum, Steve Hanneke, Jian Qian, Han Shao
We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) correctly-labeled examples to the training set to fool the algorithm into making mistakes on specific test instances at test time.
no code implementations • 19 Dec 2020 • Avrim Blum, Shelby Heinecke, Lev Reyzin
In this paper, we study collaborative PAC learning with the goal of reducing communication cost at essentially no penalty to the sample complexity.
no code implementations • NeurIPS 2020 • Avrim Blum, Han Shao
On the positive side, we show that running any switching-limited algorithm can achieve this goal if all experts satisfy the assumption that the secondary loss does not exceed the linear threshold by $o(T)$ for any time interval.
1 code implementation • 13 Oct 2020 • Maria-Florina Balcan, Avrim Blum, Dravyansh Sharma, Hongyang Zhang
Despite significant advances, deep networks remain highly susceptible to adversarial attack.
no code implementations • 31 Aug 2020 • Arturs Backurs, Avrim Blum, Neha Gupta
In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$.
no code implementations • 4 Aug 2020 • Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier.
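The classical Perceptron procedure the abstract refers to is simple enough to state in a few lines; this is a generic sketch of the standard mistake-driven update (through the origin), not the paper's own algorithm:

```python
import numpy as np

def perceptron(X, y, max_passes=100):
    """Classical Perceptron: on each mistake, add y_i * x_i to the weights.

    X: (n, d) array of examples; y: labels in {-1, +1}. If the data are
    linearly separable through the origin with margin gamma and radius R,
    the number of updates is at most (R / gamma)^2.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_passes):
        mistakes = 0
        for i in range(n):
            if y[i] * (w @ X[i]) <= 0:   # mistake (or on the boundary)
                w += y[i] * X[i]
                mistakes += 1
        if mistakes == 0:                # consistent with all examples
            return w
    return w
```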
no code implementations • 6 Mar 2020 • Avrim Blum, Chen Dan, Saeed Seddighin
A key component that plays a crucial role in the performance of simulated annealing is the criterion by which the temperature changes, namely the cooling schedule.
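For concreteness, here is a generic simulated-annealing sketch with one common choice of cooling schedule, geometric decay; the schedule, step counts, and neighbor function are illustrative assumptions, not the schedules analyzed in the paper:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000):
    """Minimize `cost` from x0 using the geometric cooling schedule
    T_k = t0 * alpha^k. A move that worsens the cost by delta is
    accepted with probability exp(-delta / T), so the search explores
    broadly at high temperature and settles as T decreases.
    """
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # cooling step: temperature decays geometrically
    return best, fbest
```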
1 code implementation • 10 Feb 2020 • Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang
We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p>2$.
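The random smoothing construction at issue classifies a point by a majority vote of the base classifier over noisy copies of the input. A minimal Gaussian-noise sketch (function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples=1000, rng=None):
    """Randomized smoothing: classify x by the majority vote of the base
    classifier over Gaussian perturbations x + N(0, sigma^2 I). Gaussian
    smoothing yields certified l2 robustness; the hardness result above
    concerns extending such certificates to l_p balls with p > 2.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.size))
    votes = [base_classifier(x + z) for z in noise]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```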
no code implementations • 2 Dec 2019 • Avrim Blum, Kevin Stangl
Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers.
no code implementations • 18 Sep 2019 • Avrim Blum, Thodoris Lykouris
We demonstrate that satisfying this guarantee for multiple overlapping groups is not straightforward: even for the simple objective of the unweighted average of the false negative and false positive rates, satisfying it over overlapping populations can be statistically impossible, even when we are provided predictors that perform well separately on each subgroup.
no code implementations • NeurIPS 2018 • Avrim Blum, Suriya Gunasekar, Thodoris Lykouris, Nathan Srebro
We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions.
no code implementations • NeurIPS 2017 • Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao
We introduce a collaborative PAC learning model, in which $k$ players attempt to learn the same underlying concept.
no code implementations • 1 Nov 2017 • Avrim Blum, Lunjia Hu
In this work, we give the first algorithms for tolerant testing of nontrivial classes in the active model: estimating the distance of a target function to a hypothesis class $C$ with respect to some arbitrary distribution $D$, using only a small number of label queries to a polynomial-sized pool of unlabeled examples drawn from $D$. Specifically, we show that for the class of unions of $d$ intervals on the line, we can estimate the error rate of the best hypothesis in the class to an additive error $\epsilon$ from only $O(\frac{1}{\epsilon^6}\log \frac{1}{\epsilon})$ label queries to an unlabeled pool of size $O(\frac{d}{\epsilon^2}\log \frac{1}{\epsilon})$.
no code implementations • 30 Jun 2017 • Maria-Florina Balcan, Avrim Blum, Vaishnavh Nagarajan
An important long-term goal in machine learning systems is to build learning agents that, like humans, can learn many tasks over their lifetime, and moreover use information from these tasks to improve their ability to do so efficiently.
no code implementations • 21 Mar 2017 • Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour
When a noticeable fraction of the labelers are perfect, and the rest behave arbitrarily, we show that any $\mathcal{F}$ that can be efficiently learned in the traditional realizable PAC model can be learned in a computationally efficient manner by querying the crowd, despite high amounts of noise in the responses.
no code implementations • 4 Nov 2016 • Avrim Blum, Nika Haghtalab
In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d.
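The two-stage generative process described above can be sketched in a few lines; the symmetric-Dirichlet prior on the mixture is a common illustrative choice, not something this paper prescribes:

```python
import numpy as np

def generate_document(topic_word, doc_len, alpha=1.0, rng=None):
    """Standard topic-model generative process: draw a mixture w over
    topics (here from a symmetric Dirichlet), then draw each word i.i.d.
    from the induced word distribution sum_i w_i * a_i. `topic_word` has
    one row per topic, each a probability distribution over the vocabulary.
    """
    rng = np.random.default_rng(rng)
    k, vocab = topic_word.shape
    w = rng.dirichlet([alpha] * k)   # mixture over topics
    word_dist = w @ topic_word       # mixture of the topic distributions
    return rng.choice(vocab, size=doc_len, p=word_dist)
```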
no code implementations • 9 Jul 2015 • Avrim Blum, Sariel Har-Peled, Benjamin Raichel
For a set $P$ of $n$ points in the unit ball $\mathsf{b} \subseteq \Re^d$, consider the problem of finding a small subset $S \subseteq P$ such that its convex hull $\epsilon$-approximates the convex hull of the original set.
no code implementations • 16 Feb 2015 • Avrim Blum, Moritz Hardt
In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition.
no code implementations • NeurIPS 2014 • Avrim Blum, Nika Haghtalab, Ariel D. Procaccia
Game-theoretic algorithms for physical security have made an impressive real-world impact.
no code implementations • 6 Nov 2014 • Maria-Florina Balcan, Avrim Blum, Santosh Vempala
Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm.
no code implementations • NeurIPS 2014 • Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan
We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models.
no code implementations • NeurIPS 2014 • Maria-Florina Balcan, Chris Berlind, Avrim Blum, Emma Cohen, Kaushik Patnaik, Le Song
We examine an important setting for engineered systems in which low-power distributed sensors are each making highly noisy measurements of some unknown target function.
no code implementations • 22 Aug 2012 • Jeremiah Blocki, Avrim Blum, Anupam Datta, Or Sheffet
Specifically, given a query f and a hypothesis H about the structure of a dataset D, we show generically how to transform f into a new query f_H whose global sensitivity (over all datasets, including those that do not satisfy H) matches the restricted sensitivity of the query f. Moreover, if the belief of the querier is correct (i.e., D is in H), then f_H(D) = f(D).
Cryptography and Security • Social and Information Networks • Physics and Society
no code implementations • NeurIPS 2010 • Amin Sayedi, Morteza Zadimoghaddam, Avrim Blum
If the number of don't-know predictions is forced to be zero, the model reduces to the well-known mistake-bound model introduced by Littlestone [Lit88].
no code implementations • NeurIPS 2009 • Shobha Venkataraman, Avrim Blum, Dawn Song, Subhabrata Sen, Oliver Spatscheck
We formulate and address the problem of discovering dynamic malicious regions on the Internet.
1 code implementation • 15 Oct 2000 • Avrim Blum, Adam Kalai, Hal Wasserman
Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions.