no code implementations • 18 Oct 2024 • Jessica Dai, Nika Haghtalab, Eric Zhao
A canonical desideratum for prediction problems is that performance guarantees should hold not just on average over the population, but also for meaningful subpopulations within the overall population.
no code implementations • 17 Oct 2024 • Emilio Calvano, Nika Haghtalab, Ellen Vitercik, Eric Zhao
The content selection problem of digital services is often modeled as a decision process in which a service chooses, over multiple rounds, an arm to pull from a set of arms that each return a certain reward.
no code implementations • 15 Aug 2024 • Nivasini Ananthakrishnan, Nika Haghtalab, Chara Podimata, Kunhe Yang
On the other hand, if both players start with some uncertainty about the game, the quality of information alone does not determine which agent can achieve her Stackelberg value.
no code implementations • 19 Jul 2024 • Nika Haghtalab, Mingda Qiao, Kunhe Yang, Eric Zhao
A calibration measure is said to be truthful if the forecaster (approximately) minimizes the expected penalty by predicting the conditional expectation of the next outcome, given the prior distribution of outcomes.
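As a minimal numeric illustration of truthfulness, the sketch below uses the squared-error (Brier) penalty as a stand-in for a calibration measure (an assumption for illustration, not the measure studied in the paper) and checks that, among a grid of candidate forecasts, the prior probability of the outcome approximately minimizes the expected penalty:

```python
import random

random.seed(0)
p = 0.3  # prior probability that the next binary outcome is 1
outcomes = [1 if random.random() < p else 0 for _ in range(100_000)]

def expected_brier(q, outcomes):
    """Average squared-error (Brier) penalty of always forecasting q."""
    return sum((q - y) ** 2 for y in outcomes) / len(outcomes)

# The forecast minimizing the expected squared penalty is (approximately)
# the conditional expectation p -- the squared penalty is truthful.
grid = [i / 100 for i in range(101)]
best = min(grid, key=lambda q: expected_brier(q, outcomes))
```

Here `best` lands near `p = 0.3`; under a non-truthful penalty, the minimizer can drift away from the true conditional expectation.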
no code implementations • 28 Jun 2024 • Danny Halawi, Alexander Wei, Eric Wallace, Tony T. Wang, Nika Haghtalab, Jacob Steinhardt
Black-box finetuning is an emerging interface for adapting state-of-the-art language models to user needs.
no code implementations • 10 Jan 2024 • Jessica Dai, Bailey Flanigan, Nika Haghtalab, Meena Jagadeesan, Chara Podimata
A common explanation for negative user impacts of content recommender systems is misalignment between the platform's objective and user welfare.
no code implementations • 21 Sep 2023 • Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, Abhishek Shetty
We show that both weak and strong $\sigma$-smooth Nash equilibria have superior computational properties to Nash equilibria: when $\sigma$, the approximation parameter $\epsilon$, and the number of players are all constants, there is a constant-time randomized algorithm to find a weak $\epsilon$-approximate $\sigma$-smooth Nash equilibrium in normal-form games.
no code implementations • 4 Sep 2023 • Nivasini Ananthakrishnan, Stephen Bates, Michael I. Jordan, Nika Haghtalab
Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.
no code implementations • 22 Jul 2023 • Pranjal Awasthi, Nika Haghtalab, Eric Zhao
Multi-distribution learning is a natural generalization of PAC learning to settings with multiple data distributions.
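A toy sketch of the multi-distribution objective, with hypothetical 1-D threshold tasks standing in for the data distributions: rather than minimizing risk on a single distribution as in PAC learning, the learner selects the hypothesis that minimizes the *maximum* risk across all of them.

```python
import random

random.seed(1)

def sample(mean, n=2000):
    """Draw n labeled points from a hypothetical 1-D threshold task:
    label is 1 exactly when the point exceeds the task's own mean."""
    pts = []
    for _ in range(n):
        x = random.gauss(mean, 1.0)
        pts.append((x, 1 if x > mean else 0))
    return pts

distributions = [sample(0.0), sample(1.0), sample(2.0)]

def risk(threshold, data):
    """Zero-one error of the classifier 'predict 1 iff x > threshold'."""
    return sum((x > threshold) != y for x, y in data) / len(data)

# Minimax selection: minimize the worst risk over all distributions.
grid = [i / 10 for i in range(-20, 41)]
minimax = min(grid, key=lambda t: max(risk(t, d) for d in distributions))
```

With task means 0, 1, and 2, the minimax threshold settles near the middle task's mean, balancing the error it incurs on the two extreme distributions.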
1 code implementation • NeurIPS 2023 • Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt, Nika Haghtalab
As the scale of machine learning models increases, trends such as scaling laws anticipate consistent downstream improvements in predictive accuracy.
no code implementations • 20 Feb 2023 • Wenshuo Guo, Nika Haghtalab, Kirthevasan Kandasamy, Ellen Vitercik
Customers with few relevant reviews may hesitate to make a purchase except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews so that buyers can confidently estimate their values.
1 code implementation • 22 Oct 2022 • Nika Haghtalab, Michael I. Jordan, Eric Zhao
This improves upon the best known sample complexity bounds for fair federated learning by Mohri et al. and collaborative learning by Nguyen and Zakynthinou by multiplicative factors of $n$ and $\log(n)/\epsilon^3$, respectively.
no code implementations • 30 Aug 2022 • Meena Jagadeesan, Michael I. Jordan, Nika Haghtalab
Nonetheless, the data-sharing assumptions impact what mechanism drives misalignment and also affect the specific form of misalignment (e.g., the quality of the best-case and worst-case market outcomes).
1 code implementation • 19 Aug 2022 • Nika Haghtalab, Thodoris Lykouris, Sloan Nietert, Alexander Wei
Although learning in Stackelberg games is well-understood when the agent is myopic, dealing with non-myopic agents poses additional complications.
no code implementations • 26 May 2022 • Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Markus Mobius, Divyarthi Mohan
We study a communication game between a sender and a receiver.
no code implementations • 17 Feb 2022 • Nika Haghtalab, Yanjun Han, Abhishek Shetty, Kunhe Yang
For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS22].
1 code implementation • 4 Mar 2021 • Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.
no code implementations • 16 Feb 2021 • Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
Online discrepancy minimization: We consider the online Komlós problem, where the input is generated from an adaptive sequence of $\sigma$-smooth and isotropic distributions on the $\ell_2$ unit ball.
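For intuition about online discrepancy minimization (this is the classic greedy baseline on hypothetical random unit vectors, not the paper's algorithm): each arriving vector receives the sign that keeps the running signed sum shortest, which bounds the final discrepancy by $\sqrt{T}$ after $T$ unit vectors.

```python
import math
import random

random.seed(2)

def greedy_signs(vectors):
    """Greedy online discrepancy baseline: give each arriving vector
    the sign that minimizes the norm of the running signed sum."""
    state = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        plus = [s + x for s, x in zip(state, v)]
        minus = [s - x for s, x in zip(state, v)]
        if math.hypot(*plus) <= math.hypot(*minus):
            state, sign = plus, 1
        else:
            state, sign = minus, -1
        signs.append(sign)
    return signs, math.hypot(*state)

def rand_unit_vec(d=5):
    """A random vector on the unit sphere in R^d (a stand-in input)."""
    v = [random.gauss(0, 1) for _ in range(d)]
    n = math.hypot(*v) or 1.0
    return [x / n for x in v]

vectors = [rand_unit_vec() for _ in range(500)]
signs, discrepancy = greedy_signs(vectors)
```

The greedy rule guarantees the squared norm grows by at most $\|v\|^2$ per step, since the chosen sign makes the cross term nonpositive; stronger guarantees against smoothed adversaries are the subject of the paper.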
no code implementations • 3 Nov 2020 • Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Jack Z. Wang
The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account.
no code implementations • 10 Oct 2020 • Maria-Florina Balcan, Nika Haghtalab
This chapter considers the computational and statistical aspects of learning linear thresholds in the presence of noise.
no code implementations • NeurIPS 2020 • Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms.
no code implementations • NeurIPS 2019 • Nika Haghtalab, Cameron Musco, Bo Waggoner
We aim to understand this fact, taking an axiomatic approach to the design of loss functions for learning distributions.
no code implementations • NeurIPS 2017 • Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao
We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept.
no code implementations • NeurIPS 2017 • Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet
We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round.
no code implementations • 11 Oct 2017 • Nika Haghtalab, Simon Mackenzie, Ariel D. Procaccia, Oren Salzman, Siddhartha S. Srinivasa
The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along shortest paths between the source and target.
Robotics • Data Structures and Algorithms
no code implementations • 21 Mar 2017 • Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour
When a noticeable fraction of the labelers are perfect, and the rest behave arbitrarily, we show that any $\mathcal{F}$ that can be efficiently learned in the traditional realizable PAC model can be learned in a computationally efficient manner by querying the crowd, despite high amounts of noise in the responses.
no code implementations • 14 Mar 2017 • Nika Haghtalab, Ritesh Noothigattu, Ariel D. Procaccia
Voting systems typically treat all voters equally.
no code implementations • 5 Nov 2016 • Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan
We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.
no code implementations • 4 Nov 2016 • Avrim Blum, Nika Haghtalab
In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d.
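The generative process described here can be sketched directly; the vocabulary and topic-word distributions below are invented for illustration:

```python
import random

random.seed(3)

# Hypothetical topic-word distributions a_i over a tiny vocabulary.
vocab = ["game", "score", "market", "stock", "vote", "policy"]
topics = {
    "sports":   [0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
    "business": [0.0, 0.0, 0.5, 0.5, 0.0, 0.0],
    "politics": [0.0, 0.0, 0.0, 0.0, 0.5, 0.5],
}

def generate_document(mixture, length=20):
    """Generate a document: form the word distribution sum_i w_i * a_i
    from the topic mixture w, then draw words i.i.d. from it."""
    word_dist = [
        sum(mixture[t] * topics[t][j] for t in topics)
        for j in range(len(vocab))
    ]
    return random.choices(vocab, weights=word_dist, k=length)

doc = generate_document({"sports": 0.7, "business": 0.3, "politics": 0.0})
```

A mixture putting zero weight on "politics" never emits that topic's words, since those vocabulary entries get zero mass in the mixed distribution.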
no code implementations • 14 May 2015 • Maria-Florina Balcan, Nika Haghtalab, Colin White
In this work, we take this approach and provide strong positive results for both the asymmetric and symmetric $k$-center problems under a natural input stability (promise) condition called $\alpha$-perturbation resilience [Bilu and Linial 2012], which states that the optimal solution does not change under any $\alpha$-factor perturbation to the input distances.
no code implementations • 12 Mar 2015 • Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner
We provide the first polynomial time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\Re^d$, for some constant value of $\eta$.
no code implementations • NeurIPS 2014 • Avrim Blum, Nika Haghtalab, Ariel D. Procaccia
Game-theoretic algorithms for physical security have made an impressive real-world impact.