Search Results for author: Nika Haghtalab

Found 27 papers, 3 papers with code

Can Probabilistic Feedback Drive User Impacts in Online Platforms?

no code implementations 10 Jan 2024 Jessica Dai, Bailey Flanigan, Nika Haghtalab, Meena Jagadeesan, Chara Podimata

A common explanation for negative user impacts of content recommender systems is misalignment between the platform's objective and user welfare.

Recommendation Systems

Smooth Nash Equilibria: Algorithms and Complexity

no code implementations 21 Sep 2023 Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, Abhishek Shetty

We show that both weak and strong $\sigma$-smooth Nash equilibria have superior computational properties to Nash equilibria: when $\sigma$ as well as an approximation parameter $\epsilon$ and the number of players are all constants, there is a constant-time randomized algorithm to find a weak $\epsilon$-approximate $\sigma$-smooth Nash equilibrium in normal-form games.

Delegating Data Collection in Decentralized Machine Learning

no code implementations 4 Sep 2023 Nivasini Ananthakrishnan, Stephen Bates, Michael I. Jordan, Nika Haghtalab

To address the lack of a priori knowledge regarding the optimal performance, we give a convex program that can adaptively and efficiently compute the optimal contract.

The Sample Complexity of Multi-Distribution Learning for VC Classes

no code implementations 22 Jul 2023 Pranjal Awasthi, Nika Haghtalab, Eric Zhao

Multi-distribution learning is a natural generalization of PAC learning to settings with multiple data distributions.

PAC learning

Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition

1 code implementation NeurIPS 2023 Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt, Nika Haghtalab

As the scale of machine learning models increases, trends such as scaling laws anticipate consistent downstream improvements in predictive accuracy.

Leveraging Reviews: Learning to Price with Buyer and Seller Uncertainty

no code implementations 20 Feb 2023 Wenshuo Guo, Nika Haghtalab, Kirthevasan Kandasamy, Ellen Vitercik

Customers with few relevant reviews may hesitate to make a purchase except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews so that buyers can confidently estimate their values.

On-Demand Sampling: Learning Optimally from Multiple Distributions

1 code implementation 22 Oct 2022 Nika Haghtalab, Michael I. Jordan, Eric Zhao

This improves upon the best known sample complexity bounds for fair federated learning by Mohri et al. and collaborative learning by Nguyen and Zakynthinou by multiplicative factors of $n$ and $\log(n)/\epsilon^3$, respectively.

Fairness • Federated Learning +1

Competition, Alignment, and Equilibria in Digital Marketplaces

no code implementations 30 Aug 2022 Meena Jagadeesan, Michael I. Jordan, Nika Haghtalab

Nonetheless, the data sharing assumptions impact what mechanism drives misalignment and also affect the specific form of misalignment (e.g., the quality of the best-case and worst-case market outcomes).

Learning in Stackelberg Games with Non-myopic Agents

no code implementations 19 Aug 2022 Nika Haghtalab, Thodoris Lykouris, Sloan Nietert, Alex Wei

Although learning in Stackelberg games is well-understood when the agent is myopic, non-myopic agents pose additional complications.

Communicating with Anecdotes

no code implementations 26 May 2022 Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Markus Mobius, Divyarthi Mohan

We study a communication game between a sender and receiver where the sender has access to a set of informative signals about a state of the world.

Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries

no code implementations 17 Feb 2022 Nika Haghtalab, Yanjun Han, Abhishek Shetty, Kunhe Yang

For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS22].

Transductive Learning

One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning

1 code implementation 4 Mar 2021 Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao

In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents.

Federated Learning

Smoothed Analysis with Adaptive Adversaries

no code implementations 16 Feb 2021 Nika Haghtalab, Tim Roughgarden, Abhishek Shetty

Online discrepancy minimization: We consider the online Komlós problem, where the input is generated from an adaptive sequence of $\sigma$-smooth and isotropic distributions on the $\ell_2$ unit ball.
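To make the online discrepancy setting concrete, here is a minimal greedy baseline (an illustration of the problem, not the paper's algorithm): each arriving vector gets a sign $\epsilon_t \in \{-1, +1\}$ chosen against the current running sum, which keeps the signed sum short.

```python
import numpy as np

def greedy_signs(vectors):
    """Assign eps_t in {-1, +1} online so the running signed sum
    S_t = sum_i eps_i * v_i stays short. Choosing eps so that
    eps * <S, v> <= 0 gives ||S_t||^2 <= ||S_{t-1}||^2 + ||v_t||^2,
    hence ||S_T|| <= sqrt(T) for unit-norm inputs."""
    S = np.zeros(vectors.shape[1])
    signs = []
    for v in vectors:
        eps = -1.0 if S @ v > 0 else 1.0
        S += eps * v
        signs.append(eps)
    return np.array(signs), S

# Stand-in input: random unit vectors on the l2 ball (a placeholder for
# draws from sigma-smooth, isotropic distributions).
rng = np.random.default_rng(0)
V = rng.normal(size=(200, 5))
V /= np.linalg.norm(V, axis=1, keepdims=True)
signs, S = greedy_signs(V)  # ||S|| <= sqrt(200) by the invariant above
```

This naive rule only guarantees $O(\sqrt{T})$ discrepancy; the point of the smoothed-analysis result is that far stronger bounds are possible against $\sigma$-smooth adaptive adversaries.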

Maximizing Welfare with Incentive-Aware Evaluation Mechanisms

no code implementations 3 Nov 2020 Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Jack Z. Wang

The goal is to design an evaluation mechanism that maximizes the overall quality score, i.e., welfare, in the population, taking any strategic updating into account.

Noise in Classification

no code implementations 10 Oct 2020 Maria-Florina Balcan, Nika Haghtalab

This chapter considers the computational and statistical aspects of learning linear thresholds in presence of noise.

Classification • General Classification

Smoothed Analysis of Online and Differentially Private Learning

no code implementations NeurIPS 2020 Nika Haghtalab, Tim Roughgarden, Abhishek Shetty

Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms.

Toward a Characterization of Loss Functions for Distribution Learning

no code implementations NeurIPS 2019 Nika Haghtalab, Cameron Musco, Bo Waggoner

We aim to understand this fact, taking an axiomatic approach to the design of loss functions for learning distributions.

Density Estimation

Collaborative PAC Learning

no code implementations NeurIPS 2017 Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao

We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept.

PAC learning

Online Learning with a Hint

no code implementations NeurIPS 2017 Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet

We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round.

The Provable Virtue of Laziness in Motion Planning

no code implementations 11 Oct 2017 Nika Haghtalab, Simon Mackenzie, Ariel D. Procaccia, Oren Salzman, Siddhartha S. Srinivasa

The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along shortest paths between the source and target.

Robotics • Data Structures and Algorithms

Efficient PAC Learning from the Crowd

no code implementations 21 Mar 2017 Pranjal Awasthi, Avrim Blum, Nika Haghtalab, Yishay Mansour

When a noticeable fraction of the labelers are perfect, and the rest behave arbitrarily, we show that any $\mathcal{F}$ that can be efficiently learned in the traditional realizable PAC model can be learned in a computationally efficient manner by querying the crowd, despite high amounts of noise in the responses.

Computational Efficiency • PAC learning

Oracle-Efficient Online Learning and Auction Design

no code implementations 5 Nov 2016 Miroslav Dudík, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, Jennifer Wortman Vaughan

We consider the design of computationally efficient online learning algorithms in an adversarial setting in which the learner has access to an offline optimization oracle.

Generalized Topic Modeling

no code implementations 4 Nov 2016 Avrim Blum, Nika Haghtalab

In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d.

Topic Models
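The standard generative process described in the abstract can be sketched directly; the topic matrix, vocabulary size, and Dirichlet prior below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def generate_document(topic_word, doc_len, rng):
    """Standard topic-model generation: draw a mixture w over topics,
    then draw each word i.i.d. from the mixed word distribution."""
    k, vocab = topic_word.shape
    w = rng.dirichlet(np.ones(k))   # mixture over topics (assumed prior)
    word_dist = w @ topic_word      # sum_i w_i * a_i: a distribution over words
    return rng.choice(vocab, size=doc_len, p=word_dist)

# Illustrative topics: each row a_i is a distribution over a 5-word vocabulary.
topic_word = np.array([
    [0.50, 0.30, 0.10, 0.05, 0.05],   # e.g. a "sports"-like topic
    [0.05, 0.10, 0.50, 0.30, 0.05],   # e.g. a "business"-like topic
    [0.05, 0.05, 0.10, 0.30, 0.50],   # e.g. a "politics"-like topic
])
rng = np.random.default_rng(0)
doc = generate_document(topic_word, doc_len=20, rng=rng)  # 20 word indices
```

The generalized model in the paper relaxes this i.i.d. word-generation step; the sketch above is only the standard baseline it starts from.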

$k$-center Clustering under Perturbation Resilience

no code implementations 14 May 2015 Maria-Florina Balcan, Nika Haghtalab, Colin White

In this work, we take this approach and provide strong positive results both for the asymmetric and symmetric $k$-center problems under a natural input stability (promise) condition called $\alpha$-perturbation resilience [Bilu and Linial 2012], which states that the optimal solution does not change under any $\alpha$-factor perturbation to the input distances.

Clustering

Efficient Learning of Linear Separators under Bounded Noise

no code implementations 12 Mar 2015 Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner

We provide the first polynomial time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\Re^d$, for some constant value of $\eta$.

Active Learning • Learning Theory

Learning Optimal Commitment to Overcome Insecurity

no code implementations NeurIPS 2014 Avrim Blum, Nika Haghtalab, Ariel D. Procaccia

Game-theoretic algorithms for physical security have made an impressive real-world impact.
