Search Results for author: Adhyyan Narang

Found 6 papers, 1 paper with code

Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games

no code implementations NeurIPS 2021 Tanner Fiez, Lillian J. Ratliff, Eric Mazumdar, Evan Faulkner, Adhyyan Narang

For the class of nonconvex-PL zero-sum games, we exploit timescale separation to construct a potential function that, when combined with the stability characterization and an asymptotic saddle-avoidance result, gives a global, almost-sure asymptotic convergence guarantee to the set of strict local minmax equilibria.
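
As a rough, self-contained sketch (ours, not the authors' code) of the timescale-separated gradient descent-ascent dynamics this result concerns: the maximizing player moves on a faster timescale than the minimizing player. The quadratic objective `f`, step size `eta`, and timescale ratio `tau` below are illustrative choices.

```python
import numpy as np

# Illustrative zero-sum objective f(x, y) = 0.5*x^2 + 2*x*y - 0.5*y^2,
# minimized over x and maximized over y; its minmax point is (0, 0).
def f_grad(x, y):
    return x + 2 * y, 2 * x - y  # (df/dx, df/dy)

def two_timescale_gda(x, y, eta=0.01, tau=10.0, steps=5000):
    """Gradient descent-ascent with timescale separation: the ascending
    player's effective step size is tau times the descending player's."""
    for _ in range(steps):
        gx, gy = f_grad(x, y)
        x -= eta * gx            # slow descent on x
        y += tau * eta * gy      # fast ascent on y
    return x, y

print(two_timescale_gda(1.0, 1.0))  # converges near the minmax point (0, 0)
```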

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective

no code implementations 27 Sep 2021 Adhyyan Narang, Vidya Muthukumar, Anant Sahai

We find that the learned model is susceptible to adversaries in an intermediate regime where classification generalizes but regression does not.
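
A toy numerical sketch (our illustration, not the authors' experiments) of why a linear model that classifies well can still be fragile: for a linear classifier, a small perturbation aligned with the learned weight vector flips the sign of the score. The dimensions and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized setup: n samples, d >> n features; labels from a 1-sparse signal.
n, d = 20, 500
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[0] = 1.0
y = np.sign(X @ w_true)

# Minimum-norm interpolator (what gradient descent finds for linear models).
w = np.linalg.pinv(X) @ y

x_test = rng.standard_normal(d)
score = w @ x_test
print("clean prediction:", np.sign(score))

# Smallest l2 perturbation that flips a linear classifier's sign:
# move against the decision along the weight direction.
eps = 1.1 * abs(score) / np.linalg.norm(w)
x_adv = x_test - np.sign(score) * eps * w / np.linalg.norm(w)
print("perturbation norm:", eps)
print("adversarial prediction:", np.sign(w @ x_adv))
```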

Multiplayer Performative Prediction: Learning in Decision-Dependent Games

no code implementations 10 Jan 2022 Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff

We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and the repeated (stochastic) gradient method.
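
As a toy sketch (our construction; the losses, shift model, and constants are illustrative) of the repeated stochastic gradient method in a decision-dependent game: each round, every player samples from a distribution that shifts with the joint decision and takes a gradient step on its own loss, and the iterates settle at a performatively stable point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two players with scalar decisions x = (x1, x2). Player i fits its decision
# to samples z_i whose mean shifts with the *other* player's decision
# (the decision-dependent / performative effect).
a, shift = np.array([1.0, -2.0]), 0.3

def sample(i, x):
    # z_i ~ N(a_i + shift * x_j, 1), where j is the other player
    return a[i] + shift * x[1 - i] + rng.standard_normal()

x = np.zeros(2)
for t in range(5000):
    eta = 1.0 / (t + 20)                      # decaying step size
    z = np.array([sample(i, x) for i in range(2)])
    x = x - eta * (x - z)                     # gradient of 0.5*(x_i - z_i)^2

# The performatively stable point solves x_i = a_i + shift * x_j:
stable = np.linalg.solve(np.eye(2) - shift * (1 - np.eye(2)), a)
print("learned:", x, " stable:", stable)
```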

Towards Sample-efficient Overparameterized Meta-learning

1 code implementation NeurIPS 2021 Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel

Specifically, for (1), we first show that learning the optimal representation coincides with the problem of designing a task-aware regularization to promote inductive bias.

Few-Shot Learning · Inductive Bias
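
A small illustration (our construction, not the paper's method or experiments) of "task-aware regularization as inductive bias": when a ridge regularizer's weighting matches the directions tasks actually use, few-shot estimation improves. The covariance `Sigma`, penalty `lam`, and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_new = 50, 15  # feature dim >> samples for the new task (few-shot)

# Task-parameter covariance, e.g. estimated from many earlier tasks; here
# tasks live mostly in the first 5 coordinates (the shared representation).
task_var = np.concatenate([np.ones(5), 1e-3 * np.ones(d - 5)])
Sigma = np.diag(task_var)

theta_new = rng.standard_normal(d) * np.sqrt(task_var)  # new task parameter
X = rng.standard_normal((n_new, d))
y = X @ theta_new + 0.1 * rng.standard_normal(n_new)

def weighted_ridge(X, y, Sigma, lam=1.0):
    """Ridge with a task-aware penalty ||Sigma^{-1/2} theta||^2: the
    regularizer encodes which directions tasks actually use."""
    A = X.T @ X + lam * np.linalg.inv(Sigma)
    return np.linalg.solve(A, X.T @ y)

theta_plain = weighted_ridge(X, y, np.eye(d))
theta_aware = weighted_ridge(X, y, Sigma)
print("plain ridge error:", np.linalg.norm(theta_plain - theta_new))
print("task-aware error: ", np.linalg.norm(theta_aware - theta_new))  # typically smaller
```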

Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback

no code implementations 7 Jul 2022 Adhyyan Narang, Omid Sadeghi, Lillian J. Ratliff, Maryam Fazel, Jeff Bilmes

At round $i$, a user $q$ with unknown utility $h_q$ arrives; the optimizer selects a new item to add to that user's current set $S_q$ and receives a noisy observation of the marginal gain.

Computational Efficiency · Movie Recommendation
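
To make the interaction protocol concrete, here is a toy sketch (ours; the UCB-style selection rule is a generic bandit heuristic, not the paper's algorithm, and the utility is submodular only, whereas the paper's BP setting adds a supermodular part): users arrive, the optimizer adds an unseen item to that user's set, and only a noisy marginal gain is observed.

```python
import numpy as np

rng = np.random.default_rng(3)
n_items, n_users, budget, T = 12, 4, 5, 300

# Illustrative monotone utility shared by users: weighted coverage. The item
# weights are unknown to the optimizer, which only sees noisy marginal gains.
weights = rng.uniform(0.1, 1.0, n_items)

sets = [set() for _ in range(n_users)]            # S_q for each user q
counts, means = np.zeros(n_items), np.zeros(n_items)

for t in range(1, T + 1):
    q = int(rng.integers(n_users))                # a user arrives
    avail = [i for i in range(n_items) if i not in sets[q]]
    if not avail or len(sets[q]) >= budget:
        continue
    # UCB estimate of each available item's marginal gain
    ucb = means[avail] + np.sqrt(2 * np.log(t) / np.maximum(counts[avail], 1))
    ucb[counts[avail] == 0] = np.inf              # try untested items first
    item = avail[int(np.argmax(ucb))]
    gain = weights[item] + 0.1 * rng.standard_normal()   # noisy marginal gain
    counts[item] += 1
    means[item] += (gain - means[item]) / counts[item]   # running average
    sets[q].add(item)

print([sorted(s) for s in sets])
```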
