Search Results for author: Adhyyan Narang

Found 7 papers, 1 paper with code

Interactive Combinatorial Bandits: Balancing Competitivity and Complementarity

no code implementations • 7 Jul 2022 • Adhyyan Narang, Omid Sadeghi, Lillian J Ratliff, Maryam Fazel, Jeff Bilmes

We are motivated by applications where there is a natural complementarity between certain elements: e.g., in a movie recommendation system, watching the first movie in a series complements the experience of watching the second (and the third, etc.).

Movie Recommendation

Towards Sample-efficient Overparameterized Meta-learning

1 code implementation • NeurIPS 2021 • Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel

Specifically, for (1), we first show that learning the optimal representation coincides with the problem of designing a task-aware regularization to promote inductive bias.

Few-Shot Learning Inductive Bias

Multiplayer Performative Prediction: Learning in Decision-Dependent Games

no code implementations • 10 Jan 2022 • Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff

We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and the repeated (stochastic) gradient method.
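The dynamics described in the abstract can be illustrated with a toy sketch. This is not the paper's multiplayer setup: it is a hypothetical single-player performative prediction problem with a location-shift decision-dependent distribution, chosen only to show what "repeated retraining" and the "repeated (stochastic) gradient method" mean and that both settle at a performatively stable point (here, the fixed point theta = E[z] under D(theta)).

```python
import numpy as np

# Hedged sketch, not the paper's model. Assumed toy setup:
#   data   z ~ N(mu + eps * theta, sigma^2)   (distribution depends on theta)
#   loss   l(theta; z) = 0.5 * (theta - z)^2
# A performatively stable point solves theta = E_{z ~ D(theta)}[z],
# which here gives theta* = mu / (1 - eps) for eps < 1.

rng = np.random.default_rng(0)
mu, eps, sigma = 1.0, 0.3, 0.1
theta_star = mu / (1 - eps)

def repeated_sgd(theta0=0.0, lr=0.05, steps=5000):
    """Repeated stochastic gradient: sample from D(theta_t), take one step."""
    theta = theta0
    for _ in range(steps):
        z = rng.normal(mu + eps * theta, sigma)  # decision-dependent sample
        theta -= lr * (theta - z)                # gradient of 0.5*(theta-z)^2
    return theta

def repeated_retraining(theta0=0.0, rounds=50, n=2000):
    """Repeated retraining: fully re-fit on a fresh dataset from D(theta_t)."""
    theta = theta0
    for _ in range(rounds):
        z = rng.normal(mu + eps * theta, sigma, size=n)
        theta = z.mean()  # exact minimizer of the empirical square loss
    return theta

print(repeated_sgd(), repeated_retraining(), theta_star)
```

Both procedures contract toward theta* because the decision-dependence is mild (eps < 1), matching the abstract's "mild assumptions" caveat; with strong feedback the fixed point can move or the iterates can fail to converge.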

Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games

no code implementations • NeurIPS 2021 • Tanner Fiez, Lillian Ratliff, Eric Mazumdar, Evan Faulkner, Adhyyan Narang

For the class of nonconvex-PL zero-sum games, we exploit timescale separation to construct a potential function that, when combined with the stability characterization and an asymptotic saddle-avoidance result, yields a global, asymptotic, almost-sure convergence guarantee to the set of strict local minmax equilibria.

Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective

no code implementations • 27 Sep 2021 • Adhyyan Narang, Vidya Muthukumar, Anant Sahai

We find that the learned model is susceptible to adversaries in an intermediate regime where classification generalizes but regression does not.
