no code implementations • 25 Sep 2023 • Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob Abernethy, Molei Tao
In Euclidean space, it is established that the last iterates of both the extragradient (EG) and past extragradient (PEG) methods converge to the solution of monotone variational inequality problems at a rate of $O\left(\frac{1}{\sqrt{T}}\right)$ (Cai et al., 2022).
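For reference, below is a minimal sketch of the classical Euclidean extragradient update for a monotone operator F; the step size and the bilinear example are illustrative choices and not code from the paper.

```python
import numpy as np

def extragradient_step(z, F, eta=0.1):
    """One extragradient (EG) step for a monotone operator F.

    z   : current iterate (numpy array)
    F   : callable returning F(z)
    eta : step size (assumed small enough for a Lipschitz, monotone F)
    """
    z_half = z - eta * F(z)        # extrapolation (prediction) step
    z_next = z - eta * F(z_half)   # update using the operator at the midpoint
    return z_next

# Example: bilinear saddle problem min_x max_y x*y, where F(x, y) = (y, -x)
F = lambda z: np.array([z[1], -z[0]])
z = np.array([1.0, 1.0])
for _ in range(1000):
    z = extragradient_step(z, F)
print(z)  # the last iterate approaches the solution (0, 0)
```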
no code implementations • 15 Jul 2023 • Yinglun Xu, Bhuvesh Kumar, Jacob Abernethy
Efficient learning in multi-armed bandit mechanisms such as pay-per-click (PPC) auctions typically involves three challenges: 1) inducing truthful bidding behavior (incentives), 2) incorporating personalization across users (context), and 3) circumventing manipulations in click patterns (corruptions).
no code implementations • 20 Jun 2023 • Yeojoon Youn, Zihao Hu, Juba Ziani, Jacob Abernethy
To the best of our knowledge, this is the first study that solely relies on randomized quantization without incorporating explicit discrete noise to achieve Renyi DP guarantees in Federated Learning systems.
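For intuition, here is a minimal sketch of unbiased randomized (stochastic) quantization; the uniform grid, clipping range, and parameter names are illustrative assumptions, not the paper's mechanism for achieving Renyi DP in federated learning.

```python
import numpy as np

def stochastic_quantize(x, levels=16, lo=-1.0, hi=1.0, rng=np.random.default_rng(0)):
    """Unbiased randomized quantization of x onto a uniform grid with `levels` points."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    step = (hi - lo) / (levels - 1)
    scaled = (x - lo) / step
    low = np.floor(scaled)
    prob_up = scaled - low                      # round up with this probability -> E[output] = x
    q = low + (rng.random(x.shape) < prob_up)
    return lo + q * step
```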
no code implementations • 30 May 2023 • Zihao Hu, Guanghui Wang, Jacob Abernethy
The projection operation is a critical component in a wide range of optimization algorithms, such as online gradient descent (OGD), for enforcing constraints and achieving optimal regret bounds.
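As an illustration of the projection step being discussed, here is a minimal projected online gradient descent (OGD) sketch; the Euclidean-ball feasible set and $1/\sqrt{t}$ step sizes are generic textbook choices, not the paper's construction.

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projected_ogd(grads, x0, radius=1.0):
    """Projected OGD: gradient step followed by projection back into the feasible set."""
    x = x0
    iterates = []
    for t, g in enumerate(grads, start=1):
        x = project_to_ball(x - g / np.sqrt(t), radius)
        iterates.append(x)
    return iterates
```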
no code implementations • 27 May 2023 • Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy
In contrast, generic optimization methods, such as mirror descent and steepest descent, have been shown to converge to maximal margin classifiers defined by alternative geometries.
no code implementations • 26 May 2023 • Jacob Abernethy, Alekh Agarwal, Teodor V. Marinov, Manfred K. Warmuth
We study the phenomenon of \textit{in-context learning} (ICL) exhibited by large language models, where they can adapt to a new learning task, given a handful of labeled examples, without any explicit parameter optimization.
no code implementations • 17 Feb 2023 • Zihao Hu, Guanghui Wang, Jacob Abernethy
In this paper, we consider the sequential decision problem where the goal is to minimize the general dynamic regret on a complete Riemannian manifold.
no code implementations • 17 Oct 2022 • Guanghui Wang, Rafael Hanashiro, Etash Guha, Jacob Abernethy
The classical Perceptron algorithm of Rosenblatt can be used to find a linear threshold function to correctly classify $n$ linearly separable data points, assuming the classes are separated by some margin $\gamma > 0$.
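For context, the classical Perceptron update on linearly separable data is sketched below (a textbook version, not the paper's variant); the classical mistake bound is $(R/\gamma)^2$ for data of radius $R$ and margin $\gamma$.

```python
import numpy as np

def perceptron(X, y, max_passes=1000):
    """Rosenblatt's Perceptron: X is (n, d), labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= 0:   # mistake: wrong side of (or on) the hyperplane
                w += y_i * x_i              # additive correction
                mistakes += 1
        if mistakes == 0:                   # all points correctly classified
            break
    return w
```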
no code implementations • 17 Oct 2022 • Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy
The classical algorithms for online learning and decision-making have the benefit of achieving optimal performance guarantees, but suffer from computational limitations when implemented at scale.
no code implementations • 22 Nov 2021 • Jun-Kun Wang, Jacob Abernethy, Kfir Y. Levy
We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.
no code implementations • ICLR 2020 • Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy
At the same time, a widely observed empirical phenomenon is that, in training deep networks, stochastic momentum appears to significantly improve convergence time, and variants of it have flourished in the development of other popular update methods, e.g., ADAM [KB15] and AMSGrad [RKK18].
no code implementations • 1 Mar 2021 • Jacob Abernethy, Pranjal Awasthi, Satyen Kale
This apparent lack of robustness has led researchers to propose methods that can help to prevent an adversary from having such capabilities.
no code implementations • 17 Nov 2020 • Rafael Hanashiro, Jacob Abernethy
Binary linear classification has been explored since the very early days of the machine learning literature.
no code implementations • 4 Oct 2020 • Jun-Kun Wang, Jacob Abernethy
Over-parametrization has become a popular technique in deep learning.
no code implementations • 4 Oct 2020 • Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy
Our result shows that, with the appropriate choice of parameters, Polyak's momentum has a rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$.
no code implementations • 4 Oct 2020 • Jun-Kun Wang, Jacob Abernethy
The Heavy Ball Method, proposed by Polyak over five decades ago, is a first-order method for optimizing continuous functions.
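For reference, the Heavy Ball (Polyak momentum) update takes the standard form sketched below; the step size, momentum parameter, and quadratic example are generic illustrative choices, not the ones analyzed in the papers above.

```python
import numpy as np

def heavy_ball(grad, x0, eta=0.01, beta=0.9, iters=500):
    """Polyak's Heavy Ball: x_{t+1} = x_t - eta * grad(x_t) + beta * (x_t - x_{t-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - eta * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize the quadratic 0.5 * x^T A x with condition number 10
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
print(heavy_ball(grad, np.array([5.0, 5.0])))  # converges toward the origin
```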
no code implementations • 19 Jun 2020 • Yeojoon Youn, Neil Thistlethwaite, Sang Keun Choe, Jacob Abernethy
We propose a novel approach that resolves many of these issues by relying on a kernel-based non-parametric discriminator that is highly amenable to online training; we call this the Online Kernel-based Generative Adversarial Network (OKGAN).
1 code implementation • 11 Jun 2020 • Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang
We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization.
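For intuition only, a generic reweighting loop for min-max (worst-group) objectives is sketched below: groups with higher loss are upweighted multiplicatively before refitting. The helper callables `fit` and `group_loss`, and all parameter choices, are hypothetical; this is not the paper's algorithm.

```python
import numpy as np

def minmax_reweighting(fit, group_loss, X, y, groups, rounds=20, eta=1.0):
    """Upweight the worst-off groups via multiplicative weights, then refit.

    fit(X, y, sample_weight) -> model; group_loss(model, X, y, mask) -> float.
    """
    group_ids = np.unique(groups)
    w = np.ones(len(group_ids)) / len(group_ids)
    for _ in range(rounds):
        sample_weight = np.zeros(len(y))
        for g, wg in zip(group_ids, w):
            sample_weight[groups == g] = wg
        model = fit(X, y, sample_weight)
        losses = np.array([group_loss(model, X, y, groups == g) for g in group_ids])
        w *= np.exp(eta * losses)              # put more weight on high-loss groups
        w /= w.sum()
    return model
```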
1 code implementation • 17 Jul 2019 • Adrian Rivera Cardoso, Jacob Abernethy, He Wang, Huan Xu
Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program.
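For illustration, the standard linear-programming reduction mentioned here can be written as below, using `scipy.optimize.linprog` to compute the row player's maximin strategy of a matrix game (this is the classical reduction, not the online/no-regret approach studied in the paper).

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Row player's maximin strategy for payoff matrix A (rows = row player's actions).

    Variables are (x_1, ..., x_n, v); maximize the game value v subject to
    (A^T x)_j >= v for every column j and x in the probability simplex.
    """
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # minimize -v  <=>  maximize v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])     # v - (A^T x)_j <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                   # equilibrium strategy, game value

# Matching pennies: equilibrium strategy is uniform, value 0
x, v = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(x, v)
```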
no code implementations • ICLR 2020 • Jacob Abernethy, Kevin A. Lai, Andre Wibisono
While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees.
no code implementations • NeurIPS 2018 • Jun-Kun Wang, Jacob Abernethy
In this paper we show that the technique can be enhanced to a rate of $O(1/T^2)$ by extending recent work \cite{RS13, SALS15} that leverages \textit{optimistic learning} to speed up equilibrium computation.
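As a rough illustration of the optimistic-learning idea referenced here (not the paper's exact algorithm), optimistic Hedge reuses the most recent loss vector as a prediction of the next one when forming its weights.

```python
import numpy as np

def optimistic_hedge(losses, eta=0.1):
    """Optimistic Hedge over n actions: weight by cumulative loss plus the last loss,
    the latter serving as a cheap prediction of the upcoming round."""
    n = losses[0].shape[0]
    cum = np.zeros(n)
    last = np.zeros(n)
    plays = []
    for loss in losses:
        logits = -eta * (cum + last)        # optimism: include the predicted loss
        p = np.exp(logits - logits.max())
        p /= p.sum()
        plays.append(p)
        cum += loss
        last = loss
    return plays
```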
no code implementations • 10 Jun 2018 • Jacob Abernethy, Alex Chojnacki, Arya Farahi, Eric Schwartz, Jared Webb
We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals.
no code implementations • 17 May 2018 • Jacob Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang
We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games.
no code implementations • NeurIPS 2019 • Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari
In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.
no code implementations • 5 Jul 2017 • Alex Chojnacki, Chengyu Dai, Arya Farahi, Guangsha Shi, Jared Webb, Daniel T. Zhang, Jacob Abernethy, Eric Schwartz
This is the nation's largest dataset on lead in a municipal water system.
8 code implementations • ICLR 2018 • Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions.
no code implementations • 30 Sep 2016 • Jacob Abernethy, Cyrus Anderson, Alex Chojnacki, Chengyu Dai, John Dryden, Eric Schwartz, Wenbo Shen, Jonathan Stroud, Laura Wendlandt, Sheng Yang, Daniel Zhang
Performing arts organizations aim to enrich their communities through the arts.
no code implementations • 30 Sep 2016 • Jacob Abernethy, Cyrus Anderson, Chengyu Dai, Arya Farahi, Linh Nguyen, Adam Rauh, Eric Schwartz, Wenbo Shen, Guangsha Shi, Jonathan Stroud, Xinyu Tan, Jared Webb, Sheng Yang
In this analysis, we find that lead service lines are not the only factor that is predictive of the risk of lead contamination of water.
no code implementations • NeurIPS 2015 • Jacob Abernethy, Chansoo Lee, Ambuj Tewari
We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing.
no code implementations • 10 Jul 2015 • Jacob Abernethy, Chansoo Lee, Ambuj Tewari
Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning.
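For context, a standard soft-max smoothing of the maximum eigenvalue (a textbook construction, not necessarily the one used in the paper) is $f_\mu(X) = \mu \log \operatorname{tr} \exp(X/\mu)$, which for a symmetric $d \times d$ matrix satisfies $\lambda_{\max}(X) \le f_\mu(X) \le \lambda_{\max}(X) + \mu \log d$ and is smooth with parameter on the order of $1/\mu$.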
no code implementations • 9 Jul 2015 • Jacob Abernethy, Elad Hazan
We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function.
no code implementations • 20 Feb 2015 • Jacob Abernethy, Yi-Ling Chen, Chien-Ju Ho, Bo Waggoner
Our results in a sense parallel classic sample complexity guarantees, but with the key resource being money rather than quantity of data: With a budget constraint $B$, we give robust risk (predictive error) bounds on the order of $1/\sqrt{B}$.
no code implementations • 23 May 2014 • Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari
We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization.
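For context, a minimal Follow-the-Perturbed-Leader sketch over experts is shown below, where the random perturbation plays the role of regularization; the Gumbel noise and learning rate here are illustrative choices, not the paper's specific perturbation scheme.

```python
import numpy as np

def ftpl(losses, eta=1.0, rng=np.random.default_rng(0)):
    """Follow-the-Perturbed-Leader over n experts: each round, play the expert
    minimizing cumulative loss minus a fresh random perturbation."""
    n = losses[0].shape[0]
    cum = np.zeros(n)
    plays = []
    for loss in losses:
        noise = rng.gumbel(size=n)             # perturbation acts as implicit regularization
        plays.append(int(np.argmin(cum - noise / eta)))
        cum += loss
    return plays
```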
no code implementations • 22 Feb 2014 • Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, Rahul Sami
We consider the design of prediction market mechanisms known as automated market makers.
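A canonical example of an automated market maker is Hanson's logarithmic market scoring rule (LMSR); the sketch below shows its cost-function form as background for this line of work (a standard example, not necessarily the mechanism designed in the paper).

```python
import numpy as np

def lmsr_cost(q, b=10.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)) over outstanding shares q."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_price(q, b=10.0):
    """Instantaneous prices are the gradient of C and form a probability vector."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

def trade_cost(q, delta, b=10.0):
    """Amount a trader pays to move outstanding shares from q to q + delta."""
    return lmsr_cost(np.asarray(q) + np.asarray(delta), b) - lmsr_cost(q, b)
```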
no code implementations • NeurIPS 2013 • Jacob Abernethy, Peter L. Bartlett, Rafael Frongillo, Andre Wibisono
We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor.
no code implementations • NeurIPS 2013 • Jacob Abernethy, Satyen Kale
We consider the design of strategies for \emph{market making} in a market like a stock, commodity, or currency exchange.
no code implementations • NeurIPS 2013 • Brendan Mcmahan, Jacob Abernethy
We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained.