Search Results for author: Jacob Abernethy

Found 28 papers, 2 papers with code

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization

no code implementations22 Nov 2021 Jun-Kun Wang, Jacob Abernethy, Kfir Y. Levy

We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.

Escaping Saddle Points Faster with Stochastic Momentum

no code implementations ICLR 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

At the same time, a widely observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time, and variants of it have flourished in the development of other popular update methods, e.g. Adam [KB15], AMSGrad [RKK18], etc.

Stochastic Optimization

A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness

no code implementations1 Mar 2021 Jacob Abernethy, Pranjal Awasthi, Satyen Kale

This apparent lack of robustness has led researchers to propose methods that can help to prevent an adversary from having such capabilities.

Adversarial Robustness Object Recognition

Linear Separation via Optimism

no code implementations17 Nov 2020 Rafael Hanashiro, Jacob Abernethy

Binary linear classification has been explored since the very early days of the machine learning literature.

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network

no code implementations4 Oct 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

Our result shows that, with the appropriate choice of parameters, Polyak's momentum has a rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$.

Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization

no code implementations4 Oct 2020 Jun-Kun Wang, Jacob Abernethy

The Heavy Ball Method, proposed by Polyak over five decades ago, is a first-order method for optimizing continuous functions.
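The classical update behind this paper's analysis can be sketched in a few lines; the step sizes and the quadratic test function below are illustrative choices, not taken from the paper:

```python
# Heavy Ball (Polyak momentum) update:
#   x_{t+1} = x_t - alpha * grad f(x_t) + beta * (x_t - x_{t-1})

def heavy_ball(grad, x0, alpha=0.1, beta=0.9, steps=400):
    x_prev, x = x0, x0
    for _ in range(steps):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_star = heavy_ball(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

On this quadratic the iterates spiral into the minimizer at x = 3; the momentum term beta * (x - x_prev) is what distinguishes the method from plain gradient descent.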

Online Kernel based Generative Adversarial Networks

no code implementations19 Jun 2020 Yeojoon Youn, Neil Thistlethwaite, Sang Keun Choe, Jacob Abernethy

We propose a novel approach that resolves many of these issues by relying on a kernel-based non-parametric discriminator that is highly amenable to online training; we call this the Online Kernel-based Generative Adversarial Networks (OKGAN).

Active Sampling for Min-Max Fairness

no code implementations11 Jun 2020 Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang

We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model that is learned via loss minimization.

Fairness

Competing Against Equilibria in Zero-Sum Games with Evolving Payoffs

1 code implementation17 Jul 2019 Adrian Rivera Cardoso, Jacob Abernethy, He Wang, Huan Xu

Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program.

Last-iterate convergence rates for min-max optimization

no code implementations ICLR 2020 Jacob Abernethy, Kevin A. Lai, Andre Wibisono

While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees.

Acceleration through Optimistic No-Regret Dynamics

no code implementations NeurIPS 2018 Jun-Kun Wang, Jacob Abernethy

In this paper we show that the technique can be enhanced to a rate of $O(1/T^2)$ by extending recent work [RS13, SALS15] that leverages optimistic learning to speed up equilibrium computation.

online learning

ActiveRemediation: The Search for Lead Pipes in Flint, Michigan

no code implementations10 Jun 2018 Jacob Abernethy, Alex Chojnacki, Arya Farahi, Eric Schwartz, Jared Webb

We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals.

Faster Rates for Convex-Concave Games

no code implementations17 May 2018 Jacob Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games.

Online Learning via the Differential Privacy Lens

no code implementations NeurIPS 2019 Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari

In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.

Multi-Armed Bandits online learning

On Convergence and Stability of GANs

8 code implementations ICLR 2018 Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira

We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions.

Flint Water Crisis: Data-Driven Risk Assessment Via Residential Water Testing

no code implementations30 Sep 2016 Jacob Abernethy, Cyrus Anderson, Chengyu Dai, Arya Farahi, Linh Nguyen, Adam Rauh, Eric Schwartz, Wenbo Shen, Guangsha Shi, Jonathan Stroud, Xinyu Tan, Jared Webb, Sheng Yang

In this analysis, we find that lead service lines are not the only factor that is predictive of the risk of lead contamination of water.

Fighting Bandits with a New Kind of Smoothness

no code implementations NeurIPS 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing.

Spectral Smoothing via Random Matrix Perturbations

no code implementations10 Jul 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning.

online learning
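The smoothing idea can be sketched by Monte Carlo: replace lambda_max(A) with the expectation of lambda_max over a random symmetric Gaussian perturbation of A. The 2x2 matrix, noise scale, and sample count below are illustrative choices, not taken from the paper:

```python
import math
import random

def lambda_max_2x2(a, b, c):
    """Largest eigenvalue of the symmetric matrix [[a, b], [b, c]]."""
    return (a + c) / 2.0 + math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)

def smoothed_lambda_max(a, b, c, sigma=0.2, n_samples=5000, seed=0):
    """Monte Carlo estimate of E[lambda_max(A + sigma * G)], G symmetric Gaussian."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # symmetric Gaussian perturbation G = (Z + Z^T) / 2
        g11, g22 = rng.gauss(0, 1), rng.gauss(0, 1)
        g12 = (rng.gauss(0, 1) + rng.gauss(0, 1)) / 2.0
        total += lambda_max_2x2(a + sigma * g11,
                                b + sigma * g12,
                                c + sigma * g22)
    return total / n_samples

exact = lambda_max_2x2(1.0, 0.0, 0.0)      # lambda_max(A) = 1.0
smooth = smoothed_lambda_max(1.0, 0.0, 0.0)
```

Since lambda_max is convex, Jensen's inequality implies the smoothed value is at least the exact one; the gap shrinks as sigma goes to 0, which is the usual smoothing trade-off.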

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

no code implementations9 Jul 2015 Jacob Abernethy, Elad Hazan

We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function.

Low-Cost Learning via Active Data Procurement

no code implementations20 Feb 2015 Jacob Abernethy, Yi-Ling Chen, Chien-Ju Ho, Bo Waggoner

Our results in a sense parallel classic sample complexity guarantees, but with the key resource being money rather than quantity of data: With a budget constraint $B$, we give robust risk (predictive error) bounds on the order of $1/\sqrt{B}$.

Online Linear Optimization via Smoothing

no code implementations23 May 2014 Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari

We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization.
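Perturbation as regularization can be sketched with the classic Follow-the-Perturbed-Leader scheme for prediction with expert advice; the loss sequence and exponential perturbation scale below are illustrative choices, not taken from the paper:

```python
import random

# Follow-the-Perturbed-Leader (FTPL): at each round, play the expert whose
# cumulative loss, minus an i.i.d. random perturbation, is smallest. The
# randomness stabilizes the "leader" and keeps regret low against an
# adversarial loss sequence.

def ftpl(loss_rounds, eta=1.0, seed=0):
    rng = random.Random(seed)
    n = len(loss_rounds[0])
    cum = [0.0] * n          # cumulative losses seen so far
    total_loss = 0.0
    for losses in loss_rounds:
        # perturb the cumulative losses, then follow the leader
        perturbed = [cum[i] - eta * rng.expovariate(1.0) for i in range(n)]
        choice = min(range(n), key=lambda i: perturbed[i])
        total_loss += losses[choice]
        for i in range(n):
            cum[i] += losses[i]
    return total_loss

# Two experts over 100 rounds; expert 0 is always better.
rounds = [[0.0, 1.0]] * 100
alg_loss = ftpl(rounds)
best_loss = 0.0  # loss of the best fixed expert in hindsight
```

After a few rounds the perturbation is rarely large enough to flip the leader, so the algorithm locks onto the better expert and its regret stays bounded.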

Information Aggregation in Exponential Family Markets

no code implementations22 Feb 2014 Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, Rahul Sami

We consider the design of prediction market mechanisms known as automated market makers.

Adaptive Market Making via Online Learning

no code implementations NeurIPS 2013 Jacob Abernethy, Satyen Kale

We consider the design of strategies for \emph{market making} in a market like a stock, commodity, or currency exchange.

online learning

Minimax Optimal Algorithms for Unconstrained Linear Optimization

no code implementations NeurIPS 2013 Brendan McMahan, Jacob Abernethy

We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained.

How to Hedge an Option Against an Adversary: Black-Scholes Pricing is Minimax Optimal

no code implementations NeurIPS 2013 Jacob Abernethy, Peter L. Bartlett, Rafael Frongillo, Andre Wibisono

We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor.

online learning
