Search Results for author: Jacob Abernethy

Found 36 papers, 3 papers with code

Extragradient Type Methods for Riemannian Variational Inequality Problems

no code implementations 25 Sep 2023 Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob Abernethy, Molei Tao

In the context of Euclidean space, it is established that the last iterates of both the extragradient (EG) and past extragradient (PEG) methods converge to the solution of monotone variational inequality problems at a rate of $O\left(\frac{1}{\sqrt{T}}\right)$ (Cai et al., 2022).
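For reference, the Euclidean extragradient step referred to here takes an exploratory gradient step to a midpoint and then updates using the operator evaluated there. A minimal sketch is below; the bilinear toy problem and all names are illustrative, not taken from the paper.

```python
import numpy as np

def extragradient(F, z0, eta=0.1, T=2000):
    """Euclidean extragradient (EG) for a monotone operator F.

    Each iteration extrapolates to a midpoint and then updates the
    iterate with the operator evaluated at that midpoint.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(T):
        z_mid = z - eta * F(z)      # exploratory (extrapolation) step
        z = z - eta * F(z_mid)      # update using the midpoint operator value
    return z

# Toy monotone operator from the bilinear saddle problem min_x max_y x*y:
# F(x, y) = (y, -x), whose unique solution is the origin.
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, z0=[1.0, 1.0]))  # last iterate approaches [0, 0]
```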

On the Robustness of Epoch-Greedy in Multi-Agent Contextual Bandit Mechanisms

no code implementations 15 Jul 2023 Yinglun Xu, Bhuvesh Kumar, Jacob Abernethy

Efficient learning in multi-armed bandit mechanisms such as pay-per-click (PPC) auctions typically involves three challenges: 1) inducing truthful bidding behavior (incentives), 2) personalizing to individual users (context), and 3) circumventing manipulations in click patterns (corruptions).

Randomized Quantization is All You Need for Differential Privacy in Federated Learning

no code implementations 20 Jun 2023 Yeojoon Youn, Zihao Hu, Juba Ziani, Jacob Abernethy

To the best of our knowledge, this is the first study that solely relies on randomized quantization without incorporating explicit discrete noise to achieve Rényi DP guarantees in Federated Learning systems.
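As background on the mechanism class the excerpt refers to, randomized quantization usually means rounding each coordinate to a nearby grid point at random so the result is unbiased; the privacy analysis in the paper is not reproduced here, and the grid, clipping range, and function names below are assumptions used only to illustrate the primitive.

```python
import numpy as np

def stochastic_round(x, levels=16, lo=-1.0, hi=1.0, rng=None):
    """Unbiased randomized quantization of a vector onto a uniform grid.

    After clipping to [lo, hi], each coordinate is rounded up or down to a
    neighbouring grid point with probabilities chosen so that E[q] = x.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    step = (hi - lo) / (levels - 1)
    scaled = (x - lo) / step                  # position in grid units
    floor = np.floor(scaled)
    p_up = scaled - floor                     # probability of rounding up
    q = floor + (rng.random(x.shape) < p_up)  # randomized rounding decision
    return lo + q * step

x = np.array([0.03, -0.41, 0.77])
print(stochastic_round(x))  # random grid points; unbiased for x in expectation
```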

Federated Learning · Quantization

On Riemannian Projection-free Online Learning

no code implementations 30 May 2023 Zihao Hu, Guanghui Wang, Jacob Abernethy

The projection operation is a critical component in a wide range of optimization algorithms, such as online gradient descent (OGD), for enforcing constraints and achieving optimal regret bounds.
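For a concrete picture of the projection step being discussed, the sketch below is plain Euclidean projected online gradient descent on the unit ball; it is background only, not the projection-free Riemannian method developed in the paper, and the step size and feasible set are arbitrary choices.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the unit ball {x : ||x||_2 <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def ogd(gradients, x0, eta=0.1):
    """Projected online gradient descent: after each gradient step,
    project the iterate back onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    for g in gradients:                          # one loss gradient per round
        x = project_unit_ball(x - eta * np.asarray(g))
    return x

# Example: linear losses f_t(x) = <c_t, x> over three rounds.
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(ogd(grads, x0=[0.0, 0.0]))
```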

A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks

no code implementations 26 May 2023 Jacob Abernethy, Alekh Agarwal, Teodor V. Marinov, Manfred K. Warmuth

We study the phenomenon of in-context learning (ICL) exhibited by large language models, where they can adapt to a new learning task, given a handful of labeled examples, without any explicit parameter optimization.

In-Context Learning · Retrieval

Minimizing Dynamic Regret on Geodesic Metric Spaces

no code implementations 17 Feb 2023 Zihao Hu, Guanghui Wang, Jacob Abernethy

In this paper, we consider the sequential decision problem where the goal is to minimize the general dynamic regret on a complete Riemannian manifold.
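As a reminder of the benchmark being generalized, in the familiar Euclidean setting the general dynamic regret of decisions $x_1,\dots,x_T$ against an arbitrary comparator sequence $u_1,\dots,u_T$ is

$$\mathrm{D\text{-}Reg}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),$$

with static regret recovered as the special case $u_1 = \cdots = u_T$; the paper carries this notion over to decisions and comparators living on a complete Riemannian manifold.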

Open-Ended Question Answering

On Accelerated Perceptrons and Beyond

no code implementations 17 Oct 2022 Guanghui Wang, Rafael Hanashiro, Etash Guha, Jacob Abernethy

The classical Perceptron algorithm of Rosenblatt can be used to find a linear threshold function to correctly classify $n$ linearly separable data points, assuming the classes are separated by some margin $\gamma > 0$.
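The classical algorithm mentioned here is compact enough to state in full; the sketch below is the textbook mistake-driven Perceptron, with its standard $(R/\gamma)^2$ mistake bound noted in a comment, not the accelerated variants the paper develops.

```python
import numpy as np

def perceptron(X, y, max_passes=100):
    """Rosenblatt's Perceptron for linearly separable data.

    X: (n, d) data matrix; y: labels in {-1, +1}.
    Whenever a point is misclassified, add y_i * x_i to the weights.
    With margin gamma > 0, the number of mistakes is at most (R / gamma)^2,
    where R bounds the norm of the data points.
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                mistakes += 1
        if mistakes == 0:                 # a separating hyperplane was found
            return w
    return w

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(perceptron(X, y))
```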

Adaptive Oracle-Efficient Online Learning

no code implementations 17 Oct 2022 Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy

The classical algorithms for online learning and decision-making achieve optimal performance guarantees, but suffer from computational limitations when implemented at scale.

Decision Making

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization

no code implementations 22 Nov 2021 Jun-Kun Wang, Jacob Abernethy, Kfir Y. Levy

We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.

Escaping Saddle Points Faster with Stochastic Momentum

no code implementations ICLR 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

At the same time, a widely observed empirical phenomenon is that in training deep networks stochastic momentum appears to significantly improve convergence time; variants of it have flourished in the development of other popular update methods, e.g., ADAM [KB15], AMSGrad [RKK18], etc.

Open-Ended Question Answering · Stochastic Optimization

A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness

no code implementations 1 Mar 2021 Jacob Abernethy, Pranjal Awasthi, Satyen Kale

This apparent lack of robustness has led researchers to propose methods that can help to prevent an adversary from having such capabilities.

Adversarial Robustness · Object Recognition

Linear Separation via Optimism

no code implementations 17 Nov 2020 Rafael Hanashiro, Jacob Abernethy

Binary linear classification has been explored since the very early days of the machine learning literature.

Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization

no code implementations 4 Oct 2020 Jun-Kun Wang, Jacob Abernethy

The Heavy Ball Method, proposed by Polyak over five decades ago, is a first-order method for optimizing continuous functions.
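For readers who want the update itself, Polyak's heavy-ball method augments a gradient step with a momentum term built from the previous displacement; the quadratic objective and parameter values in this sketch are illustrative only.

```python
import numpy as np

def heavy_ball(grad, x0, eta=0.1, beta=0.9, T=200):
    """Polyak's Heavy Ball Method:
        x_{k+1} = x_k - eta * grad(x_k) + beta * (x_k - x_{k-1}).
    The beta * (x_k - x_{k-1}) term carries momentum from the previous step.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(T):
        x_next = x - eta * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Toy strongly convex quadratic f(x) = 0.5 * (x_1^2 + 10 * x_2^2).
grad = lambda x: np.array([1.0, 10.0]) * x
print(heavy_ball(grad, x0=[5.0, 5.0]))  # approaches the minimizer [0, 0]
```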

Retrieval

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network

no code implementations 4 Oct 2020 Jun-Kun Wang, Chi-Heng Lin, Jacob Abernethy

Our result shows that, with the appropriate choice of parameters, Polyak's momentum has a convergence rate of $(1-\Theta(\frac{1}{\sqrt{\kappa'}}))^t$.
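For context, the classical analysis of Polyak's momentum on an $L$-smooth, $\mu$-strongly convex quadratic with condition number $\kappa = L/\mu$ uses the tuning

$$\eta = \frac{4}{(\sqrt{L} + \sqrt{\mu})^{2}}, \qquad \beta = \left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{2},$$

which yields a contraction of roughly $\left(1 - \frac{2}{\sqrt{\kappa}+1}\right)^t$, i.e. the same $\left(1 - \Theta(1/\sqrt{\kappa})\right)^t$ accelerated form; the quantity $\kappa'$ in the excerpt presumably plays the role of an effective condition number in the paper's setting.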

Online Kernel based Generative Adversarial Networks

no code implementations 19 Jun 2020 Yeojoon Youn, Neil Thistlethwaite, Sang Keun Choe, Jacob Abernethy

We propose a novel approach that resolves many of these issues by relying on a kernel-based non-parametric discriminator that is highly amenable to online training; we call this the Online Kernel-based Generative Adversarial Network (OKGAN).

Generative Adversarial Network

Active Sampling for Min-Max Fairness

1 code implementation 11 Jun 2020 Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang

We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization.

Fairness · regression

Competing Against Equilibria in Zero-Sum Games with Evolving Payoffs

1 code implementation 17 Jul 2019 Adrian Rivera Cardoso, Jacob Abernethy, He Wang, Huan Xu

Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program.
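The LP reduction mentioned in this sentence is standard for a fixed payoff matrix: the row player maximizes the game value $v$ subject to every column response yielding at least $v$. A minimal sketch using scipy is below; the matrix and names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Row player's maximin mixed strategy for payoff matrix A (row maximizes).

    LP: maximize v subject to A^T x >= v * 1, sum(x) = 1, x >= 0.
    linprog minimizes, so we minimize -v over variables (x_1, ..., x_m, v).
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                      # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])                            # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]         # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                       # strategy, game value

# Matching pennies: the unique equilibrium is the uniform strategy, value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(solve_zero_sum(A))
```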

Last-iterate convergence rates for min-max optimization

no code implementations ICLR 2020 Jacob Abernethy, Kevin A. Lai, Andre Wibisono

While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees.

Acceleration through Optimistic No-Regret Dynamics

no code implementations NeurIPS 2018 Jun-Kun Wang, Jacob Abernethy

In this paper we show that the technique can be enhanced to a rate of $O(1/T^2)$ by extending recent work [RS13, SALS15] that leverages optimistic learning to speed up equilibrium computation.
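The optimistic ingredient referenced here is the standard trick from the cited works of letting each online player act on a guess $m_{t+1}$ of its next loss gradient, typically the most recently seen gradient; in optimistic online gradient descent the update reads

$$\hat{x}_{t+1} = \Pi_{\mathcal{X}}\!\left(\hat{x}_t - \eta\, g_t\right), \qquad x_{t+1} = \Pi_{\mathcal{X}}\!\left(\hat{x}_{t+1} - \eta\, m_{t+1}\right), \qquad m_{t+1} = g_t,$$

so that when successive gradients change slowly the guess is accurate and both interacting players' regrets shrink quickly, which is the mechanism behind the faster rate claimed above. (This is the generic optimistic update, not necessarily the exact scheme used in the paper.)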

ActiveRemediation: The Search for Lead Pipes in Flint, Michigan

no code implementations 10 Jun 2018 Jacob Abernethy, Alex Chojnacki, Arya Farahi, Eric Schwartz, Jared Webb

We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals.

Faster Rates for Convex-Concave Games

no code implementations 17 May 2018 Jacob Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games.

Online Learning via the Differential Privacy Lens

no code implementations NeurIPS 2019 Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari

In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.

Multi-Armed Bandits

On Convergence and Stability of GANs

8 code implementations ICLR 2018 Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira

We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions.

Flint Water Crisis: Data-Driven Risk Assessment Via Residential Water Testing

no code implementations 30 Sep 2016 Jacob Abernethy, Cyrus Anderson, Chengyu Dai, Arya Farahi, Linh Nguyen, Adam Rauh, Eric Schwartz, Wenbo Shen, Guangsha Shi, Jonathan Stroud, Xinyu Tan, Jared Webb, Sheng Yang

In this analysis, we find that lead service lines are not the only factor that is predictive of the risk of lead contamination of water.

Fighting Bandits with a New Kind of Smoothness

no code implementations NeurIPS 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing.

Spectral Smoothing via Random Matrix Perturbations

no code implementations 10 Jul 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning.

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

no code implementations 9 Jul 2015 Jacob Abernethy, Elad Hazan

We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function.

Low-Cost Learning via Active Data Procurement

no code implementations 20 Feb 2015 Jacob Abernethy, Yi-Ling Chen, Chien-Ju Ho, Bo Waggoner

Our results in a sense parallel classic sample complexity guarantees, but with the key resource being money rather than quantity of data: With a budget constraint $B$, we give robust risk (predictive error) bounds on the order of $1/\sqrt{B}$.

Online Linear Optimization via Smoothing

no code implementations 23 May 2014 Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari

We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization.
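A concrete instance of perturbation-as-regularization is Follow the Perturbed Leader in the experts setting: each round, add fresh noise to the cumulative losses and follow the best-looking expert. The Gaussian noise and parameter choices below are one common option, shown only to illustrate the smoothing viewpoint the paper analyzes.

```python
import numpy as np

def ftpl_experts(loss_matrix, sigma=1.0, rng=None):
    """Follow the Perturbed Leader over K experts.

    Each round, add a fresh random perturbation to the cumulative losses and
    follow the expert that looks best; the perturbation smooths the otherwise
    unstable Follow-the-Leader rule, acting as an implicit regularizer.
    """
    rng = rng or np.random.default_rng(0)
    T, K = loss_matrix.shape
    cumulative = np.zeros(K)
    total_loss = 0.0
    for t in range(T):
        noise = sigma * rng.standard_normal(K)
        choice = np.argmin(cumulative + noise)   # perturbed leader
        total_loss += loss_matrix[t, choice]
        cumulative += loss_matrix[t]             # observe the full loss vector
    return total_loss, cumulative.min()          # algorithm loss vs. best expert

losses = np.random.default_rng(1).random((500, 5))
print(ftpl_experts(losses))
```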

Information Aggregation in Exponential Family Markets

no code implementations 22 Feb 2014 Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, Rahul Sami

We consider the design of prediction market mechanisms known as automated market makers.
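As background, the best-known automated market maker is Hanson's logarithmic market scoring rule (LMSR), in which prices are the gradient of a convex cost function of the outstanding shares. The snippet below shows only this basic LMSR, with the liquidity parameter b chosen arbitrarily, as a reference point for the exponential-family market makers studied in the paper.

```python
import numpy as np

def lmsr_cost(q, b=10.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)) over share vector q."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b=10.0):
    """Instantaneous prices are the gradient of C: a softmax of q / b,
    so they are positive and sum to one, like outcome probabilities."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

def trade_cost(q, delta, b=10.0):
    """Cost charged to a trader who buys the share bundle `delta`."""
    return lmsr_cost(np.asarray(q) + np.asarray(delta), b) - lmsr_cost(q, b)

q = np.zeros(3)                         # three outcomes, no shares sold yet
print(lmsr_prices(q))                   # uniform prices [1/3, 1/3, 1/3]
print(trade_cost(q, [5.0, 0.0, 0.0]))   # price of buying 5 shares of outcome 0
```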

Adaptive Market Making via Online Learning

no code implementations NeurIPS 2013 Jacob Abernethy, Satyen Kale

We consider the design of strategies for market making in a market like a stock, commodity, or currency exchange.

How to Hedge an Option Against an Adversary: Black-Scholes Pricing is Minimax Optimal

no code implementations NeurIPS 2013 Jacob Abernethy, Peter L. Bartlett, Rafael Frongillo, Andre Wibisono

We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor.

Minimax Optimal Algorithms for Unconstrained Linear Optimization

no code implementations NeurIPS 2013 Brendan McMahan, Jacob Abernethy

We design and analyze minimax-optimal algorithms for online linear optimization games where the player's choice is unconstrained.
