Search Results for author: Chansoo Lee

Found 7 papers, 0 papers with code

Automatic prior selection for meta Bayesian optimization with a case study on tuning deep neural network optimizers

no code implementations 16 Sep 2021 Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zelda Mariet, Zack Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani

The performance of deep neural networks can be highly sensitive to the choice of a variety of meta-parameters, such as optimizer parameters and model hyperparameters.

Gradientless Descent: High-Dimensional Zeroth-Order Optimization

no code implementations ICLR 2020 Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, Qiuyi Zhang

Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.
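To make the oracle-access setting concrete, here is a minimal sketch of ball-sampling zeroth-order search in the spirit of gradientless descent: only function evaluations are used, candidate steps are drawn at radii from a geometric grid, and improvements are kept greedily. The function names, radii schedule, and step rule are illustrative assumptions, not the authors' exact GLD algorithm.

```python
import numpy as np

def zeroth_order_search(f, x0, n_steps=500, r_max=1.0, r_min=1e-4, seed=0):
    """Ball-sampling descent: uses only oracle evaluations of f, no gradients."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    # Geometric grid of candidate search radii (binary-search flavour).
    k = int(np.ceil(np.log2(r_max / r_min)))
    radii = r_max * 0.5 ** np.arange(k + 1)
    for _ in range(n_steps):
        r = rng.choice(radii)                      # pick a scale at random
        u = rng.normal(size=x.shape)
        y = x + r * u / np.linalg.norm(u)          # random direction, step of length r
        fy = f(y)
        if fy < fx:                                # greedy: keep improvements only
            x, fx = y, fy
    return x, fx

# Example: minimize a simple quadratic using function values alone.
f = lambda z: float(np.sum((np.asarray(z) - 3.0) ** 2))
x_best, f_best = zeroth_order_search(f, x0=np.zeros(5))
```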

Online Learning via the Differential Privacy Lens

no code implementations NeurIPS 2019 Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari

In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.

Multi-Armed Bandits

Fighting Bandits with a New Kind of Smoothness

no code implementations NeurIPS 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing.
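As an illustration of the perturbation/smoothing idea in the bandit setting (not the specific algorithms analyzed in the paper), the sketch below plays the arm that maximizes perturbed estimated performance. With Gumbel perturbations the induced play distribution is the softmax of the negated estimated losses, so exact importance weights are available for the unbiased loss estimate; the parameter `eta` and the loss-matrix interface are assumptions for the example.

```python
import numpy as np

def perturbed_leader_bandit(loss_matrix, eta=0.1, seed=0):
    """Adversarial bandit via perturbation smoothing of the argmax.

    Gumbel perturbations make the play distribution a closed-form softmax,
    which supplies the importance weights for unbiased loss estimates.
    """
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    est_loss = np.zeros(K)          # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        scores = -eta * est_loss
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                      # smoothed argmax distribution
        arm = rng.choice(K, p=probs)
        loss = loss_matrix[t, arm]
        total_loss += loss
        est_loss[arm] += loss / probs[arm]        # unbiased loss estimator
    return total_loss

# Example run on random losses in [0, 1].
rng = np.random.default_rng(1)
print(perturbed_leader_bandit(rng.random((1000, 5))))
```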

Hardness of Online Sleeping Combinatorial Optimization Problems

no code implementations NeurIPS 2016 Satyen Kale, Chansoo Lee, Dávid Pál

We show that several online combinatorial optimization problems that admit efficient no-regret algorithms become computationally hard in the sleeping setting where a subset of actions becomes unavailable in each round.

Combinatorial Optimization

Spectral Smoothing via Random Matrix Perturbations

no code implementations 10 Jul 2015 Jacob Abernethy, Chansoo Lee, Ambuj Tewari

Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning.
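A minimal sketch of the idea, assuming a symmetric Gaussian (GOE-style) perturbation and plain Monte Carlo averaging: the smoothed value is E[λ_max(X + c·G)], and averaging the outer products of the top eigenvectors gives a surrogate for its gradient. The paper's precise perturbation distributions and guarantees differ; this is only illustrative.

```python
import numpy as np

def smoothed_max_eig(X, c=0.1, n_samples=200, seed=0):
    """Monte Carlo estimate of E[lambda_max(X + c*G)] for symmetric X,
    with G a symmetric Gaussian perturbation. Also returns the average
    of u u^T over samples (u = top eigenvector), a gradient surrogate."""
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    vals = []
    grads = np.zeros_like(X)
    for _ in range(n_samples):
        A = rng.normal(size=(d, d))
        G = (A + A.T) / np.sqrt(2.0)          # symmetric Gaussian perturbation
        w, V = np.linalg.eigh(X + c * G)
        u = V[:, -1]                          # eigenvector of the largest eigenvalue
        vals.append(w[-1])
        grads += np.outer(u, u)
    return float(np.mean(vals)), grads / n_samples

# Example: smoothed maximum eigenvalue of a small symmetric matrix.
X = np.array([[2.0, 0.5], [0.5, 1.0]])
val, grad = smoothed_max_eig(X)
```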

Online Linear Optimization via Smoothing

no code implementations 23 May 2014 Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari

We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization.
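To show the perturbations-as-regularization view in its simplest form, here is a Follow-the-Perturbed-Leader sketch for prediction with expert advice (linear losses over the corners of the simplex), using exponential perturbations. The perturbation distribution, scale parameter, and interface are assumptions for illustration, not the paper's specific construction.

```python
import numpy as np

def follow_the_perturbed_leader(loss_vectors, eta=1.0, seed=0):
    """FTPL for the experts setting: each round, play the expert minimizing
    cumulative loss minus fresh noise (the perturbation acts as a regularizer)."""
    rng = np.random.default_rng(seed)
    T, K = loss_vectors.shape
    cum_loss = np.zeros(K)
    total = 0.0
    for t in range(T):
        noise = rng.exponential(scale=eta, size=K)   # fresh perturbation each round
        expert = int(np.argmin(cum_loss - noise))    # the perturbed leader
        total += loss_vectors[t, expert]
        cum_loss += loss_vectors[t]                  # full-information update
    return total

# Example on random expert losses.
rng = np.random.default_rng(2)
print(follow_the_perturbed_leader(rng.random((1000, 10))))
```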
