Search Results for author: Greg Kochanski

Found 4 papers, 3 papers with code

Gradientless Descent: High-Dimensional Zeroth-Order Optimization

no code implementations · ICLR 2020 · Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, Qiuyi Zhang

Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.

Vocal Bursts Intensity Prediction
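The abstract above describes the zeroth-order setting: the optimizer may only query $f(x)$ at chosen points, with no gradient information. A minimal sketch of a gradient-free, ball-sampling method in that spirit (the specific radius schedule and acceptance rule here are simplified illustrations, not the paper's exact algorithm):

```python
import math
import random

def zeroth_order_minimize(f, x, num_steps=200, max_radius=1.0, min_radius=1e-3):
    """Minimize f using only function evaluations (no gradients).

    Each step samples a candidate at a random radius around the current
    point and moves only if the objective improves. This illustrates the
    ball-sampling idea behind gradientless descent; the schedule is a
    simplified stand-in for the paper's geometric radius search.
    """
    fx = f(x)
    for _ in range(num_steps):
        # Choose a radius log-uniformly between min_radius and max_radius.
        r = max_radius * (min_radius / max_radius) ** random.random()
        # Sample a random direction on the unit sphere, scaled by r.
        d = [random.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(v * v for v in d)) or 1.0
        y = [xi + r * di / norm for xi, di in zip(x, d)]
        fy = f(y)
        if fy < fx:  # accept only strict improvements
            x, fx = y, fy
    return x, fx
```

Each iteration costs one oracle evaluation, matching the access model the abstract describes.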

Towards Learning Universal Hyperparameter Optimizers with Transformers

1 code implementation · 26 May 2022 · Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas

Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.

Hyperparameter Optimization · Meta-Learning

Open Source Vizier: Distributed Infrastructure and API for Reliable and Flexible Blackbox Optimization

1 code implementation · 27 Jul 2022 · Xingyou Song, Sagi Perel, Chansoo Lee, Greg Kochanski, Daniel Golovin

Vizier is the de facto blackbox and hyperparameter optimization service across Google, having optimized some of Google's largest products and research efforts.

Hyperparameter Optimization · Transfer Learning
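A blackbox optimization service like Vizier is organized around a suggest/complete loop: clients request parameter suggestions, evaluate them, and report measurements back. The sketch below illustrates that loop with a toy random-search study; the class and method names are illustrative stand-ins, not Open Source Vizier's actual API.

```python
import random

class RandomSearchStudy:
    """Toy stand-in for a blackbox-optimization study.

    Mirrors the general suggest/complete pattern of services like Vizier,
    using plain random search as the suggestion strategy.
    """

    def __init__(self, bounds):
        self.bounds = bounds  # {param_name: (low, high)}
        self.trials = []      # completed (params, objective) pairs

    def suggest(self):
        # Random search: sample each parameter uniformly within its bounds.
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in self.bounds.items()}

    def complete(self, params, objective):
        # Report a finished trial back to the study.
        self.trials.append((params, objective))

    def best(self):
        # Best trial so far (minimization).
        return min(self.trials, key=lambda t: t[1])

# Usage: minimize a simple quadratic over two hyperparameters.
study = RandomSearchStudy({'lr': (1e-4, 1.0), 'momentum': (0.0, 1.0)})
for _ in range(50):
    params = study.suggest()
    loss = (params['lr'] - 0.1) ** 2 + (params['momentum'] - 0.9) ** 2
    study.complete(params, loss)
best_params, best_val = study.best()
```

In a real service the study state lives server-side, so many workers can run this loop concurrently against the same study.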
