Algorithm selection (AS) deals with selecting, from a fixed set of candidate algorithms, the algorithm most suitable for a specific instance of an algorithmic problem, e.g., choosing solvers for SAT problems.
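As a hedged illustration (the notation below is assumed, not taken from the source), the per-instance selection problem is usually formulated as follows: given a set of candidate algorithms \(\mathcal{A}\), an instance space \(\mathcal{I}\), and a performance measure \(m\), a selector is a mapping

    s : \mathcal{I} \to \mathcal{A}, \qquad s(i) \in \arg\min_{a \in \mathcal{A}} m(a, i),

where \(m(a, i)\) could be, for example, the runtime of SAT solver \(a\) on formula \(i\).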
Reducing its computational complexity from cubic to quadratic enables efficient strong scaling of Bayesian optimization while outperforming the previous approach in optimization accuracy.
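The excerpt does not say where the cubic cost comes from; assuming it refers to the Gaussian-process surrogate commonly used in Bayesian optimization (an assumption, not stated in the source), the cubic term arises from solving linear systems with the \(n \times n\) kernel matrix in the standard posterior equations

    \mu(x_*) = k_*^\top (K + \sigma_n^2 I)^{-1} y, \qquad
    \sigma^2(x_*) = k(x_*, x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*,

whose direct factorization costs \(\mathcal{O}(n^3)\) in the number of observations \(n\); a quadratic-cost method would have to avoid that factorization.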
Most machine learning methods require careful selection of hyperparameters in order to train a high-performing model with good generalization ability.
Despite the recent progress in hyperparameter optimization (HPO), available benchmarks that resemble real-world scenarios consist of only a few, very large problem instances that are expensive to solve.
Our results should greatly extend the applicability of SAEs in extracting latent dynamics from sparse, multidimensional data, such as neural population spiking activity.
The leading approaches in language modeling are all obsessed with TV shows of my youth - namely Transformers and Sesame Street.
FLO has strong anytime performance and significantly outperforms Bayesian optimization and random search for hyperparameter tuning on a large open-source AutoML Benchmark.
Tuning machine learning models at scale, especially finding the right hyperparameter values, can be difficult and time-consuming.
Recent advances in image-to-image translation have led to methods that can generate images for multiple domains through a single network.
Hyperparameter optimization for machine learning models is typically carried out via cross-validation or global optimization, both of which require running the learning algorithm numerous times.
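To illustrate why such procedures are expensive, here is a minimal sketch of grid search with k-fold cross-validation using scikit-learn; the estimator, hyperparameter grid, and dataset are placeholder choices, not taken from the source. Each of the |grid| x k fits is a full training run of the learning algorithm.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Hypothetical grid: 3 x 3 = 9 hyperparameter configurations.
    param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [1e-3, 1e-2, 1e-1]}

    # 5-fold cross-validation => 9 * 5 = 45 separate training runs.
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_, search.best_score_)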