no code implementations • 29 Nov 2023 • Juan Pablo García Amboage, Eric Wulff, Maria Girone, Tomás F. Pena
Hyperparameter Optimization (HPO) of Deep Learning-based models tends to be a compute-resource-intensive process, as it usually requires training the target model with many different hyperparameter configurations.
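A minimal sketch of why this is expensive: in the simplest form of HPO, every candidate configuration triggers a full training run. The search space, the `train_and_evaluate` stand-in, and the random-search strategy below are illustrative assumptions, not the method used in the paper.

```python
import random

# Hypothetical search space, purely for illustration; the paper does not
# specify the actual hyperparameters or ranges.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
    "dropout": [0.0, 0.1, 0.3],
}

def sample_config():
    """Draw one hyperparameter configuration uniformly at random."""
    return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

def train_and_evaluate(config):
    """Stand-in for the expensive step: training the target model with
    `config` and returning a validation score. Replaced here by a cheap
    synthetic score so the sketch runs end to end."""
    return -abs(config["learning_rate"] - 1e-3) - 0.001 * config["dropout"]

def random_search(n_trials=20):
    """Plain random search: every trial pays the full training cost,
    which is what makes naive HPO so resource intensive."""
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = sample_config()
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

if __name__ == "__main__":
    print(random_search())
```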
no code implementations • 27 Mar 2023 • Eric Wulff, Maria Girone, David Southwick, Juan Pablo García Amboage, Eduard Cuba
Training and Hyperparameter Optimization (HPO) of deep learning-based AI models are often compute-resource intensive and call for the use of large-scale distributed resources as well as scalable, resource-efficient hyperparameter search algorithms.
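The abstract does not name a specific search algorithm; as one example of what a resource-efficient scheme can look like, the sketch below implements a simple successive-halving loop, where poorly performing configurations are discarded early so that only the most promising ones receive the full training budget. All names and the toy evaluation function are assumptions for illustration, not the authors' method.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Illustrative successive halving: every configuration starts with a
    small training budget; after each round only the top 1/eta fraction
    survives and is re-trained with eta times more budget."""
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget), reverse=True)
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0], budget

if __name__ == "__main__":
    def toy_eval(lr, budget):
        # Synthetic score standing in for a partial training run:
        # learning rates closer to 1e-3 score higher at any budget.
        return -abs(lr - 1e-3) / budget

    # "Configurations" here are just candidate learning rates.
    candidates = [10 ** random.uniform(-5, -1) for _ in range(27)]
    print(successive_halving(candidates, toy_eval))
```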