Search Results for author: Romain Egele

Found 9 papers, 1 paper with code

Optimizing Distributed Training on Frontier for Large Language Models

no code implementations • 20 Dec 2023 • Sajal Dash, Isaac Lyngaas, Junqi Yin, Xiao Wang, Romain Egele, Guojing Cong, Feiyi Wang, Prasanna Balaprakash

For the training of the 175 Billion parameter model and the 1 Trillion parameter model, we achieved 100% weak scaling efficiency on 1024 and 3072 MI250X GPUs, respectively.

Computational Efficiency
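For reference, weak scaling holds the per-GPU workload fixed as GPUs are added, so efficiency is the baseline runtime divided by the scaled runtime. A minimal sketch of that standard calculation; the timings below are placeholders, not figures from the paper:

```python
def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Weak scaling efficiency: with per-GPU work held constant,
    the ratio of baseline wall time to scaled wall time."""
    return t_base / t_scaled

# 100% efficiency means runtime did not grow as GPUs were added.
print(weak_scaling_efficiency(3600.0, 3600.0))  # 1.0 -> 100%
```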

Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?

1 code implementation • 28 Jul 2023 • Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash

Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but can be computationally expensive.

Hyperparameter Optimization
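The question the title raises is whether a single training epoch is a reliable low-fidelity signal for ranking hyperparameter configurations. A hypothetical sketch of such a one-epoch screen; `train_one_epoch` and its toy objective are stand-ins, not the paper's code:

```python
import random

def train_one_epoch(config: dict) -> float:
    # Stand-in for one epoch of training followed by validation;
    # returns a score to maximize.
    return -(config["lr"] - 0.01) ** 2 + random.gauss(0, 1e-6)

# Sample candidate configurations, score each with one epoch, keep the best.
candidates = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(20)]
best_config = max(candidates, key=train_one_epoch)
print(best_config)
```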

Quantifying uncertainty for deep learning based forecasting and flow-reconstruction using neural architecture search ensembles

no code implementations • 20 Feb 2023 • Romit Maulik, Romain Egele, Krishnan Raghavan, Prasanna Balaprakash

We demonstrate the feasibility of this framework for two tasks: forecasting from historical data and flow reconstruction from sparse sensors, both for sea-surface temperature.

Bayesian Optimization • Decision Making +3
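As an illustration of ensemble-based uncertainty estimation in this setting, a common recipe is to run every ensemble member on the same input and report the mean as the forecast and the spread as the uncertainty. A minimal sketch; the toy members below stand in for the NAS-discovered networks:

```python
import numpy as np

def ensemble_forecast(models, x):
    # Stack member predictions: shape (n_members, ...).
    preds = np.stack([m(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy linear members standing in for trained networks.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, std = ensemble_forecast(models, np.array([1.0, 2.0]))
print(mean, std)  # point forecast and per-output uncertainty
```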

HPC Storage Service Autotuning Using Variational-Autoencoder-Guided Asynchronous Bayesian Optimization

no code implementations • 3 Oct 2022 • Matthieu Dorier, Romain Egele, Prasanna Balaprakash, Jaehoon Koo, Sandeep Madireddy, Srinivasan Ramesh, Allen D. Malony, Rob Ross

Distributed data storage services tailored to specific applications have grown popular in the high-performance computing (HPC) community as a way to address I/O and storage challenges.

Bayesian Optimization • Transfer Learning

Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization

no code implementations • 1 Jul 2022 • Romain Egele, Isabelle Guyon, Venkatram Vishwanath, Prasanna Balaprakash

Bayesian optimization (BO) is a promising approach for hyperparameter optimization of deep neural networks (DNNs), where each model training can take minutes to hours.

Bayesian Optimization • Computational Efficiency +1
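To make the asynchronous idea concrete, here is a hedged sketch of a single worker's loop: it refits a surrogate on the shared evaluation history and proposes new points without waiting on other workers. The objective and the random-candidate acquisition maximization are toy stand-ins, not the paper's method:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return -(x - 0.3) ** 2  # stand-in for an expensive model training

history_X, history_y = [[0.0]], [objective(0.0)]
for _ in range(10):
    # Refit the surrogate on everything evaluated so far.
    gp = GaussianProcessRegressor().fit(history_X, history_y)
    # Maximize an upper-confidence-bound acquisition over random candidates.
    cand = np.random.uniform(0.0, 1.0, size=(256, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = float(cand[np.argmax(mu + 1.96 * sigma), 0])
    history_X.append([x_next])
    history_y.append(objective(x_next))
print(max(history_y))
```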

AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification

no code implementations • 26 Oct 2021 • Romain Egele, Romit Maulik, Krishnan Raghavan, Bethany Lusch, Isabelle Guyon, Prasanna Balaprakash

However, building ensembles of neural networks is a challenging task because, in addition to choosing the right neural architecture or hyperparameters for each member of the ensemble, there is an added cost of training each model.

Uncertainty Quantification
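One standard way such an ensemble yields uncertainty estimates, and plausibly what "uncertainty quantification" refers to here, is the law of total variance: if each member outputs a predictive mean and variance, the aleatoric part is the average of the member variances and the epistemic part is the variance of the member means. A sketch with placeholder numbers:

```python
import numpy as np

# Hypothetical per-member predictions for one test point.
member_means = np.array([1.9, 2.1, 2.0])
member_vars = np.array([0.04, 0.05, 0.03])

aleatoric = member_vars.mean()   # average data-noise estimate
epistemic = member_means.var()   # disagreement between members
total = aleatoric + epistemic    # law of total variance
print(aleatoric, epistemic, total)
```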

AgEBO-Tabular: Joint Neural Architecture and Hyperparameter Search with Autotuned Data-Parallel Training for Tabular Data

no code implementations • 30 Oct 2020 • Romain Egele, Prasanna Balaprakash, Venkatram Vishwanath, Isabelle Guyon, Zhengying Liu

Neural architecture search (NAS) is an AutoML approach that generates and evaluates multiple neural network architectures concurrently and improves the accuracy of the generated models iteratively.

Bayesian Optimization • Neural Architecture Search
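A hedged sketch of the generate-evaluate-improve loop described above, using aging evolution as one concrete instantiation; the integer architecture encoding and `evaluate` are toy stand-ins, not the paper's search space:

```python
import random
from collections import deque

def evaluate(arch):
    return -sum((g - 2) ** 2 for g in arch)  # stand-in for train + validate

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.randint(0, 4)
    return child

# Fixed-size population; each step mutates a good member and ages out
# the oldest one.
population = deque([[random.randint(0, 4) for _ in range(3)] for _ in range(8)])
for _ in range(50):
    parent = max(random.sample(list(population), 3), key=evaluate)
    population.append(mutate(parent))
    population.popleft()
print(max(population, key=evaluate))
```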

Scalable Reinforcement-Learning-Based Neural Architecture Search for Cancer Deep Learning Research

no code implementations • 1 Sep 2019 • Prasanna Balaprakash, Romain Egele, Misha Salim, Stefan Wild, Venkatram Vishwanath, Fangfang Xia, Tom Brettin, Rick Stevens

Cancer is a complex disease, the understanding and treatment of which are being aided through increases in the volume of collected data and in the scale of deployed computing power.

Neural Architecture Search • reinforcement-learning +1
