no code implementations • 11 Dec 2024 • Zaid Abulawi, Rui Hu, Prasanna Balaprakash, Yang Liu
Accurate predictions and uncertainty quantification (UQ) are essential for decision-making in risk-sensitive fields such as system safety modeling.
no code implementations • 19 Oct 2024 • Julian Barra, Shayan Shahbazi, Anthony Birri, Rajni Chahal, Ibrahim Isah, Muhammad Nouman Anwar, Tyler Starkus, Prasanna Balaprakash, Stephen Lam
Optimally designing molten salt applications requires knowledge of their thermophysical properties, but existing databases are incomplete, and experiments are challenging.
1 code implementation • 24 Jul 2024 • Hongwei Jin, George Papadimitriou, Krishnan Raghavan, Pawel Zuk, Prasanna Balaprakash, Cong Wang, Anirban Mandal, Ewa Deelman
Anomaly detection in computational workflows is critical for ensuring system reliability and security.
no code implementations • 12 Jun 2024 • Massimiliano Lupo Pasini, Jong Youl Choi, Kshitij Mehta, Pei Zhang, David Rogers, Jonghyun Bae, Khaled Z. Ibrahim, Ashwin M. Aji, Karl W. Schulz, Jorda Polo, Prasanna Balaprakash
The HydraGNN architecture enables the GFM to achieve near-linear strong scaling performance using more than 2,000 GPUs on Perlmutter and 16,000 GPUs on Frontier.
1 code implementation • 16 May 2024 • Hongwei Jin, Prasanna Balaprakash, Allen Zou, Pieter Ghysels, Aditi S. Krishnapriyan, Adam Mate, Arthur Barnes, Russell Bent
The threat of geomagnetic disturbances (GMDs) to the reliable operation of the bulk energy system has spurred the development of effective strategies for mitigating their impacts.
no code implementations • 23 Apr 2024 • Xiao Wang, Siyan Liu, Aristeidis Tsaris, Jong-Youl Choi, Ashwin Aji, Ming Fan, Wei Zhang, Junqi Yin, Moetasim Ashfaq, Dan Lu, Prasanna Balaprakash
As the largest model of its kind, ORBIT surpasses the size of current climate AI foundation models by a thousandfold.
no code implementations • 16 Apr 2024 • Adarsha Balaji, Ramyad Hadidi, Gregory Kollmer, Mohammed E. Fouda, Prasanna Balaprakash
Our NAS and HPS of (1) BraggNN achieves a 31.03% improvement in Bragg peak detection accuracy with an 87.57% reduction in model size, and (2) PtychoNN achieves a 16.77% improvement in model accuracy and a 12.82% reduction in model size when compared to the baseline PtychoNN model.
no code implementations • 15 Apr 2024 • Romain Egele, Julio C. S. Jacques Junior, Jan N. van Rijn, Isabelle Guyon, Xavier Baró, Albert Clapés, Prasanna Balaprakash, Sergio Escalera, Thomas Moeslund, Jun Wan
Initially, we outline the tasks involved in dataset development and offer insights into their effective management (including requirements, design, implementation, evaluation, distribution, and maintenance).
1 code implementation • 7 Apr 2024 • Yixuan Sun, Ololade Sowunmi, Romain Egele, Sri Hari Krishna Narayanan, Luke Van Roekel, Prasanna Balaprakash
The experimental results show that the optimal set of hyperparameters enhanced model performance in single-timestep forecasting and greatly exceeded the baseline configuration in the autoregressive rollout for long-horizon forecasting up to 30 days.
no code implementations • 5 Apr 2024 • Romain Egele, Felix Mohr, Tom Viering, Prasanna Balaprakash
To reach high performance with deep learning, hyperparameter optimization (HPO) is essential.
2 code implementations • 9 Jan 2024 • Thomas Randall, Jaehoon Koo, Brice Videau, Michael Kruse, Xingfu Wu, Paul Hovland, Mary Hall, Rong Ge, Prasanna Balaprakash
We introduce the first generative TL-based autotuning approach based on the Gaussian copula (GC) to model the high-performing regions of the search space from prior data and then generate high-performing configurations for new tasks.
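For intuition, here is a minimal sketch of Gaussian-copula modeling and sampling in this spirit; the function names, rank-based fitting, and empirical-quantile marginals are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: fit a Gaussian copula to high-performing configurations
# from prior tuning tasks, then sample candidate configurations for a new task.
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """X: (n_samples, n_params) high-performing configs from prior tasks."""
    n, d = X.shape
    # Map each marginal to uniforms via empirical ranks, then to normals.
    U = stats.rankdata(X, axis=0) / (n + 1)
    Z = stats.norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)          # copula correlation matrix

def sample_gaussian_copula(X, corr, n_new, rng=None):
    """Draw configs whose marginals match X and whose dependence follows corr."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = X.shape[1]
    Z = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    U = stats.norm.cdf(Z)
    # Map uniforms back through each parameter's empirical quantile function.
    return np.column_stack([np.quantile(X[:, j], U[:, j]) for j in range(d)])
```

The copula separates each parameter's marginal distribution from the dependence structure, which is what allows high-performing regions learned on prior tasks to be transferred to new ones.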
no code implementations • 20 Dec 2023 • Sajal Dash, Isaac Lyngaas, Junqi Yin, Xiao Wang, Romain Egele, Guojing Cong, Feiyi Wang, Prasanna Balaprakash
For the training of the 175-billion-parameter model and the 1-trillion-parameter model, we achieved 100% weak scaling efficiency on 1024 and 3072 MI250X GPUs, respectively.
no code implementations • 6 Oct 2023 • Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens
In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences.
no code implementations • 2 Oct 2023 • Hongwei Jin, Krishnan Raghavan, George Papadimitriou, Cong Wang, Anirban Mandal, Ewa Deelman, Prasanna Balaprakash
To address this problem, we introduce an autoencoder-driven self-supervised learning (SSL) approach that learns a summary statistic from unlabeled workflow data and estimates the normal behavior of the computational workflow in the latent space.
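A minimal PyTorch sketch of this recipe; dimensions, architecture, and the reconstruction-error score are placeholders rather than the paper's model:

```python
# Train an autoencoder on unlabeled workflow features, then flag executions
# whose reconstruction error deviates from the normal behavior seen in training.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, d_in=32, d_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 16), nn.ReLU(), nn.Linear(16, d_in))
    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)   # per-sample reconstruction error

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_train = torch.randn(256, 32)                     # placeholder workflow features
for _ in range(100):                               # self-supervised: input is its own target
    opt.zero_grad()
    loss = ((model(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()
threshold = anomaly_scores(model, x_train).quantile(0.99)  # normal-behavior cutoff
```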
no code implementations • 26 Sep 2023 • Romain Egele, Tyler Chang, Yixuan Sun, Venkatram Vishwanath, Prasanna Balaprakash
Machine learning (ML) methods offer a wide range of configurable hyperparameters that have a significant influence on their performance.
no code implementations • 12 Sep 2023 • Pedro Valero-Lara, Alexis Huante, Mustafa Al Lail, William F. Godoy, Keita Teranishi, Prasanna Balaprakash, Jeffrey S. Vetter
We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl).
no code implementations • 8 Aug 2023 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
The ability to learn continuously from an incoming data stream without catastrophic forgetting is critical to designing intelligent systems.
1 code implementation • 28 Jul 2023 • Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash
Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but can be computationally expensive.
2 code implementations • 19 Jul 2023 • Shengli Jiang, Shiyi Qin, Reid C. Van Lehn, Prasanna Balaprakash, Victor M. Zavala
To that end, we introduce AutoGNNUQ, an automated uncertainty quantification (UQ) approach for molecular property prediction.
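As a rough illustration of ensemble-based UQ of the kind used for molecular property prediction, here is a generic mean-variance decomposition; this is a common pattern, not necessarily AutoGNNUQ's exact estimator:

```python
# Each ensemble member predicts a mean and a variance per molecule; the
# ensemble separates aleatoric (data noise) from epistemic (model) uncertainty.
import numpy as np

def ensemble_uq(member_means, member_vars):
    """member_means, member_vars: arrays of shape (n_members, n_molecules)."""
    mean = member_means.mean(axis=0)
    aleatoric = member_vars.mean(axis=0)           # average predicted noise
    epistemic = member_means.var(axis=0)           # disagreement across members
    return mean, aleatoric, epistemic              # total variance = aleatoric + epistemic
```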
no code implementations • 27 Jun 2023 • William F. Godoy, Pedro Valero-Lara, Keita Teranishi, Prasanna Balaprakash, Jeffrey S. Vetter
We evaluate AI-assisted generative capabilities on fundamental numerical kernels in high-performance computing (HPC), including AXPY, GEMV, GEMM, SpMV, Jacobi Stencil, and CG.
no code implementations • 19 May 2023 • Krishnan Raghavan, Prasanna Balaprakash
However, the literature is quite sparse when the data corresponding to a CL task is non-Euclidean, that is, data such as graphs, point clouds, or manifolds, where the notion of similarity in the sense of a Euclidean metric does not hold.
1 code implementation • 28 Mar 2023 • Xingfu Wu, Prasanna Balaprakash, Michael Kruse, Jaehoon Koo, Brice Videau, Paul Hovland, Valerie Taylor, Brad Geltz, Siddhartha Jana, Mary Hall
As we enter the exascale computing era, efficiently utilizing power and optimizing the performance of scientific applications under power and energy constraints has become critical and challenging.
no code implementations • 15 Mar 2023 • Lele Luan, Nesar Ramachandra, Sandipp Krishnan Ravi, Anindya Bhaduri, Piyush Pandita, Prasanna Balaprakash, Mihai Anitescu, Changjie Sun, Liping Wang
Modern computational methods, involving highly sophisticated mathematical formulations, enable several tasks, such as modeling complex physical phenomena, predicting key properties, and design optimization.
no code implementations • 20 Feb 2023 • Romit Maulik, Romain Egele, Krishnan Raghavan, Prasanna Balaprakash
We demonstrate the feasibility of this framework for two sea-surface temperature tasks: forecasting from historical data and flow reconstruction from sparse sensors.
no code implementations • 3 Feb 2023 • Tanwi Mallick, Joshua David Bergerson, Duane R. Verner, John K Hutchison, Leslie-Anne Levy, Prasanna Balaprakash
In comparison with a months-long process of subject-matter expert labeling, we assign category labels to the whole corpus using weak supervision and supervised learning in about 13 hours.
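To make the weak-supervision step concrete, here is a toy sketch with invented keyword labeling functions combined by majority vote; real pipelines use richer heuristics and a learned label model:

```python
# Several cheap, noisy labeling heuristics vote on each document; the
# aggregated labels then train a supervised classifier over the corpus.
import numpy as np

ABSTAIN = -1
def lf_flood(doc):  return 1 if "flood" in doc.lower() else ABSTAIN
def lf_power(doc):  return 2 if "outage" in doc.lower() else ABSTAIN
def lf_wind(doc):   return 3 if "hurricane" in doc.lower() else ABSTAIN

def weak_label(docs, lfs=(lf_flood, lf_power, lf_wind)):
    labels = []
    for doc in docs:
        votes = [lf(doc) for lf in lfs if lf(doc) != ABSTAIN]
        labels.append(np.bincount(votes).argmax() if votes else ABSTAIN)
    return labels

print(weak_label(["Hurricane-driven outage reported", "routine maintenance"]))
```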
no code implementations • 8 Oct 2022 • Sumegha Premchandar, Sandeep Madireddy, Sanket Jantre, Prasanna Balaprakash
To this end, we propose a Unified probabilistic architecture and weight ensembling Neural Architecture Search (UraeNAS) that leverages advances in probabilistic neural architecture search and approximate Bayesian inference to generate ensembles from the joint distribution of neural network architectures and weights.
no code implementations • 3 Oct 2022 • Matthieu Dorier, Romain Egele, Prasanna Balaprakash, Jaehoon Koo, Sandeep Madireddy, Srinivasan Ramesh, Allen D. Malony, Rob Ross
Distributed data storage services tailored to specific applications have grown popular in the high-performance computing (HPC) community as a way to address I/O and storage challenges.
1 code implementation • 27 Sep 2022 • Weiheng Zhong, Tanwi Mallick, Hadi Meidani, Jane Macfarlane, Prasanna Balaprakash
Moreover, most of the existing deep learning traffic forecasting models are black box, presenting additional challenges related to explainability and interpretability.
no code implementations • 1 Jul 2022 • Romain Egele, Isabelle Guyon, Venkatram Vishwanath, Prasanna Balaprakash
Bayesian optimization (BO) is a promising approach for hyperparameter optimization of deep neural networks (DNNs), where each model training can take minutes to hours.
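For readers unfamiliar with BO, a compact sketch of the basic loop, using a Gaussian-process surrogate and expected improvement on a toy one-dimensional objective; the paper's asynchronous, parallel variant is not shown:

```python
# Classic sequential BO: fit a GP to observed (hyperparameter, score) pairs,
# then pick the candidate maximizing expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
objective = lambda x: -(x - 0.3) ** 2              # stand-in for one model training
X = list(rng.uniform(0, 1, 3))
y = [objective(x) for x in X]
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X)[:, None], y)
    cand = rng.uniform(0, 1, 256)
    mu, sd = gp.predict(cand[:, None], return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sd, max(y)))]
    X.append(x_next); y.append(objective(x_next))  # evaluate and update
print(X[int(np.argmax(y))])                        # best hyperparameter found
```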
no code implementations • 10 Jun 2022 • Sami Khairy, Prasanna Balaprakash
The proposed estimator, which is based on the method of control variates, is used to design a multifidelity Monte Carlo RL (MFMCRL) algorithm that improves the learning of the agent in the high-fidelity environment.
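The control-variates idea can be sketched in a few lines; the return distributions below are synthetic stand-ins for high- and low-fidelity rollouts:

```python
# Reduce the variance of a costly high-fidelity return estimate using
# correlated low-fidelity rollouts whose mean can be estimated very cheaply.
import numpy as np

def control_variate_estimate(returns_hi, returns_lo_paired, returns_lo_many):
    """returns_hi, returns_lo_paired: paired samples from both fidelities;
    returns_lo_many: a large, cheap low-fidelity sample."""
    cov = np.cov(returns_hi, returns_lo_paired)
    c = -cov[0, 1] / cov[1, 1]                     # optimal CV coefficient
    mu_lo = returns_lo_many.mean()                 # near-exact low-fidelity mean
    return returns_hi.mean() + c * (returns_lo_paired.mean() - mu_lo)

rng = np.random.default_rng(1)
lo = rng.normal(1.0, 1.0, 10_000)                  # cheap rollouts
hi_paired = lo[:50] + rng.normal(0.2, 0.3, 50)     # correlated costly rollouts
print(control_variate_estimate(hi_paired, lo[:50], lo))
```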
no code implementations • 1 Jun 2022 • Sanket Jantre, Shrijita Bhattacharya, Nathan M. Urban, Byung-Jun Yoon, Tapabrata Maiti, Prasanna Balaprakash, Sandeep Madireddy
Additionally, while sparse subnetworks of dense models have shown promise in matching the performance of their dense counterparts and even enhancing robustness, existing methods for inducing sparsity typically incur training costs comparable to those of training a single dense model, as they either gradually prune the network during training or apply thresholding post-training.
no code implementations • 4 Apr 2022 • Tanwi Mallick, Prasanna Balaprakash, Jane Macfarlane
Our approach uses a scalable Bayesian optimization method to perform hyperparameter optimization, selects a set of high-performing configurations, fits a generative model to capture the joint distributions of the hyperparameter configurations, and trains an ensemble of models by sampling a new set of hyperparameter configurations from the generative model.
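Schematically, that recipe looks like the following sketch, with a Gaussian mixture standing in for the generative model; `train_fn` and the configuration encoding are assumptions, not the authors' API:

```python
# From an HPO history, keep the top configurations, fit a generative model to
# them, then sample fresh configurations and train one ensemble member each.
import numpy as np
from sklearn.mixture import GaussianMixture

def build_ensemble(history, train_fn, top_k=20, n_members=8, seed=0):
    """history: list of (config_vector, validation_score) from an HPO run."""
    X = np.array([h[0] for h in history])
    y = np.array([h[1] for h in history])
    top = X[np.argsort(y)[-top_k:]]                # high-performing configs
    gm = GaussianMixture(n_components=3, random_state=seed).fit(top)
    new_configs, _ = gm.sample(n_members)          # sample fresh configurations
    return [train_fn(cfg) for cfg in new_configs]  # train one model per config
```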
no code implementations • 29 Mar 2022 • Alec J. Linot, Joshua W. Burby, Qi Tang, Prasanna Balaprakash, Michael D. Graham, Romit Maulik
We present a data-driven modeling method that accurately captures shocks and chaotic dynamics by proposing a novel architecture, the stabilized neural ordinary differential equation (ODE).
no code implementations • 4 Mar 2022 • Anirban Samaddar, Sandeep Madireddy, Prasanna Balaprakash, Tapabrata Maiti, Gustavo de los Campos, Ian Fischer
In addition, it provides a mechanism for learning a joint distribution of the latent variable and the sparsity and hence can account for the complete uncertainty in the latent space.
no code implementations • 22 Feb 2022 • Sahil Bhola, Suraj Pawar, Prasanna Balaprakash, Romit Maulik
One key limitation of conventional DRL methods is their episode-hungry nature, which proves to be a bottleneck for tasks that involve costly evaluations of a numerical forward model.
no code implementations • 17 Dec 2021 • Yixuan Sun, Tanwi Mallick, Prasanna Balaprakash, Jane Macfarlane
To that end, we focus on a data-centric approach to improve the accuracy and reduce the false alarm rate of traffic incident detection on highways.
no code implementations • 20 Nov 2021 • Dominic Yang, Prasanna Balaprakash, Sven Leyffer
We consider nonlinear optimization problems that involve surrogate models represented by neural networks.
no code implementations • 26 Oct 2021 • Romain Egele, Romit Maulik, Krishnan Raghavan, Bethany Lusch, Isabelle Guyon, Prasanna Balaprakash
However, building ensembles of neural networks is a challenging task because, in addition to choosing the right neural architecture or hyperparameters for each member of the ensemble, there is an added cost of training each model.
no code implementations • NeurIPS 2021 • Krishnan Raghavan, Prasanna Balaprakash
We formulate the continual learning (CL) problem via dynamic programming and model the trade-off between catastrophic forgetting and generalization as a two-player sequential game.
1 code implementation • 28 Sep 2021 • YuDong Yao, Henry Chan, Subramanian Sankaranarayanan, Prasanna Balaprakash, Ross J. Harder, Mathew J. Cherukara
The problem of phase retrieval, or the algorithmic recovery of lost phase information from measured intensity alone, underlies various imaging methods from astronomy to nanoscale imaging.
no code implementations • 10 May 2021 • Jaehoon Koo, Prasanna Balaprakash, Michael Kruse, Xingfu Wu, Paul Hovland, Mary Hall
The search space exposed by the transformation pragmas is a tree, wherein each node represents a specific combination of loop transformations that can be applied to the code resulting from the parent node's loop transformations.
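A minimal sketch of such a search tree, where each node records the transformation sequence applied so far; node fields and transformation names are illustrative:

```python
# Each node holds the loop transformations applied so far; children extend
# the sequence, so a path from the root encodes one candidate code variant.
from dataclasses import dataclass, field

@dataclass
class Node:
    transforms: tuple = ()                         # e.g. ("tile(32)", "interchange")
    children: list = field(default_factory=list)

    def expand(self, options):
        self.children = [Node(self.transforms + (t,)) for t in options]
        return self.children

root = Node()
for child in root.expand(["tile(32)", "unroll(4)"]):
    child.expand(["interchange"])                  # deeper combinations
```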
1 code implementation • 27 Apr 2021 • Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Paul Hovland, Valerie Taylor, Mary Hall
In this paper, we develop a ytopt autotuning framework that leverages Bayesian optimization to explore the parameter search space, and we compare four different supervised learning methods within Bayesian optimization and evaluate their effectiveness.
no code implementations • 18 Apr 2021 • Fangfang Xia, Jonathan Allen, Prasanna Balaprakash, Thomas Brettin, Cristina Garcia-Cardona, Austin Clyde, Judith Cohn, James Doroshow, Xiaotian Duan, Veronika Dubinkina, Yvonne Evrard, Ya Ju Fan, Jason Gans, Stewart He, Pinyi Lu, Sergei Maslov, Alexander Partin, Maulik Shukla, Eric Stahlberg, Justin M. Wozniak, Hyunseung Yoo, George Zaki, Yitan Zhu, Rick Stevens
To provide a more rigorous assessment of model generalizability between different studies, we use machine learning to analyze five publicly available cell line-based data sets: NCI60, CTRP, GDSC, CCLE and gCSI.
no code implementations • 16 Feb 2021 • Jongeun Kim, Sven Leyffer, Prasanna Balaprakash
This problem is referred to as symbolic regression.
no code implementations • 2 Jan 2021 • Sami Khairy, Prasanna Balaprakash, Lin X. Cai, H. Vincent Poor
To enable a capacity-optimal network, a novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
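As a toy illustration of the stated objective, the sketch below tunes per-device transmission probabilities to maximize the geometric mean of expected capacities under an invented slotted-ALOHA-style capacity model; this is not the paper's formulation:

```python
# Maximizing the log of the geometric mean (a mean of logs) of per-device
# expected capacities, with capacity = rate * P(transmit) * P(no collision).
import numpy as np
from scipy.optimize import minimize

def neg_log_geo_mean(p, rates=np.array([1.0, 2.0, 4.0])):
    others = np.prod(1 - p) / (1 - p)              # prob. no other device transmits
    capacity = rates * p * others                  # expected per-device capacity
    return -np.mean(np.log(capacity + 1e-12))

res = minimize(neg_log_geo_mean, x0=np.full(3, 0.3), bounds=[(0.01, 0.99)] * 3)
print(res.x)                                       # tuned transmission probabilities
```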
no code implementations • 1 Jan 2021 • Krishnan Raghavan, Prasanna Balaprakash
Meta continual learning algorithms seek to train a model when faced with similar tasks observed in a sequential manner.
no code implementations • 30 Oct 2020 • Romain Egele, Prasanna Balaprakash, Venkatram Vishwanath, Isabelle Guyon, Zhengying Liu
Neural architecture search (NAS) is an AutoML approach that generates and evaluates multiple neural network architectures concurrently and improves the accuracy of the generated models iteratively.
no code implementations • 15 Oct 2020 • Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Paul Hovland, Valerie Taylor, Mary Hall
Autotuning is an approach that explores a search space of possible implementations/configurations of a kernel or an application by selecting and evaluating a subset of them on a target platform and/or using models to identify a high-performing implementation/configuration.
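In its simplest form, this amounts to the following toy loop: enumerate or sample configurations, time each variant on the target platform, and keep the fastest. The kernel and its tunable parameter here are purely illustrative:

```python
# A toy autotuner: exhaustively evaluate a small search space by measuring
# wall-clock time of each configuration and keeping the best one.
import itertools, time
import numpy as np

def kernel(n, block):                              # stand-in tunable kernel
    a = np.arange(n, dtype=np.float64)
    for i in range(0, n, block):
        a[i:i + block] *= 2.0
    return a

search_space = {"block": [64, 256, 1024, 4096]}
best = None
for cfg in itertools.product(*search_space.values()):
    params = dict(zip(search_space.keys(), cfg))
    t0 = time.perf_counter()
    kernel(1_000_000, **params)
    dt = time.perf_counter() - t0                  # measured on the target platform
    if best is None or dt < best[0]:
        best = (dt, params)
print("best configuration:", best[1])
```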
no code implementations • 28 Aug 2020 • Tanwi Mallick, Mariam Kiran, Bashir Mohammed, Prasanna Balaprakash
Wide area networking infrastructures (WANs), particularly science and research WANs, are the backbone for moving large volumes of scientific data between experimental facilities and data centers.
no code implementations • 27 Aug 2020 • Shengli Jiang, Prasanna Balaprakash
Neural architecture search (NAS) is a promising approach to discover high-performing neural network architectures automatically.
1 code implementation • 5 Aug 2020 • R. Krishnan, Prasanna Balaprakash
Meta continual learning algorithms seek to train a model when faced with similar tasks observed in a sequential manner.
no code implementations • 16 Jul 2020 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
Using high-performing configurations meta-learned in the single-task learning setting, we achieve superior continual learning performance on Split-MNIST and Split-CIFAR-10 data compared with other memory-constrained learning approaches, and match that of the state-of-the-art memory-intensive replay-based approaches.
no code implementations • ICML Workshop LifelongML 2020 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
We focus on the problem of how to achieve online continual learning under memory-constrained conditions where the input data may not be known a priori.
no code implementations • 7 May 2020 • Sami Khairy, Prasanna Balaprakash, Lin X. Cai
In this brief, we first prove that the optimization objective in the dual linear program of a finite CMDP is a piecewise-linear convex (PWLC) function with respect to the Lagrange penalty multipliers.
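The PWLC structure is easy to see in a toy example: the dual value at a multiplier lam is the maximum over finitely many policies of an affine function of lam, and a pointwise maximum of affine functions is piecewise linear and convex. The policy values below are invented:

```python
# Each deterministic policy contributes the line r - lam * (c - budget);
# the dual function is their pointwise maximum, hence PWLC in lam.
import numpy as np

policies = [(1.0, 0.9), (0.8, 0.5), (0.5, 0.1)]    # (expected reward, expected cost)
def dual(lam, budget=0.4):
    return max(r - lam * (c - budget) for r, c in policies)

lams = np.linspace(0, 5, 6)
print([round(dual(l), 3) for l in lams])           # convex, piecewise linear in lam
```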
no code implementations • 5 May 2020 • Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia
e-prop 1 is a promising learning algorithm that tackles this with Broadcast Alignment (a technique where network weights are replaced with random weights during feedback) and accumulated local information.
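The random-feedback idea can be sketched in plain NumPy, in the spirit of Broadcast Alignment: the backward pass routes the output error through a fixed random matrix B instead of the transpose of the forward weights. Sizes and task are illustrative:

```python
# Two-layer regression network trained with feedback alignment: the hidden
# error signal uses a fixed random B rather than W2.T.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(0, 0.1, (20, 10)), rng.normal(0, 0.1, (1, 20))
B = rng.normal(0, 0.1, (20, 1))                    # fixed random feedback weights
x, y = rng.normal(size=(10, 100)), rng.normal(size=(1, 100))
for _ in range(200):
    h = np.tanh(W1 @ x)                            # forward pass
    y_hat = W2 @ h
    e = y_hat - y                                  # output error
    dh = (B @ e) * (1 - h ** 2)                    # feedback via B, not W2.T
    W2 -= 0.01 * e @ h.T / x.shape[1]
    W1 -= 0.01 * dh @ x.T / x.shape[1]
```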
2 code implementations • 17 Apr 2020 • Tanwi Mallick, Prasanna Balaprakash, Eric Rask, Jane Macfarlane
To that end, we develop a new transfer learning approach for DCRNN, where a single model trained on data-rich regions of the highway network can be used to forecast traffic on unseen regions of the highway network.
no code implementations • 31 Jan 2020 • Sami Khairy, Prasanna Balaprakash, Lin X. Cai, Yu Cheng
In this paper, we apply the Non-Orthogonal Multiple Access (NOMA) technique to improve the massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.
no code implementations • 25 Nov 2019 • Sami Khairy, Ruslan Shaydulin, Lukasz Cincio, Yuri Alexeev, Prasanna Balaprakash
Proposed recently, the Quantum Approximate Optimization Algorithm (QAOA) is considered one of the leading candidates for demonstrating quantum advantage in the near term.
no code implementations • 11 Nov 2019 • Sami Khairy, Ruslan Shaydulin, Lukasz Cincio, Yuri Alexeev, Prasanna Balaprakash
The Quantum Approximate Optimization Algorithm (QAOA) is arguably one of the leading quantum algorithms that can outperform classical state-of-the-art methods in the near term.
no code implementations • 10 Nov 2019 • Sandeep Madireddy, Nesar Ramachandra, Nan Li, James Butler, Prasanna Balaprakash, Salman Habib, Katrin Heitmann, The LSST Dark Energy Science Collaboration
Upcoming large astronomical surveys are expected to capture an unprecedented number of strong gravitational lensing systems.
no code implementations • 10 Nov 2019 • Peihong Jiang, Hieu Doan, Sandeep Madireddy, Rajeev Surendran Assary, Prasanna Balaprakash
Computer-assisted synthesis planning aims to help chemists find better reaction pathways faster.
2 code implementations • 24 Sep 2019 • Tanwi Mallick, Prasanna Balaprakash, Eric Rask, Jane Macfarlane
We demonstrate the efficacy of the graph-partitioning-based DCRNN approach to model the traffic on a large California highway network with 11,160 sensor locations.
no code implementations • 22 Sep 2019 • Shashi M. Aithal, Prasanna Balaprakash
In order to reduce the number of dynamometer tests required for calibrating modern engines, a large-scale simulation-driven machine learning approach is presented in this work.
no code implementations • 18 Sep 2019 • Romit Maulik, Rajeev Surendran Assary, Prasanna Balaprakash
These oxygenated molecules have adequate carbon, hydrogen, and oxygen atoms that can be used for developing new value-added molecules (chemicals or transportation fuels).
no code implementations • 18 Sep 2019 • Romit Maulik, Vishwas Rao, Sandeep Madireddy, Bethany Lusch, Prasanna Balaprakash
Rapid simulations of advection-dominated problems are vital for multiple engineering and geophysical applications.
no code implementations • 1 Sep 2019 • Prasanna Balaprakash, Romain Egele, Misha Salim, Stefan Wild, Venkatram Vishwanath, Fangfang Xia, Tom Brettin, Rick Stevens
Cancer is a complex disease, the understanding and treatment of which are being aided through increases in the volume of collected data and in the scale of deployed computing power.
no code implementations • 4 Jun 2019 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
Our results show that optimal learning rules can be dataset-dependent even within similar tasks.
no code implementations • 29 Apr 2019 • Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia
We use these results to demonstrate the feasibility of conducting the inference phase with permanent dropout on spiking neural networks, mitigating the technique's computational and energy burden, which is essential for its use at scale or on edge platforms.
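A minimal sketch of inference with permanent (Monte Carlo) dropout in PyTorch: dropout stays active at test time, and repeated stochastic passes yield a predictive mean and spread. The paper studies spiking networks; this dense-network placeholder only illustrates the inference pattern:

```python
# Keep dropout enabled during inference and aggregate many stochastic
# forward passes into a prediction with an uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))
model.train()                                      # keep dropout on at test time
x = torch.randn(8, 16)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean(dim=0), samples.std(dim=0)  # prediction and uncertainty
```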