no code implementations • 12 Jun 2024 • Massimiliano Lupo Pasini, Jong Youl Choi, Kshitij Mehta, Pei Zhang, David Rogers, Jonghyun Bae, Khaled Z. Ibrahim, Ashwin M. Aji, Karl W. Schulz, Jorda Polo, Prasanna Balaprakash
The HydraGNN architecture enables the GFM to achieve near-linear strong scaling performance using more than 2,000 GPUs on Perlmutter and 16,000 GPUs on Frontier.
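As a rough illustration of the mechanism behind this kind of multi-GPU strong scaling, here is a minimal data-parallel training sketch using PyTorch DistributedDataParallel; the tiny linear model is a stand-in for a GNN, and nothing here is the actual HydraGNN code.

```python
# Minimal data-parallel training sketch (launch with torchrun, one
# process per GPU). The linear model is a stand-in for a GNN; this is
# NOT the actual HydraGNN code.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(128, 1).cuda(), device_ids=[local_rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(100):
        x = torch.randn(64, 128, device="cuda")   # stand-in batch
        y = torch.randn(64, 1, device="cuda")
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()   # gradients are all-reduced across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```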
no code implementations • 6 Oct 2023 • Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael Irvin, J. Gregory Pauloski, Logan Ward, Valerie Hayot, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian Foster, James J. Davis, Michael E. Papka, Thomas Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi Hanson, Thomas E Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin Aji, Angela Dalton, Michael Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens
In the upcoming decade, deep learning may revolutionize the natural sciences, enhancing our capacity to model and predict natural occurrences.
no code implementations • 10 Dec 2022 • Massimiliano Lupo Pasini, Luka Malenica, Kwitae Chong, Stuart Slattery
We also show that the surrogate DL model reduces the computational time needed for adaptive zoning by at least a factor of two relative to standard techniques, without compromising the accuracy with which the physical quantities of interest are reconstructed.
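The surrogate idea itself is simple to sketch: train an inexpensive network to approximate a costly routine, then query the network instead. In the toy below, `expensive_zoning` is a hypothetical placeholder, not the paper's actual adaptive-zoning computation.

```python
# Sketch of the surrogate idea: fit a cheap network to the output of
# an expensive routine, then replace the routine with the network.
import torch

def expensive_zoning(x):              # hypothetical stand-in for a costly step
    return torch.sin(3 * x) + 0.5 * x ** 2

x_train = torch.linspace(-2, 2, 512).unsqueeze(1)
y_train = expensive_zoning(x_train)

surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    loss = torch.nn.functional.mse_loss(surrogate(x_train), y_train)
    opt.zero_grad(); loss.backward(); opt.step()

# At run time, the trained surrogate replaces the expensive call.
y_fast = surrogate(torch.tensor([[0.3]]))
```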
no code implementations • 7 Oct 2022 • Davide Calabrò, Massimiliano Lupo Pasini, Nicola Ferro, Simona Perotto
The detection and localization of crop diseases are usually automated with supervised deep learning approaches.
no code implementations • 7 Oct 2022 • YuanYuan Zhao, Massimiliano Lupo Pasini
We propose a novel deep learning (DL) approach to solve one-dimensional non-linear elliptic, parabolic, and hyperbolic problems on graphs.
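As a rough sketch of what a DL PDE solver can look like, the physics-informed toy below fits a network to the 1D elliptic problem -u''(x) = f(x) with homogeneous Dirichlet conditions; the paper's graph-based method is not reproduced here.

```python
# Physics-informed sketch for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.
# f is chosen so the exact solution is u(x) = sin(pi * x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)      # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = (-d2u - f(x)).pow(2).mean()          # PDE residual
    boundary = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()
```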
no code implementations • 25 Jul 2022 • Massimiliano Lupo Pasini, Junqi Yin
We propose a stable, parallel approach to train Wasserstein Conditional Generative Adversarial Neural Networks (W-CGANs) under the constraint of a fixed computational budget.
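For context, the Wasserstein objective at the core of such models can be sketched as below, following the original WGAN recipe with weight clipping; the conditioning and the fixed-budget parallel scheme of the paper are omitted.

```python
# Toy Wasserstein GAN loop: several critic updates per generator
# update, with weight clipping as an approximate Lipschitz constraint.
import torch

G = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
D = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.randn(256, 2) * 0.5 + 2.0    # toy "real" distribution

for step in range(1000):
    for _ in range(5):                    # critic updates
        z = torch.randn(256, 16)
        loss_d = D(G(z).detach()).mean() - D(real).mean()
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in D.parameters():          # weight clipping
            p.data.clamp_(-0.01, 0.01)
    z = torch.randn(256, 16)              # generator update
    loss_g = -D(G(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```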
no code implementations • 22 Jul 2022 • Jong Youl Choi, Pei Zhang, Kshitij Mehta, Andrew Blanchard, Massimiliano Lupo Pasini
Graph convolutional neural networks (GCNNs) are a popular class of deep learning (DL) models in materials science for predicting material properties from graph representations of molecular structures.
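A minimal sketch of the basic GCNN building block, a Kipf and Welling-style graph convolution with a mean-pooled readout; shapes and the random adjacency below are illustrative only.

```python
# Symmetrically normalized graph convolution with a mean-pooled
# graph-level readout; the random graph stands in for a molecule.
import torch

def gcn_layer(x, adj, weight):
    """x: (N, F) node features; adj: (N, N) adjacency with self-loops."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return torch.relu(norm_adj @ x @ weight)

N, F, H = 10, 8, 16                       # atoms, input features, hidden size
x = torch.randn(N, F)                     # per-atom features
adj = (torch.rand(N, N) > 0.7).float()
adj = ((adj + adj.T) > 0).float()         # symmetrize
adj.fill_diagonal_(1.0)                   # add self-loops
w1 = torch.randn(F, H) * 0.1
w2 = torch.randn(H, H) * 0.1

h = gcn_layer(gcn_layer(x, adj, w1), adj, w2)
graph_property = h.mean(dim=0)            # pooled graph-level representation
```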
no code implementations • 1 Apr 2022 • Massimiliano Lupo Pasini, Simona Perotto
We propose a new approach to generate a reliable reduced model for a parametric elliptic problem in the presence of noisy data.
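One standard way to build a reduced model from (noisy) snapshot data is a truncated SVD, as in the proper orthogonal decomposition sketch below; the paper's specific treatment of noise is not reproduced here.

```python
# POD sketch: collect parametric snapshots, truncate the SVD to the
# modes that capture most of the energy, and project new solutions.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
mus = np.linspace(1, 5, 30)                               # parameter samples
snapshots = np.stack([np.sin(mu * np.pi * x) for mu in mus], axis=1)
snapshots += 0.01 * rng.standard_normal(snapshots.shape)  # noisy data

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]                                          # reduced basis
print(f"kept {r} modes out of {len(s)}")

# Project a new (noisy) solution onto the reduced space.
u_new = np.sin(2.7 * np.pi * x) + 0.01 * rng.standard_normal(x.size)
u_reduced = basis @ (basis.T @ u_new)
```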
1 code implementation • 4 Feb 2022 • Massimiliano Lupo Pasini, Pei Zhang, Samuel Temple Reeve, Jong Youl Choi
We train HydraGNN on an open-source ab initio density functional theory (DFT) dataset for iron-platinum (FePt) with a fixed body-centered tetragonal (BCT) lattice structure and fixed volume. The model simultaneously predicts the mixing enthalpy (a global feature of the system), the atomic charge transfer, and the atomic magnetic moment across configurations spanning the entire compositional range.
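The multi-headed idea can be sketched as a shared per-atom trunk feeding one head per target (message passing is omitted for brevity); layer sizes and names below are illustrative, not HydraGNN's actual architecture.

```python
# One shared trunk, one head per target: a global property read out
# after pooling, plus two per-atom properties. Illustrative only.
import torch

class MultiTaskNet(torch.nn.Module):
    def __init__(self, in_dim=8, hidden=32):
        super().__init__()
        self.trunk = torch.nn.Sequential(       # per-atom trunk
            torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
        )
        self.enthalpy_head = torch.nn.Linear(hidden, 1)  # global target
        self.charge_head = torch.nn.Linear(hidden, 1)    # per-atom target
        self.moment_head = torch.nn.Linear(hidden, 1)    # per-atom target

    def forward(self, node_feats):
        h = self.trunk(node_feats)               # (N_atoms, hidden)
        return (
            self.enthalpy_head(h.mean(dim=0)),   # pool for the global head
            self.charge_head(h),
            self.moment_head(h),
        )

model = MultiTaskNet()
enthalpy, charges, moments = model(torch.randn(64, 8))
```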
1 code implementation • 26 Oct 2021 • Massimiliano Lupo Pasini, Junqi Yin, Viktor Reshniak, Miroslav Stoyanov
Anderson acceleration (AA) is an extrapolation technique designed to speed up fixed-point iterations, such as those arising from the iterative training of DL models.
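For reference, here is a compact implementation of the standard AA(m) scheme for a fixed-point map x = g(x); the toy contraction g(x) = cos(x) stands in for a DL training map.

```python
# Anderson acceleration AA(m): mix the last m evaluations of g with
# weights that minimize the combined residual, subject to summing to 1.
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    xs, gs = [x0], [g(x0)]
    for _ in range(1, iters):
        mk = min(m, len(xs))
        # Residual matrix: columns are g(x_i) - x_i for recent iterates.
        R = np.stack([gs[-i] - xs[-i] for i in range(1, mk + 1)], axis=1)
        # min ||R a|| s.t. sum(a) = 1  =>  a proportional to (R^T R)^{-1} 1.
        ones = np.ones(mk)
        z = np.linalg.solve(R.T @ R + 1e-10 * np.eye(mk), ones)
        a = z / z.sum()
        x_new = sum(a[i - 1] * gs[-i] for i in range(1, mk + 1))
        if np.linalg.norm(g(x_new) - x_new) < tol:
            return x_new
        xs.append(x_new); gs.append(g(x_new))
        xs, gs = xs[-m:], gs[-m:]
    return xs[-1]

# Example: accelerate the contraction g(x) = cos(x) toward its fixed
# point (approximately 0.739085).
print(anderson(np.cos, np.array([1.0])))
```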
no code implementations • 21 Feb 2021 • Massimiliano Lupo Pasini, Vittorio Gabbi, Junqi Yin, Simona Perotto, Nouamane Laanait
We propose a distributed approach to train deep convolutional conditional generative adversarial networks (DC-CGANs).
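The conditioning mechanism that puts the "C" in DC-CGAN can be sketched as a label embedding concatenated with the noise vector; only this mechanism is shown below, not the paper's distributed training scheme.

```python
# Conditional generator sketch: the class label is embedded and
# concatenated with the noise vector before generation.
import torch

class CondGenerator(torch.nn.Module):
    def __init__(self, z_dim=64, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = torch.nn.Embedding(n_classes, 16)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim + 16, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, out_dim), torch.nn.Tanh(),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

gen = CondGenerator()
fake = gen(torch.randn(8, 64), torch.randint(0, 10, (8,)))  # (8, 784)
```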
no code implementations • 7 Sep 2019 • Massimiliano Lupo Pasini, Junqi Yin, Ying Wai Li, Markus Eisenbach
We propose a new scalable method to optimize the architecture of an artificial neural network.
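As a generic illustration of an architecture-search loop (not the paper's actual algorithm), the sketch below scores a handful of randomly sampled MLP shapes; each candidate's training is independent, which is what makes such searches easy to scale across workers.

```python
# Random-search sketch over MLP architectures: sample hidden-layer
# widths, train briefly, keep the candidate with the lowest loss.
import random
import torch

def build_mlp(widths, in_dim=4):
    layers, prev = [], in_dim
    for w in widths:
        layers += [torch.nn.Linear(prev, w), torch.nn.ReLU()]
        prev = w
    layers.append(torch.nn.Linear(prev, 1))
    return torch.nn.Sequential(*layers)

def score(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = torch.nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x = torch.randn(256, 4)
y = x.sum(dim=1, keepdim=True)           # toy regression target

candidates = [[random.choice([8, 16, 32]) for _ in range(random.randint(1, 3))]
              for _ in range(8)]
best = min(candidates, key=lambda w: score(build_mlp(w), x, y))
print("best hidden widths:", best)
```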