1 code implementation • 14 Mar 2023 • Hikaru Ibayashi, Taufeq Mohammed Razakh, Liqiu Yang, Thomas Linker, Marco Olguin, Shinnosuke Hattori, Ye Luo, Rajiv K. Kalia, Aiichiro Nakano, Ken-ichi Nomura, Priya Vashishta
Specifically, Allegro-Legato exhibits a much weaker dependence of time-to-failure on the problem size, $t_{\textrm{failure}} \propto N^{-0.14}$ ($N$ is the number of atoms), compared to the SOTA Allegro model $\left(t_{\textrm{failure}} \propto N^{-0.29}\right)$, i.e., systematically delayed time-to-failure, thus allowing much larger and longer NNQMD simulations without failure.
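The practical effect of the weaker exponent can be sketched with a short calculation (function name and reference size are illustrative, not from the paper): under a power law $t \propto N^{\alpha}$, growing the system shrinks time-to-failure far less when $\alpha = -0.14$ than when $\alpha = -0.29$.

```python
def relative_time_to_failure(n_atoms, exponent, n_ref=1.0):
    """Time-to-failure relative to a reference size, assuming t ∝ N**exponent."""
    return (n_atoms / n_ref) ** exponent

# Scaling the system up 1000x:
legato = relative_time_to_failure(1e3, -0.14)   # Allegro-Legato exponent
allegro = relative_time_to_failure(1e3, -0.29)  # baseline Allegro exponent
# legato ≈ 0.38 vs. allegro ≈ 0.13 of the reference time-to-failure,
# i.e., Allegro-Legato retains roughly 3x longer stable simulation time.
```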
1 code implementation • 7 Nov 2021 • Hikaru Ibayashi, Masaaki Imaizumi
The notion of "escape efficiency" has been an attractive way to tackle this question: it measures how efficiently SGD escapes from sharp minima with potentially low generalization performance.
1 code implementation • 23 Jun 2021 • Hikaru Ibayashi, Takuo Hamaguchi, Masaaki Imaizumi
Toward achieving robust and defensive neural networks, robustness against weight-parameter perturbations, i.e., sharpness, has attracted attention in recent years (Sun et al., 2020).