Search Results for author: Hikaru Ibayashi

Found 3 papers, 3 papers with code

Allegro-Legato: Scalable, Fast, and Robust Neural-Network Quantum Molecular Dynamics via Sharpness-Aware Minimization

1 code implementation • 14 Mar 2023 • Hikaru Ibayashi, Taufeq Mohammed Razakh, Liqiu Yang, Thomas Linker, Marco Olguin, Shinnosuke Hattori, Ye Luo, Rajiv K. Kalia, Aiichiro Nakano, Ken-ichi Nomura, Priya Vashishta

Specifically, Allegro-Legato exhibits much weaker dependence of time-to-failure on the problem size, $t_{\textrm{failure}} \propto N^{-0.14}$ ($N$ is the number of atoms), compared to the SOTA Allegro model $\left(t_{\textrm{failure}} \propto N^{-0.29}\right)$, i.e., systematically delayed time-to-failure, thus allowing much larger and longer NNQMD simulations without failure.
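To make the exponents concrete, here is a minimal back-of-the-envelope sketch (our own illustration, not code from the paper's release) of what the two scaling laws imply: with $t_{\textrm{failure}} \propto N^{\alpha}$, the ratio of Allegro-Legato's time-to-failure ($\alpha = -0.14$) to baseline Allegro's ($\alpha = -0.29$) grows as $N^{0.15}$.

```python
# Illustration only: the reported scaling laws up to an unknown prefactor t0.
def time_to_failure(n_atoms: float, alpha: float, t0: float = 1.0) -> float:
    """Time-to-failure t0 * N^alpha; t0 is not reported here."""
    return t0 * n_atoms ** alpha

for n in (1e3, 1e6, 1e9):
    # Prefactors cancel in the ratio, so t0 = 1.0 is harmless here.
    ratio = time_to_failure(n, -0.14) / time_to_failure(n, -0.29)
    print(f"N = {n:.0e}: Allegro-Legato runs ~{ratio:.0f}x longer before failure")
```

At $N = 10^9$ atoms this ratio is roughly $10^{1.35} \approx 22$, which is what "systematically delayed time-to-failure" buys at scale.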

Exponential escape efficiency of SGD from sharp minima in non-stationary regime

1 code implementation • 7 Nov 2021 • Hikaru Ibayashi, Masaaki Imaizumi

An "escape efficiency" has been an attractive notion to tackle this question, which measures how SGD efficiently escapes from sharp minima with potentially low generalization performance.

Open-Ended Question Answering
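To give a feel for the escape phenomenon the excerpt describes, below is a toy sketch (our own construction, not the paper's experiment). SGD is run in a quadratic basin $f(x) = h x^2 / 2$ of sharpness $h$, with gradient-noise standard deviation proportional to $\sqrt{h}$ (a common modeling assumption for mini-batch noise near a minimum), and we count steps until the loss rises a fixed barrier height above the minimum.

```python
import numpy as np

def escape_steps(h, lr=0.05, noise_scale=0.3, delta=0.01,
                 max_steps=100_000, seed=0):
    """Steps until SGD leaves the basin f(x) = h*x^2/2, i.e. f(x) > delta."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for step in range(1, max_steps + 1):
        # Noisy gradient: true gradient h*x plus sqrt(h)-scaled noise.
        grad = h * x + noise_scale * np.sqrt(h) * rng.standard_normal()
        x -= lr * grad
        if 0.5 * h * x * x > delta:  # loss exceeded the barrier: escaped
            return step
    return max_steps

for h in (1.0, 4.0, 16.0):
    print(f"sharpness h = {h:>4}: escaped after ~{escape_steps(h)} steps")
```

In this toy, sharper basins are escaped in fewer steps, which is the qualitative behavior the paper's "escape efficiency" quantifies.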

Minimum sharpness: Scale-invariant parameter-robustness of neural networks

1 code implementation • 23 Jun 2021 • Hikaru Ibayashi, Takuo Hamaguchi, Masaaki Imaizumi

Toward achieving robust and defensive neural networks, robustness against weight-parameter perturbations, i.e., sharpness, has attracted attention in recent years (Sun et al., 2020).
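As context for the sharpness notion the excerpt refers to, here is a small self-contained sketch (our own, on a hypothetical toy quadratic loss, not the paper's definition) of a naive sharpness estimate: the worst loss increase under random weight perturbations of a fixed radius $\rho$. Naive measures like this change under loss-preserving rescalings of the weights, which is the gap a scale-invariant measure like the paper's "minimum sharpness" targets.

```python
import numpy as np

def loss(w, H):
    """Toy quadratic loss L(w) = w^T H w / 2 with Hessian H."""
    return 0.5 * w @ H @ w

def naive_sharpness(w, H, rho=0.05, trials=1000, seed=0):
    """max over sampled ||eps|| = rho of L(w + eps) - L(w)."""
    rng = np.random.default_rng(seed)
    base, worst = loss(w, H), 0.0
    for _ in range(trials):
        eps = rng.standard_normal(w.shape)
        eps *= rho / np.linalg.norm(eps)  # project onto the rho-sphere
        worst = max(worst, loss(w + eps, H) - base)
    return worst

w0 = np.zeros(2)
print("flat :", naive_sharpness(w0, np.diag([1.0, 1.0])))    # small curvature
print("sharp:", naive_sharpness(w0, np.diag([100.0, 1.0])))  # one sharp direction
```

The worst-case increase approaches $\rho^2 \lambda_{\max}/2$, so the sharp minimum scores roughly 100x higher here.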

