no code implementations • 16 Mar 2024 • Christophe Bonneville, Xiaolong He, April Tran, Jun Sur Park, William Fries, Daniel A. Messenger, Siu Wun Cheung, Yeonjong Shin, David M. Bortz, Debojyoti Ghosh, Jiun-Shyan Chen, Jonathan Belof, Youngsoo Choi
Numerical solvers of partial differential equations (PDEs) have been widely employed for simulating physical systems.
no code implementations • 9 Mar 2024 • Jun Sur Richard Park, Siu Wun Cheung, Youngsoo Choi, Yeonjong Shin
We propose a latent space dynamics identification method, namely tLaSDI, that embeds the first and second principles of thermodynamics.
no code implementations • 22 Oct 2023 • Khemraj Shukla, Yeonjong Shin
The probability distribution of the random vector determines the statistical properties of RFG.
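Assuming RFG here denotes a randomized forward-mode gradient estimator of the form g = (∇f(x)·v) v for a random direction v, the estimator is unbiased whenever the direction distribution satisfies E[v vᵀ] = I. A minimal numpy sketch (the test function and the Gaussian choice of v are illustrative assumptions, not the paper's setup); the directional derivative is computed by a central difference in place of forward-mode autodiff:

```python
import numpy as np

def rfg(f, x, rng, n_samples=1, eps=1e-6):
    """Randomized forward gradient: average of (df/dv) * v over random
    Gaussian directions v.  E[v v^T] = I makes the estimate unbiased."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        v = rng.standard_normal(x.shape)
        # directional derivative f'(x; v) via central difference
        dd = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)
        g += dd * v
    return g / n_samples

f = lambda z: np.sum(z ** 2)          # true gradient is 2z
x = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(0)
est = rfg(f, x, rng, n_samples=50000)  # converges to 2x as samples grow
```

Averaging over many directions reduces the variance; a single sample already gives an unbiased (but noisy) descent direction.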
no code implementations • 2 Sep 2023 • SangHyun Lee, Yeonjong Shin
To tackle such a challenge, we propose a two-step training method that trains the trunk network first and then sequentially trains the branch network.
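The two-step ordering can be illustrated with a drastically simplified numpy sketch in which both networks are replaced by linear maps fitted by (alternating) least squares; the toy operator, sensor counts, and dimensions are all hypothetical. The point is only the training order: the trunk and the per-sample coefficients are fitted first, and the branch is fitted afterwards to reproduce those coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy operator: G(u)(y) = c * sin(y), with each input "function" u
# parameterized by a scalar c (hypothetical setup for illustration).
C = rng.uniform(-1, 1, size=20)              # 20 training samples
y = np.linspace(0, np.pi, 32)                # output query points
U = np.stack([c * np.ones(8) for c in C])    # branch inputs: u at 8 sensors
G = np.outer(C, np.sin(y))                   # targets, shape (20, 32)

# Step 1: fit the trunk t(y) -> R^p jointly with per-sample coefficients
# A (20 x p), here by alternating least squares on G ~ A @ T.
p = 4
T = rng.standard_normal((p, 32))             # trunk outputs at query points
for _ in range(50):
    A = G @ np.linalg.pinv(T)                # best coefficients given trunk
    T = np.linalg.pinv(A) @ G                # best trunk given coefficients

# Step 2: fit the branch b(u) -> R^p to reproduce A from the sensor data
# (a linear branch fitted by least squares stands in for the network).
W = np.linalg.lstsq(U, A, rcond=None)[0]
pred = (U @ W) @ T                           # full two-step surrogate
```

In the actual method the trunk and branch are neural networks trained by gradient descent; the sketch only mirrors the decomposition of the optimization into two sequential subproblems.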
1 code implementation • 31 Aug 2021 • Zhen Zhang, Yeonjong Shin, George Em Karniadakis
We propose the GENERIC formalism informed neural networks (GFINNs) that obey the symmetric degeneracy conditions of the GENERIC formalism.
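In the GENERIC formalism the dynamics take the form dz/dt = L∇E + M∇S with L skew-symmetric and M symmetric positive semi-definite, and the degeneracy conditions L∇S = 0 and M∇E = 0 yield exact energy conservation and non-negative entropy production. A numpy sketch of one (hypothetical, projection-based) way to enforce the conditions, not the GFINN parameterization itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
dE = rng.standard_normal(n)   # gradient of energy at a state (illustrative)
dS = rng.standard_normal(n)   # gradient of entropy at a state (illustrative)

def proj_perp(v):
    """Projector onto the orthogonal complement of span{v}."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - np.outer(v, v)

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
P, Q = proj_perp(dS), proj_perp(dE)
L = P @ (A - A.T) @ P          # skew-symmetric, and L @ dS = 0
M = Q @ (B @ B.T) @ Q          # symmetric PSD, and M @ dE = 0
rhs = L @ dE + M @ dS          # GENERIC right-hand side dz/dt
```

With these constructions, dE·(dz/dt) = 0 (energy conserved) and dS·(dz/dt) = dSᵀM dS ≥ 0 (entropy non-decreasing) hold by construction.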
2 code implementations • 20 May 2021 • Ameya D. Jagtap, Yeonjong Shin, Kenji Kawaguchi, George Em Karniadakis
We propose a new type of neural networks, Kronecker neural networks (KNNs), that form a general framework for neural networks with adaptive activation functions.
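One common form of adaptive activation, assumed here purely for illustration (KNNs themselves use a Kronecker-structured weight parameterization), augments a fixed base activation with trainable sinusoidal terms whose amplitudes and frequencies are learned along with the weights:

```python
import numpy as np

def adaptive_activation(x, alphas, freqs):
    """Illustrative adaptive activation:
    phi(x) = alphas[0] * tanh(x) + sum_k alphas[k+1] * sin(freqs[k] * x).
    In training, alphas and freqs would be updated by gradient descent
    together with the network weights."""
    out = alphas[0] * np.tanh(x)
    for a, w in zip(alphas[1:], freqs):
        out += a * np.sin(w * x)
    return out

x = np.linspace(-1, 1, 5)
y = adaptive_activation(x, alphas=[1.0, 0.1], freqs=[2.0])
```

The trainable sinusoidal terms add high-frequency components to the activation, which is the usual motivation for this family of adaptive activations.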
no code implementations • 6 Apr 2021 • Yeonjong Shin, Jérôme Darbon, George Em Karniadakis
We propose three versions -- non-adaptive, adaptive terminal and adaptive order.
no code implementations • 14 Jul 2020 • Mark Ainsworth, Yeonjong Shin
No assumptions are made on the number of neurons relative to the number of training data, and our results hold for both the lazy and adaptive regimes.
no code implementations • 3 Apr 2020 • Yeonjong Shin, Jérôme Darbon, George Em Karniadakis
By adapting the Schauder approach and the maximum principle, we show that the sequence of minimizers strongly converges to the PDE solution in $C^0$.
no code implementations • 14 Oct 2019 • Yeonjong Shin
We show that when the orthogonal-like initialization is employed, the width of intermediate layers plays no role in gradient-based training, as long as the width is greater than or equal to both the input and output dimensions.
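As a simplified illustration of why hidden width can drop out of the picture (linear layers only, no activation, all choices assumed for the sketch): an "orthogonal-like" initialization can be realized with semi-orthogonal matrices from a QR factorization, and any hidden width at least max(input, output) dimension then gives a norm-preserving map, regardless of how wide the intermediate layers are:

```python
import numpy as np

def semi_orthogonal(m, n, rng):
    """Draw an m x n matrix with orthonormal columns (requires m >= n)
    via QR of a Gaussian matrix -- one common 'orthogonal-like' init."""
    q, _ = np.linalg.qr(rng.standard_normal((m, n)))
    return q  # q.T @ q = I_n

rng = np.random.default_rng(0)
d_in, widths = 3, [8, 16]
# hidden widths >= d_in: each layer embeds its input isometrically
W1 = semi_orthogonal(widths[0], d_in, rng)
W2 = semi_orthogonal(widths[1], widths[0], rng)
x = rng.standard_normal(d_in)
h = W2 @ (W1 @ x)   # same norm as x for any admissible widths
```

The isometry holds for widths 8 and 16 exactly as it would for any other widths above the input/output dimensions, which is the flavor of the width-independence claim.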
no code implementations • 23 Jul 2019 • Yeonjong Shin, George Em Karniadakis
In order to quantify the trainability, we study the probability distribution of the number of active neurons at initialization.
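For a single layer of ReLU neurons with symmetric (e.g. Gaussian) weights and zero bias, a setup assumed here for illustration, each neuron is active with probability 1/2 at initialization by sign symmetry, so the active count follows Binomial(width, 1/2). A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, width, d_in = 2000, 64, 10
x = rng.standard_normal(d_in)   # one fixed input

counts = []
for _ in range(n_trials):
    W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)  # fresh init
    pre = W @ x                  # zero bias: pre-activations
    counts.append(np.sum(pre > 0))   # active ReLU neurons for this init

mean_active = np.mean(counts)   # concentrates near width / 2
```

Biases and depth change this distribution, which is what makes the question of trainability at initialization nontrivial for deep networks.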
no code implementations • 15 Mar 2019 • Lu Lu, Yeonjong Shin, Yanhui Su, George Em Karniadakis
Numerical examples are provided to demonstrate the effectiveness of the new initialization procedure.