Search Results for author: Yeonjong Shin

Found 12 papers, 2 papers with code

tLaSDI: Thermodynamics-informed latent space dynamics identification

no code implementations • 9 Mar 2024 • Jun Sur Richard Park, Siu Wun Cheung, Youngsoo Choi, Yeonjong Shin

We propose a latent space dynamics identification method, namely tLaSDI, that embeds the first and second principles of thermodynamics.

Dimensionality Reduction

Randomized Forward Mode of Automatic Differentiation For Optimization Algorithms

no code implementations • 22 Oct 2023 • Khemraj Shukla, Yeonjong Shin

The probability distribution of the random vector determines the statistical properties of RFG.
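
As a rough illustration of the randomized forward gradient (RFG) idea, the sketch below uses JAX's forward-mode jvp to form the estimator g = (∇f(x)·v) v with v drawn from a standard normal distribution, which is unbiased because E[vvᵀ] = I; the toy loss, step size, and sampling choice are illustrative assumptions, not the paper's exact setup.

```python
import jax
import jax.numpy as jnp

def loss(x):
    # Toy quadratic objective standing in for a generic smooth loss.
    return jnp.sum((x - 1.0) ** 2)

def rfg_step(x, key, lr=0.1):
    # The distribution of v controls the statistical properties of the
    # estimator; a standard normal gives E[v v^T] = I, hence unbiasedness.
    v = jax.random.normal(key, x.shape)
    # One forward-mode AD pass yields the directional derivative grad(f)(x) . v.
    _, dir_deriv = jax.jvp(loss, (x,), (v,))
    g = dir_deriv * v                 # randomized forward gradient estimate
    return x - lr * g

key = jax.random.PRNGKey(0)
x = jnp.zeros(5)
for _ in range(300):
    key, sub = jax.random.split(key)
    x = rfg_step(x, sub)
print(x)   # should drift toward the minimizer at 1.0
```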

On the training and generalization of deep operator networks

no code implementations • 2 Sep 2023 • SangHyun Lee, Yeonjong Shin

To tackle such a challenge, we propose a two-step training method that trains the trunk network first and then sequentially trains the branch network.
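
A rough sketch of the two-step idea on a toy problem: step one learns the trunk network jointly with a matrix of per-sample coefficients, and step two trains the branch network to reproduce those coefficients. The data, architectures, and plain gradient-descent loops below are illustrative assumptions; the paper's actual procedure (including any intermediate reparameterization between the two steps) may differ.

```python
import jax
import jax.numpy as jnp

def mlp_init(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Synthetic toy data: input functions F sampled at sensor points, target
# outputs U sampled at query points X (purely illustrative).
n_samples, n_sensors, n_query, p = 64, 20, 30, 10
F = jax.random.normal(jax.random.PRNGKey(0), (n_samples, n_sensors))
X = jnp.linspace(0.0, 1.0, n_query)[:, None]
U = jnp.sin(X.T * F.sum(axis=1, keepdims=True))

# Step 1: train the trunk network jointly with a per-sample coefficient
# matrix A, so that U is approximated by A @ trunk(X)^T.
trunk = mlp_init(jax.random.PRNGKey(1), [1, 40, 40, p])
A = jnp.zeros((n_samples, p))

def step1_loss(params):
    trunk_p, A = params
    T = mlp(trunk_p, X)                     # (n_query, p) basis evaluations
    return jnp.mean((A @ T.T - U) ** 2)

params1 = (trunk, A)
grad1 = jax.jit(jax.grad(step1_loss))
for _ in range(2000):
    params1 = jax.tree_util.tree_map(lambda q, g: q - 1e-2 * g, params1, grad1(params1))
trunk, A = params1

# Step 2: train the branch network to reproduce the learned coefficients A
# from the corresponding input functions F.
branch = mlp_init(jax.random.PRNGKey(2), [n_sensors, 40, 40, p])

def step2_loss(branch_p):
    return jnp.mean((mlp(branch_p, F) - A) ** 2)

grad2 = jax.jit(jax.grad(step2_loss))
for _ in range(2000):
    branch = jax.tree_util.tree_map(lambda q, g: q - 1e-2 * g, branch, grad2(branch))

# Assembled DeepONet prediction: u(x; f) ≈ branch(f) · trunk(x).
pred = mlp(branch, F) @ mlp(trunk, X).T
print(jnp.mean((pred - U) ** 2))
```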

GFINNs: GENERIC Formalism Informed Neural Networks for Deterministic and Stochastic Dynamical Systems

1 code implementation • 31 Aug 2021 • Zhen Zhang, Yeonjong Shin, George Em Karniadakis

We propose the GENERIC formalism informed neural networks (GFINNs) that obey the symmetric degeneracy conditions of the GENERIC formalism.
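
In the GENERIC formalism the dynamics take the form dz/dt = L(z)∇E(z) + M(z)∇S(z), with L skew-symmetric, M symmetric positive semi-definite, and the degeneracy conditions L∇S = 0 and M∇E = 0, which together give dE/dt = 0 (first law) and dS/dt ≥ 0 (second law). The snippet below is only a numerical check of that structure using a simple projection-based construction of L and M; it is not GFINNs' actual neural-network parameterization.

```python
import jax
import jax.numpy as jnp

# Hypothetical energy E and entropy S on a small state vector z.
def E(z): return 0.5 * jnp.sum(z ** 2)
def S(z): return jnp.sum(jnp.log(1.0 + z ** 2))

def generic_rhs(z, key):
    gE, gS = jax.grad(E)(z), jax.grad(S)(z)
    k1, k2 = jax.random.split(key)
    A = jax.random.normal(k1, (z.size, z.size))
    B = jax.random.normal(k2, (z.size, z.size))
    L = A - A.T                        # skew-symmetric part
    M = B @ B.T                        # symmetric positive semi-definite part
    # Enforce the degeneracy conditions L gS = 0 and M gE = 0 by projecting
    # out the corresponding directions (an illustrative construction only).
    P_S = jnp.eye(z.size) - jnp.outer(gS, gS) / (gS @ gS)
    P_E = jnp.eye(z.size) - jnp.outer(gE, gE) / (gE @ gE)
    L = P_S @ L @ P_S
    M = P_E @ M @ P_E
    return L @ gE + M @ gS, gE, gS

z = jnp.array([0.3, -1.2, 0.7])
dz, gE, gS = generic_rhs(z, jax.random.PRNGKey(0))
print(gE @ dz)   # dE/dt: numerically ~0 (first law)
print(gS @ dz)   # dS/dt: non-negative (second law)
```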

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions

2 code implementations • 20 May 2021 • Ameya D. Jagtap, Yeonjong Shin, Kenji Kawaguchi, George Em Karniadakis

We propose a new type of neural networks, Kronecker neural networks (KNNs), that form a general framework for neural networks with adaptive activation functions.
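
One concrete instance of an adaptive activation that fits this kind of framework is a base nonlinearity augmented with trainable amplitudes and frequencies, as sketched below; the specific functional form and parameters here are illustrative assumptions rather than the Kronecker construction itself.

```python
import jax
import jax.numpy as jnp

def adaptive_tanh(x, alphas, omegas):
    # Base activation plus trainable oscillatory terms; alphas and omegas are
    # learned together with the weights (an illustrative adaptive form, not
    # the paper's Kronecker construction itself).
    extra = sum(a * jnp.sin(w * x) for a, w in zip(alphas, omegas))
    return jnp.tanh(x) + extra

# Example: one hidden layer whose activation carries K = 3 trainable terms.
key = jax.random.PRNGKey(0)
W = jax.random.normal(key, (2, 16)) * 0.5
b = jnp.zeros(16)
alphas = jnp.full(3, 0.1)               # trainable amplitudes
omegas = jnp.array([1.0, 2.0, 3.0])     # trainable frequencies
x = jnp.ones((4, 2))
h = adaptive_tanh(x @ W + b, alphas, omegas)
print(h.shape)   # (4, 16)
```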

A Caputo fractional derivative-based algorithm for optimization

no code implementations • 6 Apr 2021 • Yeonjong Shin, Jérôme Darbon, George Em Karniadakis

We propose three versions: non-adaptive, adaptive terminal, and adaptive order.
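
For reference, the Caputo fractional derivative of order $\alpha \in (0,1)$ with terminal (lower integration limit) $a$ is $^{C}D_{a}^{\alpha}f(t) = \frac{1}{\Gamma(1-\alpha)}\int_{a}^{t}\frac{f'(s)}{(t-s)^{\alpha}}\,ds$; the "adaptive terminal" and "adaptive order" variants presumably adapt $a$ and $\alpha$, respectively, during optimization.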

Plateau Phenomenon in Gradient Descent Training of ReLU networks: Explanation, Quantification and Avoidance

no code implementations • 14 Jul 2020 • Mark Ainsworth, Yeonjong Shin

No assumptions are made on the number of neurons relative to the number of training data, and our results hold for both the lazy and adaptive regimes.

On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs

no code implementations • 3 Apr 2020 • Yeonjong Shin, Jérôme Darbon, George Em Karniadakis

By adapting the Schauder approach and the maximum principle, we show that the sequence of minimizers strongly converges to the PDE solution in $C^0$.
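
For context, a minimal PINN loss of the kind whose minimizers such analyses consider, here for the 1D Poisson problem $-u''(x) = f(x)$ with zero Dirichlet data, can be sketched as follows; the network, collocation points, and unweighted sum of residual and boundary terms are illustrative assumptions (the paper's analyzed loss may include additional regularization).

```python
import jax
import jax.numpy as jnp

# PINN loss sketch for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0;
# f is manufactured so the exact solution is u(x) = sin(pi x).
def init_mlp(key, sizes=(1, 32, 32, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def u(params, x):                      # scalar input, scalar output
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def pinn_loss(params, x_int, x_bdy):
    u_xx = jax.vmap(jax.grad(jax.grad(u, argnums=1), argnums=1), (None, 0))(params, x_int)
    f = jnp.pi ** 2 * jnp.sin(jnp.pi * x_int)      # manufactured source term
    residual = jnp.mean((-u_xx - f) ** 2)          # interior PDE residual
    boundary = jnp.mean(jax.vmap(u, (None, 0))(params, x_bdy) ** 2)  # Dirichlet term
    return residual + boundary

params = init_mlp(jax.random.PRNGKey(0))
x_int = jnp.linspace(0.05, 0.95, 64)
x_bdy = jnp.array([0.0, 1.0])
print(pinn_loss(params, x_int, x_bdy))   # loss value at initialization
```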

Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks

no code implementations • 14 Oct 2019 • Yeonjong Shin

We show that when the orthogonal-like initialization is employed, the width of intermediate layers plays no role in gradient-based training, as long as the width is greater than or equal to both the input and output dimensions.

Trainability of ReLU networks and Data-dependent Initialization

no code implementations • 23 Jul 2019 • Yeonjong Shin, George Em Karniadakis

To quantify trainability, we study the probability distribution of the number of active neurons at initialization.
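
A quick Monte Carlo look at this quantity, under the illustrative assumptions that weights are He-scaled normal with zero biases and that a hidden ReLU unit counts as active if it fires on at least one input; the paper's precise definitions and initialization schemes may differ.

```python
import jax
import jax.numpy as jnp

def active_neurons(key, width=10, depth=5, n_data=100):
    # Count hidden ReLU units (over all layers) that fire for at least one
    # input -- one plausible reading of "active"; the paper's precise
    # definition and initialization schemes may differ.
    keys = jax.random.split(key, depth + 1)
    h = jax.random.normal(keys[0], (n_data, width))
    active = 0
    for k in keys[1:]:
        W = jax.random.normal(k, (width, width)) * jnp.sqrt(2.0 / width)  # He-scaled, zero bias
        h = jax.nn.relu(h @ W)
        active += int(jnp.sum(jnp.any(h > 0, axis=0)))
    return active

counts = jnp.array([active_neurons(k)
                    for k in jax.random.split(jax.random.PRNGKey(0), 200)])
print(counts.mean(), counts.std())   # empirical distribution over 200 random initializations
```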

Dying ReLU and Initialization: Theory and Numerical Examples

no code implementations • 15 Mar 2019 • Lu Lu, Yeonjong Shin, Yanhui Su, George Em Karniadakis

Numerical examples are provided to demonstrate the effectiveness of the new initialization procedure.
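
The dying ReLU phenomenon itself is easy to observe empirically: for narrow networks, the probability of being "born dead" (constant output at initialization) grows with depth. The sketch below uses He-scaled normal weights and zero biases as illustrative assumptions; it demonstrates the problem rather than the paper's proposed initialization procedure.

```python
import jax
import jax.numpy as jnp

def born_dead(key, depth, width=2, n_in=1, n_probe=500):
    # A network counts as "born dead" if, at some layer, every ReLU unit is
    # inactive for all probe inputs, so the output is constant on the probed
    # region (an empirical proxy, not the paper's formal definition).
    keys = jax.random.split(key, depth + 1)
    h = jax.random.uniform(keys[0], (n_probe, n_in), minval=-1.0, maxval=1.0)
    fan_in = n_in
    for k in keys[1:]:
        W = jax.random.normal(k, (fan_in, width)) * jnp.sqrt(2.0 / fan_in)  # He-scaled, zero bias
        h = jax.nn.relu(h @ W)
        if bool(jnp.all(h <= 0)):
            return True
        fan_in = width
    return False

for depth in (3, 10, 30):
    trials = jax.random.split(jax.random.PRNGKey(depth), 200)
    frac = sum(born_dead(k, depth) for k in trials) / 200
    print(depth, frac)   # the fraction of dead networks grows with depth
```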
