no code implementations • 12 Jan 2025 • Sanghyun Hong, Fan Wu, Anthony Gruber, Kookjin Lee
By accurately learning the underlying dynamics of data in the form of differential equations, NODEs have been widely adopted in various domains, such as healthcare, finance, computer vision, and language modeling.
no code implementations • 6 Dec 2024 • Minji Kim, Tianshu Wen, Kookjin Lee, Youngsoo Choi
This study presents the conditional neural fields for reduced-order modeling (CNF-ROM) framework to approximate solutions of parametrized partial differential equations (PDEs).
no code implementations • 9 Oct 2024 • Mingu Kang, Dongseok Lee, Woojin Cho, Jaehyeon Park, Kookjin Lee, Anthony Gruber, Youngjoon Hong, Noseong Park
Large language models (LLMs), such as ChatGPT, have shown that, even when trained on noisy prior data, they can generalize effectively to new tasks through in-context learning (ICL) and pre-training techniques.
no code implementations • 5 Oct 2024 • Woojin Cho, Kookjin Lee, Noseong Park, Donsub Rim, Gerrit Welper
We introduce Sparse Physics Informed Backpropagation (SPInProp), a new class of methods for accelerating backpropagation for a specialized neural network architecture called Low Rank Neural Representation (LRNR).
no code implementations • 5 Sep 2024 • Sheng Cheng, Deqian Kong, Jianwen Xie, Kookjin Lee, Ying Nian Wu, Yezhou Yang
This family of models generates each data point in the time series by a neural emission model, which is a non-linear transformation of a latent state vector.
no code implementations • 18 Aug 2024 • Woojin Cho, Minju Jo, Haksoo Lim, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park
Complex physical systems are often described by partial differential equations (PDEs) that depend on parameters such as the Reynolds number in fluid mechanics.
no code implementations • 25 May 2024 • Anthony Gruber, Kookjin Lee, Haksoo Lim, Noseong Park, Nathaniel Trask
Metriplectic systems are learned from data in a way that scales quadratically in both the size of the state and the rank of the metriplectic data.
no code implementations • 20 Feb 2024 • Jinsung Jeon, Hyundong Jin, Jonghyun Choi, Sanghyun Hong, Dongeun Lee, Kookjin Lee, Noseong Park
Extensively evaluating these methods on seven image recognition benchmarks, we show that the proposed PAC-FNO improves the performance of existing baseline models on images of various resolutions by up to 77.1%, and on various types of natural variations in the images at inference.
1 code implementation • 19 Dec 2023 • Youn-Yeol Yu, Jeongwhan Choi, Woojin Cho, Kookjin Lee, Nayong Kim, Kiseok Chang, Chang-Seung Woo, Ilho Kim, Seok-Woo Lee, Joon-Young Yang, Sooyoung Yoon, Noseong Park
These methods are typically designed to i) reduce the computational cost of solving physical dynamics and/or ii) enhance solution accuracy in fluid and rigid-body dynamics.
Ranked #1 on Physical Simulations on Deformable Plate
no code implementations • 16 Dec 2023 • Woojin Cho, Seunghyeon Cho, Hyundong Jin, Jinsung Jeon, Kookjin Lee, Sanghyun Hong, Dongeun Lee, Jonghyun Choi, Noseong Park
Neural ordinary differential equations (NODEs), one of the most influential works in differential equation-based deep learning, continuously generalize residual networks and opened a new field.
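The connection the abstract refers to, that a residual block is one explicit Euler step of an ODE, can be sketched in a few lines. This is a generic illustration with an arbitrary vector field, not the paper's model; all names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # a stand-in for learned weights

def f(h, t):
    # A simple vector field standing in for a learned network layer.
    return np.tanh(W @ h)

def residual_block(h):
    # Classical residual connection: h <- h + f(h).
    return h + f(h, 0.0)

def node_forward(h, t0=0.0, t1=1.0, steps=100):
    # Euler integration of dh/dt = f(h, t); steps=1 recovers the residual block.
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

h0 = np.ones(4)
print(residual_block(h0))          # one discrete residual step
print(node_forward(h0, steps=1))   # identical: Euler with a single unit step
```

Taking `steps` large instead of 1 turns the discrete stack of layers into a continuous-depth model, which is the generalization NODEs make.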
1 code implementation • 7 Dec 2023 • Jeongwhan Choi, Hyowon Wi, Jayoung Kim, Yehjin Shin, Kookjin Lee, Nathaniel Trask, Noseong Park
We propose graph-filter-based self-attention (GFSA) to learn a general yet effective self-attention mechanism, whose complexity, however, is only slightly larger than that of the original self-attention.
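The idea of viewing the self-attention matrix as a graph filter can be sketched as follows: the row-stochastic attention matrix plays the role of a graph operator, and a polynomial in it gives a richer filter. The polynomial order and coefficients below are illustrative assumptions, not GFSA's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_matrix(X, Wq, Wk):
    # Standard scaled dot-product attention weights (a row-stochastic matrix).
    d = Wq.shape[1]
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    return softmax(scores, axis=-1)

def filtered_attention(A, w0=0.3, w1=0.5, w2=0.2):
    # Polynomial graph filter of the attention matrix: w0*I + w1*A + w2*A^2.
    n = A.shape[0]
    return w0 * np.eye(n) + w1 * A + w2 * (A @ A)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq = rng.standard_normal((8, 8))
Wk = rng.standard_normal((8, 8))
A = attention_matrix(X, Wq, Wk)
H = filtered_attention(A)
print(H.shape)  # (5, 5)
```

The extra cost over plain attention is one additional matrix product per power of `A`, consistent with the "slightly larger" complexity the abstract mentions.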
Ranked #1 on Speech Recognition on LibriSpeech 100h test-other
1 code implementation • NeurIPS 2023 • Anthony Gruber, Kookjin Lee, Nathaniel Trask
Recent works have shown that physics-inspired architectures allow the training of deep graph neural networks (GNNs) without oversmoothing.
2 code implementations • Knowledge and Information Systems 2023 • Hwangyong Choi, Jeongwhan Choi, Jeehyun Hwang, Kookjin Lee, Dongeun Lee, Noseong Park
Owing to the remarkable development of deep learning technology, there have been a series of efforts to build deep learning-based climate models.
no code implementations • 22 Nov 2022 • Jaehoon Lee, Chan Kim, Gyumin Lee, Haksoo Lim, Jeongwhan Choi, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park
Forecasting future outcomes from recent time series data is not easy, especially when the future data differ from the past (i.e., the time series are under temporal drift).
no code implementations • 10 Oct 2022 • Fan Wu, Sanghyun Hong, Donsub Rim, Noseong Park, Kookjin Lee
However, parameterization of dynamics using a neural network makes it difficult for humans to identify causal structures in the data.
no code implementations • 1 Oct 2022 • Kookjin Lee, Nathaniel Trask
In this study, we propose parameter-varying neural ordinary differential equations (NODEs) where the evolution of model parameters is represented by partition-of-unity networks (POUNets), a mixture of experts architecture.
1 code implementation • 13 Jul 2022 • Suneghyeon Cho, Sanghyun Hong, Kookjin Lee, Noseong Park
In this work, we propose adaptive momentum estimation neural ODEs (AdamNODEs) that adaptively control the acceleration of the classical momentum-based approach.
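The classical momentum form the abstract builds on can be sketched by augmenting the ODE state with a velocity variable, giving heavy-ball-style dynamics. The specific adaptive estimation used by AdamNODEs is not reproduced here; this is only an illustration of the momentum-based baseline, with an arbitrary vector field.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1

def f(h):
    return np.tanh(W @ h)  # stand-in for a learned vector field

def momentum_node(h, steps=100, dt=0.01, gamma=1.0):
    v = np.zeros_like(h)
    for _ in range(steps):
        # Coupled ODEs: dh/dt = v, dv/dt = f(h) - gamma * v,
        # integrated with explicit Euler; gamma damps the velocity.
        h = h + dt * v
        v = v + dt * (f(h) - gamma * v)
    return h

print(momentum_node(np.ones(4)))
```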
no code implementations • 7 Feb 2022 • Nathaniel Trask, Carianne Martinez, Kookjin Lee, Brad Boyce
We introduce physics-informed multimodal autoencoders (PIMA), a variational inference framework for discovering shared information in multimodal scientific datasets representative of high-throughput testing.
2 code implementations • 11 Nov 2021 • Jeehyun Hwang, Jeongwhan Choi, Hwangyong Choi, Kookjin Lee, Dongeun Lee, Noseong Park
On the other hand, neural ordinary differential equations (NODEs) learn a latent governing ODE from data.
no code implementations • 29 Sep 2021 • Jungeun Kim, Seunghyun Hwang, Jeehyun Hwang, Kookjin Lee, Dongeun Lee, Noseong Park
In other words, the knowledge contained by the learned governing equation can be injected into the neural network which approximates the PDE solution function.
no code implementations • 11 Sep 2021 • Kookjin Lee, Nathaniel Trask, Panos Stinis
Discovery of dynamical systems from data forms the foundation for data-driven modeling and recently, structure-preserving geometric perspectives have been shown to provide improved forecasting, stability, and physical realizability guarantees.
no code implementations • 7 Jul 2021 • Nat Trask, Mamikon Gulian, Andy Huang, Kookjin Lee
We enrich POU-Nets with a Gaussian noise model to obtain a probabilistic generalization amenable to gradient-based minimization of a maximum likelihood loss.
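A partition-of-unity approximant with a Gaussian noise model can be sketched as follows: softmax gates (which sum to one by construction) blend local experts, and the probabilistic variant minimizes a Gaussian negative log-likelihood. The gating form, the linear experts, and the homoscedastic noise below are assumptions for illustration, not the exact POU-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3  # number of partitions/experts

# Gating parameters (affine logits) and per-partition linear experts.
a = rng.standard_normal(K); b = rng.standard_normal(K)
c = rng.standard_normal(K); d = rng.standard_normal(K)

def pou_predict(x):
    logits = np.outer(x, a) + b               # (n, K)
    logits = logits - logits.max(axis=1, keepdims=True)
    phi = np.exp(logits)
    phi /= phi.sum(axis=1, keepdims=True)     # partition of unity: rows sum to 1
    experts = np.outer(x, c) + d              # local linear models, (n, K)
    return (phi * experts).sum(axis=1)

def gaussian_nll(y, y_hat, sigma=0.1):
    # Negative log-likelihood (up to a constant) under Gaussian noise;
    # this is the loss a gradient-based optimizer would minimize.
    return 0.5 * np.mean(((y - y_hat) / sigma) ** 2) + np.log(sigma)

x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x)
print(gaussian_nll(y, pou_predict(x)))
```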
no code implementations • NeurIPS 2021 • Kookjin Lee, Nathaniel A. Trask, Panos Stinis
Forecasting of time-series data requires imposition of inductive biases to obtain predictive extrapolation, and recent works have imposed Hamiltonian/Lagrangian form to preserve structure for systems with reversible dynamics.
no code implementations • 27 Jan 2021 • Kookjin Lee, Nathaniel A. Trask, Ravi G. Patel, Mamikon A. Gulian, Eric C. Cyr
Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials.
no code implementations • 1 Jan 2021 • Jungeun Kim, Seunghyun Hwang, Jihyun Hwang, Kookjin Lee, Dongeun Lee, Noseong Park
Neural ordinary differential equations (neural ODEs) introduced an approach that approximates a neural network as a system of ODEs by treating the layer index as a continuous variable and discretizing the hidden dimension.
1 code implementation • 4 Dec 2020 • Jungeun Kim, Kookjin Lee, Dongeun Lee, Sheo Yon Jin, Noseong Park
We present a method for learning dynamics of complex physical processes described by time-dependent nonlinear partial differential equations (PDEs).
no code implementations • 28 Oct 2020 • Kookjin Lee, Eric J. Parish
This work proposes an extension of neural ordinary differential equations (NODEs) by introducing an additional set of ODE input parameters to NODEs.
no code implementations • 21 Sep 2019 • Kookjin Lee, Kevin Carlberg
In contrast to existing methods for latent dynamics learning, this is the only method that both employs a nonlinear embedding and computes dynamics for the latent state that guarantee the satisfaction of prescribed physical properties.
Computational Physics
no code implementations • 11 Jun 2019 • Duanshun Li, Jing Liu, Noseong Park, Dongeun Lee, Giridhar Ramachandran, Ali Seyedmazloom, Kookjin Lee, Chen Feng, Vadim Sokolov, Rajesh Ganesan
0-1 knapsack is of fundamental importance in computer science, business, operations research, etc.
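For context, the classical dynamic-programming solution to 0-1 knapsack is shown below as the textbook baseline; it is not the approach proposed in the paper above.

```python
def knapsack(values, weights, capacity):
    # dp[c] = best total value achievable with remaining capacity c.
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

This runs in O(n * capacity) time, which is pseudo-polynomial; the problem itself is NP-hard, which is what motivates approximate and learned solvers.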
no code implementations • 26 Jul 2017 • Noseong Park, Ankesh Anand, Joel Ruben Antony Moniz, Kookjin Lee, Tanmoy Chakraborty, Jaegul Choo, Hongkyu Park, Young-Min Kim
MMGAN finds two manifolds representing the vector representations of real and fake images.