no code implementations • 17 Jan 2024 • Euna Jung, Jaekeol Choi, Eunggu Yun, Wonjong Rhee
Specifically, we consider the On-Off pattern and PathCount to investigate how information is stored in deep representations.
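As a rough illustration of the kind of quantity involved, the sketch below records a binary on/off activation pattern for a small ReLU network. The two-layer architecture and the helper name `on_off_pattern` are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative two-layer ReLU network (architecture is assumed, not from the paper).
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def on_off_pattern(model, x):
    """Record which ReLU units fire (1) or stay off (0) for input x."""
    patterns = []
    h = x
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            patterns.append((h > 0).int())
    return patterns

x = torch.randn(1, 8)
print(on_off_pattern(net, x))  # one binary vector per ReLU layer
```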
no code implementations • 4 Dec 2023 • JungWon Choi, Seongho Keum, Eunggu Yun, Byung-Hoon Kim, Juho Lee
Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity due to the increasing availability of data and advances in model architectures, including Graph Neural Networks (GNNs).
no code implementations • 4 Dec 2023 • Byung-Hoon Kim, JungWon Choi, Eunggu Yun, Kyungsang Kim, Xiang Li, Juho Lee
Here, we propose a method for learning the representation of dynamic functional connectivity with Graph Transformers.
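A minimal sketch of the general idea: treat each ROI's row of a functional-connectivity matrix as a node token and feed the tokens to a standard Transformer encoder. The shapes, the row-wise token encoding, and the use of `nn.TransformerEncoder` are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

n_rois, d_model = 100, 64

# Treat each ROI as a token whose feature is its connectivity profile (assumed encoding).
fc = torch.randn(1, n_rois, n_rois)          # one FC matrix (batch, ROI, ROI)
embed = nn.Linear(n_rois, d_model)           # project connectivity rows to token features
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)

tokens = encoder(embed(fc))                  # (1, n_rois, d_model) node representations
graph_repr = tokens.mean(dim=1)              # simple mean pooling to a graph embedding
print(graph_repr.shape)                      # torch.Size([1, 64])
```

For dynamic connectivity, the same encoding could be applied per sliding window, yielding one graph embedding per time step.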
1 code implementation • 13 Aug 2023 • SeungHyun Kim, Hyunsu Kim, Eunggu Yun, Hwangrae Lee, Jaehun Lee, Juho Lee
In this paper, we propose a novel probabilistic framework for classifying multivariate time series with missing values.
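For context, a common generic baseline for this setting (not the paper's framework) is to zero-impute and concatenate the observation mask, so the classifier can distinguish an observed zero from a missing entry:

```python
import torch
import torch.nn as nn

# Generic mask-concatenation baseline for time series with missing values.
B, T, D, n_classes = 4, 50, 12, 3
x = torch.randn(B, T, D)
mask = (torch.rand(B, T, D) > 0.3).float()   # 1 = observed, 0 = missing
x_in = torch.cat([x * mask, mask], dim=-1)   # (B, T, 2D): values plus missingness

gru = nn.GRU(2 * D, 64, batch_first=True)
head = nn.Linear(64, n_classes)
_, h = gru(x_in)                             # h: (1, B, 64) last hidden state
logits = head(h[-1])                         # class logits, (B, n_classes)
print(logits.shape)                          # torch.Size([4, 3])
```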
1 code implementation • 20 Jun 2023 • Eunggu Yun, Hyungi Lee, Giung Nam, Juho Lee
While this provides a way to train ensembles efficiently, inference still requires multiple forward passes through all the ensemble members, which often becomes a serious bottleneck for real-world deployment.
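The bottleneck is the standard deep-ensemble prediction pattern sketched below (a generic illustration, not the paper's method): one forward pass per member, with cost growing linearly in the ensemble size.

```python
import torch
import torch.nn as nn

# Naive deep-ensemble inference: one forward pass per member, then average.
members = [nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
           for _ in range(4)]

@torch.no_grad()
def ensemble_predict(models, x):
    # Cost grows linearly with the number of members -- the deployment bottleneck.
    probs = [m(x).softmax(dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)

x = torch.randn(8, 10)
print(ensemble_predict(members, x).shape)    # torch.Size([8, 5])
```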
no code implementations • 19 Apr 2023 • Hyungi Lee, Eunggu Yun, Giung Nam, Edwin Fong, Juho Lee
Based on this result, instead of assuming any form of the latent variables, we equip an NP with a predictive distribution implicitly defined by neural networks and use the corresponding martingale posteriors as the source of uncertainty.
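To make the martingale-posterior idea concrete, here is a toy numeric sketch with a closed-form Gaussian predictive rather than a neural one (an assumption for illustration): each posterior sample is obtained by forward-simulating future observations from the one-step-ahead predictive and recording the limiting running mean.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=20)         # observed data; true mean is 1.0

def martingale_posterior_sample(y, n_forward=2000, sigma=1.0):
    """One posterior sample of the mean via predictive resampling:
    repeatedly draw the next observation from the predictive
    N(mean, sigma^2 * (1 + 1/n)) and update the running mean."""
    mean, n = np.mean(y), len(y)
    for _ in range(n_forward):
        y_next = rng.normal(mean, sigma * np.sqrt(1 + 1 / n))
        mean = (n * mean + y_next) / (n + 1)
        n += 1
    return mean                               # limiting mean = one posterior draw

samples = [martingale_posterior_sample(data) for _ in range(200)]
print(np.mean(samples), np.std(samples))     # approx. posterior mean and spread
```

In the Gaussian case this recovers the usual Bayesian posterior; the appeal of the martingale-posterior view is that the predictive, not a prior over latent variables, is the modeling primitive.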
1 code implementation • ICLR 2022 • Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee
We show that simply introducing a scale prior on the last-layer parameters can turn infinitely wide neural networks of any architecture into a richer class of stochastic processes.
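A toy illustration of the mechanism, using a wide random-feature network as a stand-in for the infinite-width limit (the architecture, prior parameters, and sampling scheme are assumptions for illustration): scaling the Gaussian last-layer weights by an inverse-gamma variance turns approximately Gaussian outputs into heavier-tailed, Student-t-like ones.

```python
import numpy as np

rng = np.random.default_rng(0)
width, n_inputs = 5000, 3
x = rng.normal(size=(n_inputs, 10))

# Wide random-feature "network": fixed random first layer + ReLU features.
W = rng.normal(size=(10, width)) / np.sqrt(10)
phi = np.maximum(x @ W, 0.0)

def sample_output(scale_prior=True):
    # Scale prior on the last layer: w = sigma * z with sigma^2 ~ Inv-Gamma.
    # With sigma = 1 the wide-network output is approximately Gaussian;
    # with the scale prior, marginals become Student-t-like.
    sigma2 = 1.0 / rng.gamma(2.0, 1.0) if scale_prior else 1.0
    v = np.sqrt(sigma2) * rng.normal(size=width)
    return phi @ v / np.sqrt(width)

draws = np.array([sample_output() for _ in range(1000)])
print(draws.std(axis=0))   # heavier-tailed spread than the sigma=1 Gaussian limit
```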