no code implementations • 12 Mar 2024 • Hyungi Lee, Giung Nam, Edwin Fong, Juho Lee
Nonparametric learning (NPL) is a recent approach that employs a nonparametric prior for posterior sampling and efficiently accounts for model misspecification, making it well suited to transfer learning settings that may involve distribution shift between upstream and downstream tasks.
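For intuition, here is a minimal sketch of the posterior-bootstrap idea behind NPL: draw nonparametric (Dirichlet) weights over the observations and maximize the weighted log-likelihood for each draw, so that the collection of fitted parameters approximates the NPL posterior. The Gaussian-mean model and all constants below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def npl_posterior_samples(x, n_samples=1000, seed=0):
    """Posterior-bootstrap sketch of NPL for the mean of a Gaussian model."""
    rng = np.random.default_rng(seed)
    n = len(x)
    samples = []
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(n))      # nonparametric (Dirichlet) weights over the data
        samples.append(np.sum(w * x))      # weighted MLE of a Gaussian mean = weighted average
    return np.array(samples)

x = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=50)
theta = npl_posterior_samples(x)
print(theta.mean(), theta.std())           # location and spread of the NPL posterior over the mean
```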
no code implementations • 11 Mar 2024 • JungWon Choi, Hyungi Lee, Byung-Hoon Kim, Juho Lee
Although generative self-supervised learning techniques, especially masked autoencoders, have shown promising results for representation learning across various domains, their application to dynamic graphs for dynamic functional connectivity remains underexplored and faces challenges in capturing high-level semantic representations.
1 code implementation • 20 Jun 2023 • Eunggu Yun, Hyungi Lee, Giung Nam, Juho Lee
While this provides a way to efficiently train ensembles, inference still requires multiple forward passes using all of the ensemble parameters, which often becomes a serious bottleneck for real-world deployment.
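The following sketch illustrates the inference-time cost referred to above: each prediction needs one forward pass per ensemble member before averaging, so cost grows linearly with ensemble size. The tiny MLP and ensemble size are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

def make_member():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))

ensemble = [make_member() for _ in range(4)]    # 4 independently parameterized members (untrained here)

@torch.no_grad()
def ensemble_predict(x):
    # One forward pass per member, then average the predicted class probabilities.
    probs = [member(x).softmax(dim=-1) for member in ensemble]
    return torch.stack(probs).mean(dim=0)

x = torch.randn(8, 16)                           # a batch of 8 inputs
print(ensemble_predict(x).shape)                 # torch.Size([8, 10]); 4x the cost of a single model
```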
no code implementations • 1 Jun 2023 • Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee
The key component of our method is what we call the equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to symmetries of that type.
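As a rough illustration of how such a regularizer can be computed (a generic form assumed here; the paper's exact regularizer may differ): sample group elements g and penalize the discrepancy between f(g·x) and g·f(x), here for planar rotations.

```python
import torch
import torch.nn as nn

# A toy model on 2D inputs and outputs; not equivariant by construction.
f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def rotation(theta):
    c, s = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def equivariance_regularizer(x, n_group_samples=8):
    # Average squared deviation between f(g . x) and g . f(x) over sampled rotations g.
    penalty = 0.0
    for _ in range(n_group_samples):
        g = rotation(torch.rand(()) * 2 * torch.pi)
        penalty = penalty + ((f(x @ g.T) - f(x) @ g.T) ** 2).mean()
    return penalty / n_group_samples

x = torch.randn(64, 2)
print(float(equivariance_regularizer(x)))   # near zero only if f is rotation-equivariant
# In training, this term would be added to the task loss with a regularization coefficient.
```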
no code implementations • 24 May 2023 • Moonseok Choi, Hyungi Lee, Giung Nam, Juho Lee
Given the ever-increasing size of modern neural networks, sparse architectures have gained importance due to their faster inference and minimal memory demands.
no code implementations • 19 Apr 2023 • Hyungi Lee, Eunggu Yun, Giung Nam, Edwin Fong, Juho Lee
Based on this result, instead of assuming any form for the latent variables, we equip an NP with a predictive distribution implicitly defined by neural networks and use the corresponding martingale posteriors as the source of uncertainty.
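For readers unfamiliar with martingale posteriors, the sketch below shows the basic predictive-resampling mechanism in its simplest form: starting from the observed data, repeatedly draw pseudo-observations from the current one-step predictive, and after a long horizon record the parameter implied by the completed sequence. The empirical predictive used here is a stand-in assumption; the paper uses a neural predictive instead.

```python
import numpy as np

def martingale_posterior_mean(x, horizon=2000, n_samples=500, seed=0):
    """Predictive-resampling sketch of a martingale posterior over the mean."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        seq = list(x)
        for _ in range(horizon):
            seq.append(seq[rng.integers(len(seq))])  # draw from the current (empirical) predictive
        samples.append(np.mean(seq))                 # parameter implied by the completed sequence
    return np.array(samples)

x = np.random.default_rng(1).normal(0.5, 1.0, size=40)
post = martingale_posterior_mean(x)
print(post.mean(), post.std())   # uncertainty arises from predictive resampling, not an explicit prior
```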
1 code implementation • 30 Jun 2022 • Giung Nam, Hyungi Lee, Byeongho Heo, Juho Lee
Ensembles of deep neural networks have demonstrated superior performance, but their heavy computational cost hinders their deployment in resource-limited environments.
1 code implementation • ICLR 2022 • Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee
We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes.
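A rough, assumption-laden illustration of the effect (not the paper's construction): in the infinite-width limit, the network output at a fixed input is Gaussian, and placing a prior on the last-layer scale replaces this with a scale mixture of Gaussians; an inverse-gamma scale, for example, yields heavy, Student-t-like tails rather than Gaussian ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Standard wide-width limit: the output at a fixed input is Gaussian.
gp_limit = rng.normal(size=n_draws)

# Same limit with a scale prior on the last layer: sigma^2 ~ InvGamma(3, 1),
# giving a Student-t-like scale mixture of Gaussians.
sigma2 = 1.0 / rng.gamma(shape=3.0, scale=1.0, size=n_draws)
scale_mixture_limit = rng.normal(size=n_draws) * np.sqrt(sigma2)

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0

print(excess_kurtosis(gp_limit))             # close to 0: Gaussian
print(excess_kurtosis(scale_mixture_limit))  # clearly positive: heavier tails than any GP
```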