Search Results for author: Hyungi Lee

Found 8 papers, 3 papers with code

Enhancing Transfer Learning with Flexible Nonparametric Posterior Sampling

no code implementations · 12 Mar 2024 · Hyungi Lee, Giung Nam, Edwin Fong, Juho Lee

The nonparametric learning (NPL) method is a recent approach that employs a nonparametric prior for posterior sampling and efficiently accounts for model misspecification, making it well suited to transfer learning scenarios that may involve distribution shift between upstream and downstream tasks.

Transfer Learning
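
As a rough illustration, here is a minimal sketch of the Bayesian-bootstrap flavour of NPL (not the paper's exact transfer-learning construction): each posterior sample is the minimizer of a randomly reweighted empirical loss, with weights drawn from a flat Dirichlet. The weighted least-squares fitter is a hypothetical stand-in for any weighted-loss minimizer.

```python
import numpy as np

def npl_posterior_samples(X, y, fit_weighted, n_samples=100, seed=None):
    """NPL via the Bayesian bootstrap: each posterior sample minimizes
    a Dirichlet-reweighted empirical loss over the observed data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    return [fit_weighted(X, y, rng.dirichlet(np.ones(n)))
            for _ in range(n_samples)]

def fit_weighted_linreg(X, y, w):
    """Hypothetical stand-in: weighted least squares,
    theta = (X' W X)^{-1} X' W y with W = diag(w)."""
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)
```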

Joint-Embedding Masked Autoencoder for Self-supervised Learning of Dynamic Functional Connectivity from the Human Brain

no code implementations · 11 Mar 2024 · JungWon Choi, Hyungi Lee, Byung-Hoon Kim, Juho Lee

Although generative self-supervised learning techniques, especially masked autoencoders, have shown promising results in representation learning across various domains, their application to dynamic graphs for dynamic functional connectivity remains underexplored and faces challenges in capturing high-level semantic representations.

Representation Learning · Self-Supervised Learning

Traversing Between Modes in Function Space for Fast Ensembling

1 code implementation · 20 Jun 2023 · Eunggu Yun, Hyungi Lee, Giung Nam, Juho Lee

While this provides a way to train ensembles efficiently, inference still requires a forward pass through every ensemble member, which often becomes a serious bottleneck for real-world deployment.
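
For context, the sketch below shows the standard deep-ensemble inference the abstract refers to: one forward pass per ensemble member, with predictions averaged afterwards, so inference cost grows linearly in the number of members.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Standard deep-ensemble inference: one forward pass per member,
    then average the predictive probabilities. This per-member cost is
    the deployment bottleneck the paper aims to remove."""
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)  # (batch, num_classes)
```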

Regularizing Towards Soft Equivariance Under Mixed Symmetries

no code implementations · 1 Jun 2023 · Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee

The key component of our method is what we call an equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to symmetries of that type.

Motion Forecasting
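
A generic form of such a regularizer can be sketched as follows, under the assumption that each symmetry acts through known input and output transforms; the paper's exact formulation for mixed symmetries may differ.

```python
import torch

def equivariance_penalty(model, x, transforms):
    """Soft-equivariance regularizer (sketch): average squared gap between
    transform-then-predict and predict-then-transform. `transforms` is a
    list of (g_in, g_out) callables acting on inputs and outputs; the
    penalty vanishes exactly when the model is equivariant on this batch."""
    gaps = [((model(g_in(x)) - g_out(model(x))) ** 2).mean()
            for g_in, g_out in transforms]
    return torch.stack(gaps).mean()
```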

SWAMP: Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning

no code implementations · 24 May 2023 · Moonseok Choi, Hyungi Lee, Giung Nam, Juho Lee

Given the ever-increasing size of modern neural networks, sparse architectures have surged in significance thanks to their faster inference and minimal memory demands.
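
Based only on the title, one round of a SWAMP-style step might be sketched as follows (an assumption-laden sketch, not the paper's exact procedure): train several particles from a shared sparse initialization, average their weights, then magnitude-prune the average before the next round.

```python
import torch

def swamp_round(particles, train_fn, sparsity):
    """One hypothetical round of sparse weight averaging with multiple
    particles: train each particle, average, then prune the smallest-
    magnitude entries of the averaged weight vector.

    particles: list of 1-D weight tensors sharing a sparsity mask
    train_fn:  callable that trains a weight vector and returns it
    """
    trained = [train_fn(p.clone()) for p in particles]
    avg = torch.stack(trained).mean(dim=0)              # weight averaging
    k = max(1, int(avg.numel() * sparsity))             # entries to prune
    threshold = avg.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    mask = (avg.abs() > threshold).float()              # magnitude pruning
    return avg * mask, mask
```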

Martingale Posterior Neural Processes

no code implementations · 19 Apr 2023 · Hyungi Lee, Eunggu Yun, Giung Nam, Edwin Fong, Juho Lee

Based on this result, instead of assuming any particular form for the latent variables, we equip an NP with a predictive distribution implicitly defined by neural networks and use the corresponding martingale posteriors as the source of uncertainty.

Bayesian Inference · Gaussian Processes
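
A toy scalar version of the predictive-resampling idea behind martingale posteriors (here with a plug-in Gaussian predictive; the paper defines the predictive implicitly with neural networks) looks like:

```python
import numpy as np

def martingale_posterior_mean(y, n_future=500, n_samples=200, sigma=1.0, seed=None):
    """Predictive resampling for the mean: repeatedly extend the data by
    sampling from the current predictive N(mean, sigma^2) and updating the
    mean online; the spread of the final means across repetitions is the
    martingale-posterior uncertainty."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_samples):
        mean, n = float(np.mean(y)), len(y)
        for _ in range(n_future):
            y_new = rng.normal(mean, sigma)  # draw from current predictive
            n += 1
            mean += (y_new - mean) / n       # online mean update
        finals.append(mean)
    return np.array(finals)
```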

Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation

1 code implementation · 30 Jun 2022 · Giung Nam, Hyungi Lee, Byeongho Heo, Juho Lee

Ensembles of deep neural networks have demonstrated superior performance, but their heavy computational cost hinders their deployment in resource-limited environments.

Image Classification
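
For reference, plain ensemble distillation, the baseline this paper builds on, matches the student to the ensemble's averaged predictive distribution. A minimal sketch, without the paper's weight averaging or diversifying perturbations:

```python
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, teacher_logits_list, T=1.0):
    """KL between the student's tempered predictive and the averaged
    predictive of the teacher ensemble (temperature-scaled as usual)."""
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
        ).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T
```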

Scale Mixtures of Neural Network Gaussian Processes

1 code implementation · ICLR 2022 · Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee

We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes.

Gaussian Processes
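
The construction admits a short sketch: draw a scale from an inverse-gamma prior and rescale the base kernel, which after marginalizing the scale yields a Student-t process. An RBF kernel stands in for the NNGP kernel below.

```python
import numpy as np

def sample_scale_mixture_gp(X, n_draws=5, a=2.0, b=2.0, seed=None):
    """Scale mixture of a GP: sigma^2 ~ InvGamma(a, b), then
    f ~ GP(0, sigma^2 K); marginally f is a Student-t process."""
    rng = np.random.default_rng(seed)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq_dists) + 1e-6 * np.eye(len(X))  # RBF kernel + jitter
    draws = []
    for _ in range(n_draws):
        sigma2 = 1.0 / rng.gamma(a, 1.0 / b)  # inverse-gamma draw
        draws.append(rng.multivariate_normal(np.zeros(len(X)), sigma2 * K))
    return np.array(draws)
```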
