no code implementations • 1 Apr 2024 • Giung Nam, Byeongho Heo, Juho Lee
Large-scale contrastive vision-language pre-trained models provide zero-shot models that achieve competitive performance across a range of image classification tasks without requiring any training on downstream data.
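As an illustration of the zero-shot mechanism (not this paper's method), the sketch below classifies images by cosine similarity between image embeddings and the text embeddings of class prompts; the embeddings here are random stand-ins for the outputs of a CLIP-style model's image and text towers:

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_features, class_text_features):
    """Classify by cosine similarity between image embeddings and the text
    embeddings of class prompts such as "a photo of a {class}"."""
    img = F.normalize(image_features, dim=-1)       # (N, D) unit vectors
    txt = F.normalize(class_text_features, dim=-1)  # (C, D) unit vectors
    logits = img @ txt.t()                          # (N, C) similarities
    return logits.argmax(dim=-1)                    # predicted class ids

# Stand-in embeddings; real ones come from the pre-trained encoders.
preds = zero_shot_classify(torch.randn(8, 512), torch.randn(10, 512))
```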
no code implementations • 12 Mar 2024 • Hyungi Lee, Giung Nam, Edwin Fong, Juho Lee
Nonparametric learning (NPL) is a recent approach that employs a nonparametric prior for posterior sampling and efficiently accounts for model misspecification, making it well suited to transfer learning, where the distribution may shift between upstream and downstream tasks.
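For context, a minimal sketch of the basic NPL recipe in its Bayesian-bootstrap form (uniform Dirichlet weights over the observed data; the paper's transfer-learning variant is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def npl_samples(x, weighted_minimizer, n_samples=1000):
    """Draw Dirichlet weights over the observed data and re-minimize the
    weighted loss for each draw; the minimizers form approximate posterior
    samples without assuming the model is well specified."""
    n = len(x)
    return np.array([
        weighted_minimizer(x, rng.dirichlet(np.ones(n)))
        for _ in range(n_samples)
    ])

# Toy example: under squared loss the weighted minimizer is the weighted mean.
x = rng.normal(loc=2.0, scale=1.0, size=50)
theta = npl_samples(x, lambda x, w: np.sum(w * x))
print(theta.mean(), theta.std())  # posterior summary for the mean
```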
1 code implementation • 20 Jun 2023 • Eunggu Yun, Hyungi Lee, Giung Nam, Juho Lee
While this provides a way to train ensembles efficiently, inference still requires a separate forward pass through each ensemble member, which often becomes a serious bottleneck for real-world deployment.
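The bottleneck referred to here is the standard ensemble-inference pattern sketched below, whose cost grows linearly with the number of members; the models are stand-in classifiers:

```python
import torch

def ensemble_predict(members, x):
    """One forward pass per ensemble member, then average the predicted
    class probabilities; compute and memory grow linearly in len(members)."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=-1) for m in members])
    return probs.mean(dim=0)  # (batch, classes) averaged prediction

members = [torch.nn.Linear(16, 4) for _ in range(4)]  # stand-in classifiers
avg_probs = ensemble_predict(members, torch.randn(8, 16))
```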
no code implementations • 24 May 2023 • Moonseok Choi, Hyungi Lee, Giung Nam, Juho Lee
Given the ever-increasing size of modern neural networks, sparse architectures have surged in significance owing to their faster inference and low memory demands.
no code implementations • 19 Apr 2023 • Hyungi Lee, Eunggu Yun, Giung Nam, Edwin Fong, Juho Lee
Based on this result, instead of assuming any particular form for the latent variables, we equip an NP with a predictive distribution implicitly defined by neural networks and use the corresponding martingale posteriors as the source of uncertainty.
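A much-simplified sketch of the martingale-posterior idea via predictive resampling (a plug-in Gaussian predictive stands in for the neural-network-defined predictive of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def martingale_posterior_mean(x, horizon=200, n_draws=100):
    """Predictive resampling: repeatedly imagine future observations from a
    one-step-ahead predictive that is updated after every imputed point,
    then record the statistic of the imputed population. The spread of the
    recorded values is the martingale-posterior uncertainty."""
    draws = []
    for _ in range(n_draws):
        seq = list(x)
        for _ in range(horizon):
            # Plug-in Gaussian predictive; the paper instead defines this
            # map implicitly with neural networks.
            seq.append(rng.normal(np.mean(seq), np.std(seq) + 1e-8))
        draws.append(np.mean(seq))  # statistic of the imputed population
    return np.array(draws)

post = martingale_posterior_mean(rng.normal(0.0, 1.0, size=20))
print(post.mean(), post.std())  # centre and epistemic spread
```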
no code implementations • 19 Apr 2023 • Giung Nam, Sunguk Jang, Juho Lee
Decoupling representation learning and classifier learning has been shown to be effective in classification with long-tailed data.
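A minimal sketch of the decoupled two-stage recipe (stand-in modules and toy data; in practice stage 2 draws class-balanced batches from the long-tailed dataset, e.g., via a weighted sampler):

```python
import torch
import torch.nn as nn

backbone, classifier = nn.Linear(32, 16), nn.Linear(16, 10)  # stand-ins

def toy_batches(n=20):  # placeholder for a real DataLoader
    return [(torch.randn(64, 32), torch.randint(0, 10, (64,))) for _ in range(n)]

def train(params, batches):
    opt = torch.optim.SGD(params, lr=0.1)
    for x, y in batches:
        loss = nn.functional.cross_entropy(classifier(backbone(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: learn representations with ordinary (instance-balanced) sampling.
train(list(backbone.parameters()) + list(classifier.parameters()), toy_batches())

# Stage 2: freeze the backbone and re-train only the classifier, in practice
# on class-balanced batches, so decision boundaries are not dominated by
# head classes.
for p in backbone.parameters():
    p.requires_grad_(False)
classifier.reset_parameters()
train(list(classifier.parameters()), toy_batches())
```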
1 code implementation • 30 Jun 2022 • Giung Nam, Hyungi Lee, Byeongho Heo, Juho Lee
Ensembles of deep neural networks have demonstrated superior performance, but their heavy computational cost hinders their application in resource-limited environments.
no code implementations • NeurIPS 2021 • Giung Nam, Jongmin Yoon, Yoonho Lee, Juho Lee
We propose a simple approach for reducing this gap, i.e., making the performance of the distilled model close to that of the full ensemble.
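For reference, the standard ensemble-distillation objective that sets up this gap (generic knowledge distillation, not the paper's specific proposal): the student is trained to match the ensemble's averaged predictive distribution.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits_list, T=2.0):
    """KL divergence from the averaged teacher distribution to the student,
    with the usual temperature scaling and T^2 gradient correction."""
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T

# Toy usage with random logits for a 4-member ensemble.
loss = distill_loss(torch.randn(8, 10), [torch.randn(8, 10) for _ in range(4)])
```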