1 code implementation • ICLR 2023 • Minjae Kim, Sangyoon Yu, Suhyun Kim, Soo-Mook Moon
Federated learning trains a global model without collecting clients' private local data.
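As a point of reference for how a global model can be trained without centralizing data, the standard FedAvg aggregation step combines locally trained parameters as a weighted average (this is a generic sketch of FedAvg, not the method proposed in the paper above):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation: weighted average of client parameters.

    client_weights: list of 1-D parameter vectors, one per client.
    client_sizes: number of local training samples per client,
                  used as the aggregation weights.
    """
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg

# Clients with more data pull the average toward their parameters.
global_w = fedavg([np.array([0.0]), np.array([2.0])], [1, 3])
```

The server never sees raw client data, only parameter vectors, which is the privacy property the abstract refers to.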
1 code implementation • NeurIPS 2020 • Woojeong Kim, Suhyun Kim, Mincheol Park, Geonseok Jeon
Network pruning is widely used to lighten and accelerate neural network models.
1 code implementation • 29 Jun 2023 • Yujin Kim, Dogyun Park, Dohee Kim, Suhyun Kim
We introduce NaturalInversion, a novel model inversion-based method that synthesizes images agreeing well with the original data distribution without using any real data.
1 code implementation • ICCV 2023 • Dogyun Park, Suhyun Kim
Assessing the fidelity and diversity of a generative model is a difficult but important problem for technological advancement.
1 code implementation • 29 Jun 2023 • Minsoo Kang, Suhyun Kim
Motivated by this, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.
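For context, GuidedMixup builds on the vanilla mixup baseline, which blends two training examples and their labels with a convex combination; the saliency-guided pairing and masking of the paper are not shown here (this is a sketch of standard mixup only):

```python
import numpy as np

def mixup(x1, x2, y1, y2, lam):
    """Vanilla mixup: convex combination of two inputs and their labels.

    lam in [0, 1] is typically drawn from a Beta distribution.
    GuidedMixup additionally steers the mixing toward salient regions,
    which this baseline does not do.
    """
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Equal mixing of two toy "images" and their one-hot labels.
xm, ym = mixup(np.array([0.0, 2.0]), np.array([2.0, 0.0]),
               np.array([1.0, 0.0]), np.array([0.0, 1.0]), lam=0.5)
```

Plain mixup can blur away the object of interest, which is exactly the failure mode a saliency-aware variant targets.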
no code implementations • 14 Jul 2020 • Mincheol Park, Woojeong Kim, Suhyun Kim
Even though norm-based filter pruning methods are widely accepted, it is questionable whether the "smaller-norm-less-important" criterion is optimal in determining filters to prune.
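The "smaller-norm-less-important" criterion questioned above is typically implemented by scoring each convolutional filter with its L1 norm and dropping the lowest-scoring ones; a minimal sketch of that standard baseline (not the paper's proposal) looks like:

```python
import numpy as np

def l1_filter_scores(conv_weight):
    """Score each output filter of a conv layer by its L1 norm.

    conv_weight has shape (out_channels, in_channels, kH, kW);
    one score per output filter.
    """
    return np.abs(conv_weight).sum(axis=(1, 2, 3))

def prune_smallest(conv_weight, keep):
    """Keep only the `keep` filters with the largest L1 norms."""
    scores = l1_filter_scores(conv_weight)
    keep_idx = np.sort(np.argsort(scores)[-keep:])  # preserve filter order
    return conv_weight[keep_idx]

# Three 1x1 filters with norms 1, 3, 2; keeping 2 drops the smallest.
w = np.array([[[[1.0]]], [[[3.0]]], [[[2.0]]]])
pruned = prune_smallest(w, keep=2)
```

The abstract's point is that a small norm does not necessarily mean a filter is unimportant, so this simple criterion can be suboptimal.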
no code implementations • 24 Jan 2024 • Minsoo Kang, Minkoo Kang, Suhyun Kim
Deep learning has made significant advances in computer vision, particularly in image classification tasks.
no code implementations • 27 Feb 2024 • Mincheol Park, DongJin Kim, Cheonjun Park, Yuna Park, Gyeong Eun Gong, Won Woo Ro, Suhyun Kim
Channel pruning is widely accepted to accelerate modern convolutional neural networks (CNNs).