Search Results for author: Suhyun Kim

Found 8 papers, 5 papers with code

REPrune: Filter Pruning via Representative Election

no code implementations • 14 Jul 2020 • Mincheol Park, Woojeong Kim, Suhyun Kim

Even though norm-based filter pruning methods are widely accepted, it is questionable whether the "smaller-norm-less-important" criterion is optimal in determining filters to prune.

Clustering
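
The snippet above questions the norm-based baseline. As a point of reference, a minimal sketch of that "smaller-norm-less-important" criterion (L1-norm filter ranking, with an assumed layer shape and keep ratio) could look like this; REPrune itself elects representative filters via clustering instead.

```python
import torch
import torch.nn as nn

# Baseline "smaller-norm-less-important" criterion: score each output filter
# of a conv layer by its L1 norm and keep only the largest-norm filters.
# Layer shape and keep ratio below are illustrative assumptions.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)

l1_norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per filter
num_keep = int(conv.out_channels * 0.5)                    # assumed 50% keep ratio
keep_idx = torch.argsort(l1_norms, descending=True)[:num_keep]

pruned_weight = conv.weight.detach()[keep_idx]             # surviving filters
print(pruned_weight.shape)                                  # torch.Size([64, 64, 3, 3])
```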

DepthFL: Depthwise Federated Learning for Heterogeneous Clients

1 code implementation • ICLR 2023 • Minjae Kim, Sangyoon Yu, Suhyun Kim, Soo-Mook Moon

Federated learning is for training a global model without collecting private local data from clients.

Federated Learning
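
For readers unfamiliar with the setting, a minimal FedAvg-style aggregation sketch shows what training a global model without collecting private local data means in code. This is the generic baseline, not DepthFL's depth-wise scheme for heterogeneous clients.

```python
import copy


def federated_average(global_model, client_models, client_sizes):
    """Generic FedAvg-style aggregation: clients train locally on private data
    and send only model weights; the server averages them, weighted by the
    number of local samples. Not DepthFL's depth-wise variant."""
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(global_model.state_dict())
    for key in avg_state:
        avg_state[key] = sum(
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(client_models, client_sizes)
        )
    global_model.load_state_dict(avg_state)
    return global_model
```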

NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency

1 code implementation • 29 Jun 2023 • Yujin Kim, Dogyun Park, Dohee Kim, Suhyun Kim

We introduce NaturalInversion, a novel model inversion-based method to synthesize images that agree well with the original data distribution without using real data.

Image Generation • Knowledge Distillation
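
A bare-bones model-inversion loop illustrates the general idea of synthesizing images from a frozen classifier without real data; NaturalInversion's actual pipeline is more involved, so treat this only as an assumed simplification.

```python
import torch
import torch.nn.functional as F


def invert_class(model, target_class, steps=200, lr=0.1, image_size=32):
    """Optimise a random input so a frozen classifier predicts `target_class`.
    A generic model-inversion sketch, not NaturalInversion itself."""
    model.eval()
    x = torch.randn(1, 3, image_size, image_size, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss = loss + 1e-4 * x.norm()   # mild image prior (assumed regulariser)
        loss.backward()
        opt.step()
    return x.detach()
```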

GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps

1 code implementation • 29 Jun 2023 • Minsoo Kang, Suhyun Kim

From this motivation, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.

Data Augmentation
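
To make "saliency-aware mixup" concrete, here is an illustrative pixel-wise mixing sketch that weights two images by precomputed saliency maps; the exact pairing and mixing rules of GuidedMixup are not reproduced here.

```python
import torch


def saliency_weighted_mix(x1, x2, s1, s2, eps=1e-8):
    """Mix two images of shape (C, H, W) with per-pixel weights derived from
    assumed saliency maps s1, s2 of shape (H, W). Illustrative only; not the
    exact GuidedMixup formulation."""
    w = s1 / (s1 + s2 + eps)                                  # per-pixel ratio in [0, 1]
    mixed = w.unsqueeze(0) * x1 + (1.0 - w).unsqueeze(0) * x2
    lam = w.mean()                                            # coefficient for mixing labels
    return mixed, lam
```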

Probabilistic Precision and Recall Towards Reliable Evaluation of Generative Models

1 code implementation • ICCV 2023 • Dogyun Park, Suhyun Kim

Assessing the fidelity and diversity of the generative model is a difficult but important issue for technological advancement.
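
For context, fidelity and diversity are commonly measured with k-NN manifold precision and recall (Kynkäänniemi et al., 2019), the binary baseline that a probabilistic formulation softens; a small NumPy sketch of that baseline, not of the paper's metric:

```python
import numpy as np


def knn_radii(feats, k=3):
    # Distance from each point to its k-th nearest neighbour within the same set.
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]     # column 0 is the point itself

def precision_recall(real, fake, k=3):
    """Classic k-NN manifold precision (fidelity) and recall (diversity) over
    feature embeddings; shown as background, not the probabilistic variant."""
    r_real, r_fake = knn_radii(real, k), knn_radii(fake, k)
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)  # (n_fake, n_real)
    precision = np.mean((d <= r_real[None, :]).any(axis=1))
    recall = np.mean((d.T <= r_fake[None, :]).any(axis=1))
    return precision, recall
```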

Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN

no code implementations • 24 Jan 2024 • Minsoo Kang, Minkoo Kang, Suhyun Kim

Deep learning has made significant advances in computer vision, particularly in image classification tasks.

Image Augmentation • Image Classification

REPrune: Channel Pruning via Kernel Representative Selection

no code implementations • 27 Feb 2024 • Mincheol Park, DongJin Kim, Cheonjun Park, Yuna Park, Gyeong Eun Gong, Won Woo Ro, Suhyun Kim

Channel pruning is widely accepted to accelerate modern convolutional neural networks (CNNs).
