Search Results for author: Suhyun Kang

Found 6 papers, 2 papers with code

Towards a Better Evaluation of Out-of-Domain Generalization

no code implementations • 30 May 2024 • Duhun Hwang, Suhyun Kang, Moonjung Eo, Jimyeong Kim, Wonjong Rhee

In the pursuit of this objective, the average measure has been the prevalent measure for evaluating models and comparing algorithms in existing DG studies.

Domain Generalization
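The contrast at issue can be made concrete with a small sketch. The helper names and per-domain numbers below are hypothetical, and the worst-case measure is shown only as a generic alternative, not necessarily the paper's proposal:

```python
import numpy as np

def average_measure(domain_acc):
    """Average accuracy over held-out domains: the prevalent DG
    evaluation measure referenced in the abstract."""
    return float(np.mean(list(domain_acc.values())))

def worst_case_measure(domain_acc):
    """Worst-domain accuracy, a robustness-oriented alternative
    (generic illustration, not necessarily the paper's measure)."""
    return float(min(domain_acc.values()))

# Hypothetical per-domain accuracies for two DG algorithms.
alg_a = {"photo": 0.95, "sketch": 0.50, "cartoon": 0.90}
alg_b = {"photo": 0.80, "sketch": 0.70, "cartoon": 0.75}

# The two measures rank the algorithms differently, which is the
# kind of evaluation question the paper examines.
print(average_measure(alg_a), worst_case_measure(alg_a))  # 0.783 0.50
print(average_measure(alg_b), worst_case_measure(alg_b))  # 0.750 0.70
```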

A Differentiable Framework for End-to-End Learning of Hybrid Structured Compression

no code implementations • 21 Sep 2023 • Moonjung Eo, Suhyun Kang, Wonjong Rhee

In this study, we develop a Differentiable Framework (DF) that can express filter selection, rank selection, and the budget constraint in a single analytical formulation.

Scheduling
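A minimal sketch of how selection decisions and a budget constraint can be cast as one differentiable objective. The sigmoid gating scheme and all names below are illustrative assumptions, not the paper's exact DF formulation:

```python
import torch

class HybridCompressionGates(torch.nn.Module):
    """Relaxed 0/1 keep/prune decisions for filters and ranks,
    so both selections become differentiable (assumed scheme)."""
    def __init__(self, num_filters, max_rank):
        super().__init__()
        self.filter_logits = torch.nn.Parameter(torch.zeros(num_filters))
        self.rank_logits = torch.nn.Parameter(torch.zeros(max_rank))

    def forward(self):
        filter_gates = torch.sigmoid(self.filter_logits)  # soft filter selection
        rank_gates = torch.sigmoid(self.rank_logits)      # soft rank selection
        return filter_gates, rank_gates

def budget_penalty(filter_gates, rank_gates, budget):
    # Differentiable surrogate for "expected kept parameters <= budget".
    usage = filter_gates.sum() * rank_gates.sum()
    return torch.relu(usage - budget) ** 2

gates = HybridCompressionGates(num_filters=64, max_rank=16)
f, r = gates()
loss = budget_penalty(f, r, budget=200.0)  # added to the task loss in training
loss.backward()
```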

Towards a Rigorous Analysis of Mutual Information in Contrastive Learning

no code implementations • 30 Aug 2023 • Kyungeun Lee, Jaeill Kim, Suhyun Kang, Wonjong Rhee

Contrastive learning has emerged as a cornerstone in recent achievements of unsupervised representation learning.

Contrastive Learning, Misconceptions +1
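For context on the title's subject: the standard link between contrastive learning and mutual information is the InfoNCE objective, whose value bounds the MI between two views. Below is a minimal sketch of that textbook connection, independent of the paper's specific findings:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i])."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise similarities
    labels = torch.arange(z1.size(0))    # positives on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = info_nce(z1, z2)
# Standard result: I(z1; z2) >= log(batch_size) - InfoNCE loss.
mi_lower_bound = torch.log(torch.tensor(128.0)) - loss
```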

VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution

1 code implementation • CVPR 2023 • Jaeill Kim, Suhyun Kang, Duhun Hwang, Jungwook Shin, Wonjong Rhee

Since the introduction of deep learning, a wide range of representation properties, such as decorrelation, whitening, disentanglement, rank, isotropy, and mutual information, has been studied to improve the quality of representation.

Disentanglement, Domain Generalization +7
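The title's VNE refers to von Neumann entropy. Here is a minimal sketch of that quantity for a batch of representations, assuming the standard definition (entropy of the eigenvalue distribution of the trace-normalized autocorrelation matrix); how it enters the training loss is not reproduced here:

```python
import torch

def von_neumann_entropy(z):
    """Von Neumann entropy of a representation batch z (N x d):
    Shannon entropy of the eigenvalues of the autocorrelation
    matrix, normalized to sum to one (assumed standard definition)."""
    z = torch.nn.functional.normalize(z, dim=1)
    cov = z.t() @ z / z.size(0)        # d x d autocorrelation matrix
    eig = torch.linalg.eigvalsh(cov).clamp(min=0)
    eig = eig / eig.sum()              # eigenvalues as a distribution
    return -(eig * torch.log(eig + 1e-12)).sum()

# Usable as a regularizer on the task loss, e.g. adding or subtracting
# alpha * von_neumann_entropy(z) to lower or raise the entropy.
z = torch.randn(256, 128)
print(von_neumann_entropy(z))
```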

Meta-Learning with a Geometry-Adaptive Preconditioner

1 code implementation • CVPR 2023 • Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, Wonjong Rhee

In this study, we propose Geometry-Adaptive Preconditioned gradient descent (GAP), which overcomes limitations of MAML: GAP can efficiently meta-learn a preconditioner that depends on task-specific parameters, and this preconditioner can be shown to be a Riemannian metric.

Few-Shot Image Classification, Few-Shot Learning
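A minimal sketch of the update form involved: preconditioned gradient descent in a MAML-style inner loop, with a preconditioner that depends on the task parameters. The toy preconditioner below is a hypothetical stand-in, not GAP's actual Riemannian metric:

```python
import torch

def preconditioned_inner_step(params, grads, precond_fn, lr=0.01):
    """One inner-loop adaptation step, theta <- theta - lr * P(theta) * grad,
    where precond_fn maps the current task parameters to a positive
    (here diagonal) preconditioner, making P task-dependent."""
    return [p - lr * precond_fn(p) * g for p, g in zip(params, grads)]

def toy_precond(p):
    # Positive diagonal built from the parameters themselves, so it is
    # positive definite and task-dependent (illustrative assumption).
    return 1.0 / (1.0 + p ** 2)

w = [torch.randn(5, requires_grad=True)]
loss = (w[0] ** 2).sum()
grads = torch.autograd.grad(loss, w)
w_adapted = preconditioned_inner_step(w, grads, toy_precond)
```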

A Highly Effective Low-Rank Compression of Deep Neural Networks with Modified Beam-Search and Modified Stable Rank

no code implementations • 30 Nov 2021 • Moonjung Eo, Suhyun Kang, Wonjong Rhee

The resulting BSR (Beam-search and Stable Rank) algorithm requires only a single hyperparameter to be tuned for the desired compression ratio.

Low-rank compression, Quantization
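Two ingredients named in the title can be sketched directly: the (unmodified) stable rank of a weight matrix and a truncated-SVD factorization at a chosen rank. The paper's modified variants and its beam search over per-layer ranks are not reproduced here:

```python
import numpy as np

def stable_rank(w):
    """Classical stable rank ||W||_F^2 / ||W||_2^2, the quantity the
    paper modifies to score per-layer rank allocations."""
    s = np.linalg.svd(w, compute_uv=False)
    return float((s ** 2).sum() / (s[0] ** 2))

def truncated_svd(w, rank):
    """Rank-r factorization W ~= (U sqrt(S)) (sqrt(S) V^T), the basic
    low-rank compression step applied per layer."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    root = np.sqrt(s[:rank])
    return u[:, :rank] * root, root[:, None] * vt[:rank]

w = np.random.randn(64, 64)
a, b = truncated_svd(w, rank=8)
print(stable_rank(w), np.linalg.norm(w - a @ b))  # redundancy, approx. error
```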
