no code implementations • 20 Dec 2024 • Hyunsoo Lee, Minsoo Kang, Bohyung Han
To this end, we derive the representation guidance as a combination of two objectives: maximizing similarity to the target prompt based on the CLIP score and minimizing the structural distance to the source latent variable.
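A minimal sketch of how such a combined guidance objective could look in PyTorch; the function name, the MSE structural term, and the weight `lam` are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def guidance_loss(image_emb, text_emb, latent, source_latent, lam=0.5):
    # CLIP-style alignment: higher cosine similarity between the image
    # embedding and the target prompt embedding is better, so it is
    # negated in the loss.
    clip_sim = F.cosine_similarity(image_emb, text_emb, dim=-1).mean()
    # Structural term: penalize drifting away from the source latent.
    struct_dist = F.mse_loss(latent, source_latent)
    return -clip_sim + lam * struct_dist
```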
no code implementations • 12 Sep 2024 • Junsung Lee, Minsoo Kang, Bohyung Han
Our approach revises the original noise prediction network of a pretrained diffusion model by introducing a noise correction term.
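A hedged sketch of what a noise-corrected prediction might look like, assuming a diffusers-style `UNet2DConditionModel`; the difference-of-predictions form of the correction term and the scale `gamma` are assumptions, not the paper's definition:

```python
import torch

@torch.no_grad()
def corrected_noise(unet, z_t, t, src_cond, tgt_cond, gamma=1.0):
    # Base prediction conditioned on the source prompt embedding.
    eps_src = unet(z_t, t, encoder_hidden_states=src_cond).sample
    # Prediction conditioned on the target prompt embedding.
    eps_tgt = unet(z_t, t, encoder_hidden_states=tgt_cond).sample
    # Assumed correction term: shift the base prediction toward the
    # target condition, scaled by gamma.
    return eps_src + gamma * (eps_tgt - eps_src)
```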
1 code implementation • 22 May 2024 • Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff Schneider, Eduard Hovy, Roger Grosse, Eric Xing
Large language models (LLMs) are trained on a vast amount of human-written data, but data providers often remain uncredited.
no code implementations • 24 Jan 2024 • Minsoo Kang, Minkoo Kang, Suhyun Kim
Deep learning has made significant advances in computer vision, particularly in image classification tasks.
1 code implementation • 29 Jun 2023 • Minsoo Kang, Suhyun Kim
Motivated by this, we propose GuidedMixup, a novel saliency-aware mixup method that aims to retain the salient regions of mixed images with low computational overhead.
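The core idea can be sketched as pixel-wise mixing weighted by saliency; the normalized-saliency weighting below is an illustration of that idea and omits GuidedMixup's image-pairing algorithm:

```python
import torch

def guided_mix(x1, x2, s1, s2, eps=1e-8):
    # s1, s2: per-pixel saliency maps of shape (B, 1, H, W).
    # Pixels where x1 is more salient keep more of x1, and vice versa,
    # so the salient regions of both inputs survive the mix.
    w = s1 / (s1 + s2 + eps)
    mixed = w * x1 + (1.0 - w) * x2
    # Label mixing ratio: average contribution of x1 per sample.
    lam = w.mean(dim=(1, 2, 3))
    return mixed, lam
```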
no code implementations • CVPR 2023 • Minsoo Kang, Doyup Lee, Jiseob Kim, Saehoon Kim, Bohyung Han
We propose a text-to-image generation algorithm based on deep neural networks when text captions for images are unavailable during training.
no code implementations • 28 Mar 2023 • Minsoo Kang, Hyewon Yoo, Eunhee Kang, Sehwan Ki, Hyong-Euk Lee, Bohyung Han
We propose an information-theoretic knowledge distillation approach for compressing generative adversarial networks. The method maximizes the mutual information between teacher and student networks via a variational optimization based on an energy-based model.
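As a rough illustration of optimizing a mutual-information bound with an energy model over (teacher, student) feature pairs, here is an InfoNCE-style contrastive sketch; this is a common stand-in for such bounds, not the paper's exact variational objective:

```python
import torch
import torch.nn.functional as F

def vmi_distill_loss(f_teacher, f_student, energy_net):
    # energy_net scores (teacher, student) feature pairs; positives are
    # matched pairs, negatives use teacher features shuffled in-batch.
    pos = energy_net(f_teacher, f_student)                # (B,)
    perm = torch.randperm(f_teacher.size(0))
    neg = energy_net(f_teacher[perm], f_student)          # (B,)
    # Contrastive lower bound on mutual information: the matched
    # pair should receive the higher score.
    logits = torch.stack([pos, neg], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```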
1 code implementation • CVPR 2022 • Minsoo Kang, Jaeyoo Park, Bohyung Han
We present a novel class incremental learning approach based on deep neural networks, which continually learns new tasks with limited memory for storing examples from previous tasks.
no code implementations • ICCV 2021 • Jaeyoo Park, Minsoo Kang, Bohyung Han
We tackle the catastrophic forgetting problem in the context of class-incremental learning for video recognition, which has not been explored actively despite the popularity of continual learning.
1 code implementation • ICML 2020 • Minsoo Kang, Bohyung Han
We propose a simple but effective data-driven channel pruning algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
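A generic sketch of differentiable soft channel masking, the kind of relaxation such pruning methods rely on; the sigmoid gate, temperature, and threshold here are standard assumptions rather than the paper's operation-aware formulation:

```python
import torch
import torch.nn as nn

class SoftChannelMask(nn.Module):
    """Learnable per-channel logits relaxed to (0, 1) with a sigmoid,
    so pruning decisions can be trained end to end by gradient descent."""
    def __init__(self, num_channels, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))
        self.temperature = temperature

    def forward(self, x):  # x: (B, C, H, W)
        gate = torch.sigmoid(self.logits / self.temperature)
        return x * gate.view(1, -1, 1, 1)

    def pruned(self, threshold=0.5):
        # Channels whose gate falls below the threshold are removed
        # after training.
        return torch.sigmoid(self.logits / self.temperature) < threshold
```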
no code implementations • 29 Nov 2019 • Minsoo Kang, Jonghwan Mun, Bohyung Han
We present a novel framework of knowledge distillation that is capable of learning powerful and efficient student models from ensemble teacher networks.
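A common way to distill from an ensemble of teachers is to average their softened predictions and match the student to the average; the temperature and probability-space averaging below are standard Hinton-style choices, not necessarily this paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_logits, teacher_logits_list, T=4.0):
    with torch.no_grad():
        # Soften each teacher and average in probability space.
        teacher_probs = torch.stack(
            [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
        ).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence with the usual T^2 factor to rescale gradients.
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * T * T
```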