1 code implementation • 16 Apr 2024 • Woomin Song, Seunghyuk Oh, Sangwoo Mo, Jaehyung Kim, Sukmin Yun, Jung-Woo Ha, Jinwoo Shin
Large language models (LLMs) have shown remarkable performance in various natural language processing tasks.
1 code implementation • CVPR 2023 • Sukmin Yun, Seong Hyeon Park, Paul Hongsuck Seo, Jinwoo Shin
In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of target semantic categories, without any task-specific images or annotations.
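The abstract defines the task but not the method. As a rough illustration of how category names alone could drive segmentation, the sketch below classifies each dense pixel embedding by its similarity to CLIP text embeddings of the target categories. This is a generic zero-shot vision-language baseline under assumed inputs (`pixel_features` in CLIP's joint space), not necessarily the method proposed in this paper.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

# The only task input: a set of target semantic categories (no images, no masks).
categories = ["road", "car", "person", "sky"]  # example categories, not from the paper

model, _ = clip.load("ViT-B/32")
tokens = clip.tokenize([f"a photo of a {c}" for c in categories])
with torch.no_grad():
    text_emb = model.encode_text(tokens)                      # (C, D)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)  # unit-normalize

def segment(pixel_features: torch.Tensor) -> torch.Tensor:
    # pixel_features: (H, W, D) dense visual embeddings assumed to live in the
    # same joint embedding space; each pixel gets the nearest category.
    feats = pixel_features / pixel_features.norm(dim=-1, keepdim=True)
    return (feats @ text_emb.T).argmax(dim=-1)                # (H, W) label map
```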
1 code implementation • 19 Jul 2022 • Sukmin Yun, Jaehyung Kim, Dongyoon Han, Hwanjun Song, Jung-Woo Ha, Jinwoo Shin
Understanding the temporal dynamics of video is an essential aspect of learning better video representations.
1 code implementation • ICML 2022 • Hwijoon Lim, Yechan Kim, Sukmin Yun, Jinwoo Shin, Dongsu Han
The teacher-student (TS) framework, which trains a (student) network with the aid of an auxiliary, superior (teacher) network, has become a popular training paradigm in many machine learning schemes since the seminal work on knowledge distillation (KD) for model compression and transfer learning.
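For readers unfamiliar with the TS paradigm, here is a minimal sketch of the standard KD objective (a softened cross-entropy against the teacher plus the usual hard-label loss). The temperature `T` and mixing weight `alpha` are illustrative values, not hyperparameters taken from this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic knowledge-distillation loss: a weighted sum of the KL term
    against the teacher's temperature-softened predictions and the
    hard-label cross-entropy."""
    # Soften both distributions with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 to keep gradient magnitudes comparable.
    distill = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```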
1 code implementation • CVPR 2022 • Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin
Despite its simplicity, we demonstrate that it can significantly improve the performance of existing SSL methods for various visual tasks, including object detection and semantic segmentation.
no code implementations • 29 Sep 2021 • Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin
This paper aims to improve their performance further by exploiting the architectural advantages of the underlying neural network, since current state-of-the-art visual pretext tasks for self-supervised learning are architecture-agnostic and thus do not enjoy this benefit.
1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin
Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels to learn better representations.
1 code implementation • CVPR 2020 • Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin
Deep neural networks with millions of parameters may suffer from poor generalization due to overfitting.
no code implementations • ICLR 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
For instance, on the CIFAR-10 dataset with 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.
1 code implementation • 31 Jan 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.