Search Results for author: Sukmin Yun

Found 10 papers, 8 papers with code

IFSeg: Image-free Semantic Segmentation via Vision-Language Model

1 code implementation CVPR 2023 Sukmin Yun, Seong Hyeon Park, Paul Hongsuck Seo, Jinwoo Shin

In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of target semantic categories, without any task-specific images or annotations.

Image Segmentation Language Modelling +3

TSPipe: Learn from Teacher Faster with Pipelines

1 code implementation ICML 2022 Hwijoon Lim, Yechan Kim, Sukmin Yun, Jinwoo Shin, Dongsu Han

The teacher-student (TS) framework, which trains a (student) network by utilizing an auxiliary superior (teacher) network, has been adopted as a popular training paradigm in many machine learning schemes since the seminal work on knowledge distillation (KD) for model compression and transfer learning.

Knowledge Distillation Self-Supervised Learning +1

Patch-level Representation Learning for Self-supervised Vision Transformers

1 code implementation CVPR 2022 Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

Despite its simplicity, we demonstrate that it can significantly improve the performance of existing SSL methods for various visual tasks, including object detection and semantic segmentation.

Instance Segmentation Object Detection +5

PASS: Patch-Aware Self-Supervision for Vision Transformer

no code implementations 29 Sep 2021 Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

This paper aims to further improve their performance by utilizing the architectural advantages of the underlying neural network, as the current state-of-the-art visual pretext tasks for self-supervised learning do not enjoy this benefit, i.e., they are architecture-agnostic.

Object Detection +3

OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data

1 code implementation 29 Jun 2021 Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin

Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels to learn better representations.

Contrastive Learning Representation Learning

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation 31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.
