Search Results for author: Junhyug Noh

Found 10 papers, 5 papers with code

Rethinking Class Activation Mapping for Weakly Supervised Object Localization

1 code implementation ECCV 2020 Wonho Bae, Junhyug Noh, Gunhee Kim

Weakly supervised object localization (WSOL) is the task of localizing an object in an image using only image-level labels.

Object · Weakly-Supervised Object Localization

Generalized Coverage for More Robust Low-Budget Active Learning

no code implementations 16 Jul 2024 Wonho Bae, Junhyug Noh, Danica J. Sutherland

The ProbCover method of Yehuda et al. is a well-motivated algorithm for active learning in low-budget regimes, which attempts to "cover" the data distribution with balls of a given radius at selected data points.

Active Learning · Image Classification
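The ball-coverage idea described for ProbCover above can be sketched as a greedy maximum-coverage loop: repeatedly select the point whose radius-ball covers the most still-uncovered points. This is an illustrative sketch, not the authors' code; the function name and the toy data are my own.

```python
import numpy as np

def greedy_ball_cover(X, radius, budget):
    """Greedy coverage selection in the spirit of ProbCover:
    pick, at each step, the point whose ball of the given radius
    covers the most still-uncovered points."""
    # within_r[i, j] is True when x_j lies inside the ball around x_i.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    within_r = dists <= radius

    covered = np.zeros(len(X), dtype=bool)
    selected = []
    for _ in range(budget):
        # Marginal gain: how many uncovered points each candidate ball adds.
        gains = (within_r & ~covered[None, :]).sum(axis=1)
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= within_r[best]
    return selected
```

On two well-separated clusters with a budget of two, the greedy loop picks one representative per cluster, since the second ball around the first cluster would add no uncovered points.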

Scalp Diagnostic System With Label-Free Segmentation and Training-Free Image Translation

1 code implementation 25 Jun 2024 Youngmin Kim, Saejin Kim, Hoyeon Moon, Youngjae Yu, Junhyug Noh

To address these issues, we propose ScalpVision, an AI-driven system for the holistic diagnosis of scalp diseases and alopecia.

Management

Object Discovery via Contrastive Learning for Weakly Supervised Object Detection

1 code implementation16 Aug 2022 Jinhwan Seo, Wonho Bae, Danica J. Sutherland, Junhyug Noh, Daijin Kim

Weakly Supervised Object Detection (WSOD) is a task that detects objects in an image using a model trained only on image-level annotations.

Contrastive Learning · Object +2

One Weird Trick to Improve Your Semi-Weakly Supervised Semantic Segmentation Model

no code implementations2 May 2022 Wonho Bae, Junhyug Noh, Milad Jalali Asadabadi, Danica J. Sutherland

Semi-weakly supervised semantic segmentation (SWSSS) aims to train a model to identify objects in images based on a small number of images with pixel-level labels, and many more images with only image-level labels.

Pseudo Label · Segmentation +2

What and When to Look?: Temporal Span Proposal Network for Video Relation Detection

1 code implementation15 Jul 2021 Sangmin Woo, Junhyug Noh, Kangil Kim

TSPN tells when to look: it simultaneously predicts start-end timestamps (i.e., temporal spans) and categories of all possible relations by utilizing the full video context.

Relation · Video Visual Relation Detection +1

Tackling the Challenges in Scene Graph Generation with Local-to-Global Interactions

1 code implementation16 Jun 2021 Sangmin Woo, Junhyug Noh, Kangil Kim

To quantify how much LOGIN is aware of relational direction, a new diagnostic task called Bidirectional Relationship Classification (BRC) is also proposed.

Bidirectional Relationship Classification · Graph Generation +4

Better to Follow, Follow to Be Better: Towards Precise Supervision of Feature Super-Resolution for Small Object Detection

no code implementations ICCV 2019 Junhyug Noh, Wonho Bae, Wonhee Lee, Jinhwan Seo, Gunhee Kim

Despite the recent success of proposal-based CNN models for object detection, it is still difficult to detect small objects due to the limited and distorted information that small regions of interest (RoIs) contain.

object-detection · Small Object Detection +1

Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors

no code implementations CVPR 2018 Junhyug Noh, Soochan Lee, Beomsu Kim, Gunhee Kim

We propose methods for addressing two critical issues in pedestrian detection: (i) occlusion of target objects, which causes false-negative failures, and (ii) confusion with hard negative examples like vertical structures, which causes false-positive failures.

Occlusion Handling · Pedestrian Detection
