Search Results for author: Jongmin Yu

Found 16 papers, 6 papers with code

Road Surface Defect Detection -- From Image-based to Non-image-based: A Survey

no code implementations6 Feb 2024 Jongmin Yu, Jiaqi Jiang, Sebastiano Fichera, Paolo Paoletti, Lisa Layzell, Devansh Mehta, Shan Luo

As a result, there has been a growing interest in the literature on the subject, leading to the development of various road surface defect detection methods.

Defect Detection

Multi-class Road Defect Detection and Segmentation using Spatial and Channel-wise Attention for Autonomous Road Repairing

no code implementations6 Feb 2024 Jongmin Yu, Chen Bene Chi, Sebastiano Fichera, Paolo Paoletti, Devansh Mehta, Shan Luo

To demonstrate the effectiveness of our framework, we conducted various ablation studies and comparisons with prior methods on a newly collected dataset annotated with nine road defect classes.

Defect Detection · Instance Segmentation +2

Adversarial Denoising Diffusion Model for Unsupervised Anomaly Detection

no code implementations7 Dec 2023 Jongmin Yu, Hyeontaek Oh, Jinhong Yang

With the addition of explicit adversarial learning on data samples, ADDM learns the semantic characteristics of the data more robustly during training, achieving similar data sampling performance with far fewer sampling steps than DDPM.

Denoising · Unsupervised Anomaly Detection

Active anomaly detection based on deep one-class classification

no code implementations18 Sep 2023 Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi

In an active learning framework, a model queries samples to be labeled by experts and re-trains the model with the labeled data samples.

Active Learning · One-Class Classification
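The query-and-retrain loop the snippet above describes is the generic active-learning cycle; a minimal sketch (all function and parameter names here are illustrative, not taken from the paper's code) might look like:

```python
def active_learning_loop(unlabeled, score_fn, label_fn, train_fn,
                         rounds=3, budget=2):
    """Generic active-learning loop: query the samples the current model
    finds most informative, have an expert label them, then retrain on
    all labeled data collected so far."""
    labeled = []
    pool = list(unlabeled)
    model = None
    for _ in range(rounds):
        # Rank the pool by informativeness under the current model.
        pool.sort(key=lambda x: score_fn(model, x), reverse=True)
        queries, pool = pool[:budget], pool[budget:]
        # The expert (label_fn) labels only the queried samples.
        labeled.extend((x, label_fn(x)) for x in queries)
        model = train_fn(labeled)
    return model, labeled
```

The paper's contribution concerns how the scoring is done for deep one-class models; this sketch only shows the surrounding loop structure.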

An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination

no code implementations18 Sep 2023 Minkyung Kim, Jongmin Yu, Junsik Kim, Tae-Hyun Oh, Jun Kyun Choi

Therefore, it has been a common practice to learn normality under the assumption that anomalous data are absent from the training dataset, which we call the normality assumption.

One-Class Classification

Unsupervised Deep One-Class Classification with Adaptive Threshold based on Training Dynamics

no code implementations13 Feb 2023 Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi

One-class classification has been a prevailing method in building deep anomaly detection models under the assumption that a dataset consisting of normal samples is available.

One-Class Classification · Outlier Detection
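The paper derives its threshold adaptively from training dynamics; for contrast, the simplest fixed alternative is to cut anomaly scores at a chosen quantile. A minimal baseline sketch (not the paper's method):

```python
import numpy as np

def flag_anomalies(scores, quantile=0.95):
    """Generic baseline: flag samples whose anomaly score exceeds the
    chosen quantile of all observed scores. Picking the quantile requires
    knowing the contamination rate, which is exactly what an adaptive
    threshold tries to avoid."""
    tau = np.quantile(scores, quantile)
    return scores > tau, tau
```

With `quantile=0.95`, the top 5% of samples by score are flagged as anomalous.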

Normality-Calibrated Autoencoder for Unsupervised Anomaly Detection on Data Contamination

1 code implementation28 Oct 2021 Jongmin Yu, Hyeontaek Oh, Minkyung Kim, Junsik Kim

In this paper, we propose Normality-Calibrated Autoencoder (NCAE), which can boost anomaly detection performance on the contaminated datasets without any prior information or explicit abnormal samples in the training phase.

Unsupervised Anomaly Detection
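NCAE's calibration mechanism is specific to the paper, but the reconstruction-error scoring that autoencoder-based detectors share can be sketched as follows, using a tiny PCA-style linear autoencoder as a stand-in for the learned model (all names here are illustrative):

```python
import numpy as np

def reconstruction_scores(x, encode, decode):
    """Anomaly score = per-sample reconstruction error; samples the
    autoencoder reconstructs poorly are treated as more anomalous."""
    recon = decode(encode(x))
    return np.mean((x - recon) ** 2, axis=1)

def fit_linear_ae(x, k):
    """Tiny PCA-style linear autoencoder: encode = project onto the top-k
    principal directions of the training data, decode = project back."""
    mu = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
    w = vt[:k]                         # (k, d) encoder weights
    encode = lambda z: (z - mu) @ w.T
    decode = lambda h: h @ w + mu
    return encode, decode
```

Data lying on the learned subspace scores near zero; points off it score high, which is the signal NCAE calibrates against contamination.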

Unsupervised Vehicle Re-Identification via Self-supervised Metric Learning using Feature Dictionary

1 code implementation3 Mar 2021 Jongmin Yu, Hyeontaek Oh

The results of DPLM are applied to dictionary-based triplet loss (DTL) to improve the discriminativeness of learnt features and to refine the quality of the results of DPLM progressively.

Domain Adaptation · Metric Learning +2

Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles

no code implementations3 Apr 2020 Younkwan Lee, Jihyo Jeon, Jongmin Yu, Moongu Jeon

Specifically, we present a lower bound for the mutual information constraint between the shared feature embedding and the input, which is designed to extract contextual information common across tasks while jointly preserving the essential information of each task.

Autonomous Vehicles · Multi-Task Learning +1

Unsupervised Pixel-level Road Defect Detection via Adversarial Image-to-Frequency Transform

1 code implementation30 Jan 2020 Jongmin Yu, Duyong Kim, Younkwan Lee, Moongu Jeon

To this end, we propose an unsupervised approach to detecting road defects, using Adversarial Image-to-Frequency Transform (AIFT).

Defect Detection

Driver Drowsiness Detection using Condition-Adaptive Representation Learning Framework

no code implementations22 Oct 2019 Jongmin Yu, Sangwoo Park, Sangwook Lee, Moongu Jeon

The proposed framework consists of four models: spatio-temporal representation learning, scene condition understanding, feature fusion, and drowsiness detection.

Representation Learning

Boosting Mapping Functionality of Neural Networks via Latent Feature Generation based on Reversible Learning

no code implementations21 Oct 2019 Jongmin Yu

This paper presents a method for boosting the mapping functionality of neural networks in visual recognition tasks such as image classification and face recognition.

Data Augmentation · Face Recognition +1

Boosting Network Weight Separability via Feed-Backward Reconstruction

no code implementations20 Oct 2019 Jongmin Yu, Hyeontaek Oh

To this end, we propose an evaluation metric for weight separability, based on the semi-orthogonality of a matrix and the Frobenius distance, together with a feed-backward reconstruction loss that explicitly encourages weight separability between the column vectors of the weight matrix.

Face Recognition · Image Classification
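The paper's exact metric is not reproduced here, but one plausible semi-orthogonality measure built from the Frobenius distance, as the snippet describes, is the distance of the weight Gram matrix from the identity:

```python
import numpy as np

def separability(w):
    """Frobenius distance of W^T W from the identity: 0 means the columns
    of W are orthonormal (W is semi-orthogonal); larger values mean the
    column vectors overlap more. A plausible reading of the snippet's
    metric, not the paper's verified definition."""
    k = w.shape[1]
    return np.linalg.norm(w.T @ w - np.eye(k), ord="fro")
```

Under this reading, the feed-backward reconstruction loss would drive `separability(W)` toward zero during training.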
