no code implementations • 6 Feb 2024 • Jongmin Yu, Jiaqi Jiang, Sebastiano Fichera, Paolo Paoletti, Lisa Layzell, Devansh Mehta, Shan Luo
As a result, there has been a growing interest in the literature on the subject, leading to the development of various road surface defect detection methods.
no code implementations • 6 Feb 2024 • Jongmin Yu, Chen Bene Chi, Sebastiano Fichera, Paolo Paoletti, Devansh Mehta, Shan Luo
To demonstrate the effectiveness of our framework, we conducted various ablation studies and comparisons with prior methods on a newly collected dataset annotated with nine road defect classes.
no code implementations • 7 Dec 2023 • Jongmin Yu, Hyeontaek Oh, Jinhong Yang
With the addition of explicit adversarial learning on data samples, ADDM learns the semantic characteristics of the data more robustly during training, achieving sampling performance comparable to DDPM's with far fewer sampling steps.
no code implementations • 18 Sep 2023 • Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi
In an active learning framework, the model queries samples to be labelled by experts and is then re-trained on the labelled samples.
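The generic query-and-retrain loop described above can be sketched as follows. The uncertainty-based acquisition score and the toy logistic "model" are illustrative assumptions for this sketch, not the paper's method.

```python
import math

def fit(labelled):
    # Illustrative "model": a fixed logistic scorer. A real system would
    # actually train on `labelled`; here we only need a scoring function.
    return lambda x: 1.0 / (1.0 + math.exp(-x))

def uncertainty(model, x):
    """Toy acquisition score: how close the model's score is to 0.5."""
    return -abs(model(x) - 0.5)  # higher = more uncertain

def active_learning_loop(unlabelled, oracle, rounds=3, batch=2):
    """Query the most uncertain samples, label them, and re-train."""
    labelled = []
    model = fit(labelled)  # initial model before any labels exist
    for _ in range(rounds):
        if not unlabelled:
            break
        # Rank the unlabelled pool by model uncertainty (descending).
        unlabelled.sort(key=lambda x: uncertainty(model, x), reverse=True)
        queried, unlabelled = unlabelled[:batch], unlabelled[batch:]
        labelled += [(x, oracle(x)) for x in queried]  # expert labels
        model = fit(labelled)  # re-train on all labelled data so far
    return model, labelled

# Toy 1-D problem: the expert labels x as 1 when x > 0.
model, labelled = active_learning_loop(list(range(-5, 6)), lambda x: int(x > 0))
```

Note that the loop queries points near the decision boundary (x close to 0) first, which is exactly the behaviour an uncertainty-based acquisition function is meant to produce.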
no code implementations • 18 Sep 2023 • Minkyung Kim, Jongmin Yu, Junsik Kim, Tae-Hyun Oh, Jun Kyun Choi
Therefore, it has been a common practice to learn normality under the assumption that anomalous data are absent from the training dataset, which we call the normality assumption.
no code implementations • 13 Feb 2023 • Minkyung Kim, Junsik Kim, Jongmin Yu, Jun Kyun Choi
One-class classification has been a prevailing method in building deep anomaly detection models under the assumption that a dataset consisting of normal samples is available.
1 code implementation • 28 Oct 2021 • Jongmin Yu, Hyeontaek Oh, Minkyung Kim, Junsik Kim
In this paper, we propose the Normality-Calibrated Autoencoder (NCAE), which can boost anomaly detection performance on contaminated datasets without any prior information or explicit abnormal samples in the training phase.
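NCAE's calibration mechanism is specific to the paper; as a baseline for the underlying idea of autoencoder-based anomaly detection, samples can be scored by reconstruction error. The linear (PCA-style) "autoencoder" below is an illustrative stand-in, not the paper's architecture.

```python
import numpy as np

def fit_linear_ae(X, k=1):
    """'Train' a linear autoencoder: keep the top-k principal directions."""
    mu = X.mean(axis=0)
    # SVD of the centred data gives the optimal linear encoder/decoder.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                      # encoder (k x d); the decoder is W.T
    return mu, W

def anomaly_score(x, mu, W):
    """Reconstruction error: large for samples off the normal subspace."""
    z = W @ (x - mu)                # encode
    x_hat = mu + W.T @ z            # decode
    return float(np.linalg.norm(x - x_hat))

rng = np.random.default_rng(0)
# "Normal" training data lies near the line y = x.
t = rng.normal(size=(200, 1))
X = np.hstack([t, t + 0.01 * rng.normal(size=(200, 1))])
mu, W = fit_linear_ae(X, k=1)
normal_s = anomaly_score(np.array([1.0, 1.0]), mu, W)    # on the manifold
anom_s = anomaly_score(np.array([1.0, -1.0]), mu, W)     # off the manifold
```

A sample consistent with the training distribution reconstructs almost perfectly, while an off-manifold sample incurs a large error, which is the signal a reconstruction-based detector thresholds.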
1 code implementation • 14 Sep 2021 • Jongmin Yu, Junsik Kim, Minkyung Kim, Hyeontaek Oh
However, this achievement requires large-scale and well-annotated datasets.
1 code implementation • 16 Jun 2021 • Jongmin Yu, Hyeontaek Oh
The proposed GSMLP and SMLC boost the performance of unsupervised person Re-ID without any pre-labelled dataset.
1 code implementation • 3 Mar 2021 • Jongmin Yu, Hyeontaek Oh
The results of DPLM are applied to dictionary-based triplet loss (DTL) to improve the discriminativeness of learnt features and to refine the quality of the results of DPLM progressively.
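DTL builds on the standard triplet objective; for reference, the vanilla triplet loss max(0, d(a, p) − d(a, n) + m) can be computed as below. The dictionary-based refinement itself is specific to the paper and is not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Vanilla triplet loss: push the anchor-positive distance below the
    anchor-negative distance by at least `margin` (Euclidean metric)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same identity: embedded close to the anchor
n = np.array([1.0, 0.0])   # different identity: embedded far away
loss_good = triplet_loss(a, p, n)   # margin satisfied -> zero loss
loss_bad = triplet_loss(a, n, p)    # roles swapped -> large loss
```

When the positive is already closer than the negative by the margin, the loss vanishes; otherwise its gradient pulls positives in and pushes negatives out, which is the discriminativeness the DTL objective refines.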
no code implementations • 3 Apr 2020 • Younkwan Lee, Jihyo Jeon, Jongmin Yu, Moongu Jeon
Specifically, we present a lower bound on the mutual information between the shared feature embedding and the input, which encourages the embedding to capture contextual information common across tasks while jointly preserving the essential information of each task.
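The paper's specific bound is its own; for reference, a widely used variational lower bound on mutual information, InfoNCE, is sketched below with toy embeddings (the dot-product critic and the data are illustrative assumptions).

```python
import numpy as np

def infonce_lower_bound(Z, X_emb):
    """InfoNCE estimate: a lower bound on I(Z; X) from paired samples,
    using dot-product critic scores against in-batch negatives."""
    scores = Z @ X_emb.T                          # (n, n) critic matrix
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    n = Z.shape[0]
    # Average log-probability of the true (diagonal) pairings, plus log n.
    return float(np.mean(np.diag(log_softmax)) + np.log(n))

rng = np.random.default_rng(0)
Z = rng.normal(size=(64, 8))
corr = infonce_lower_bound(Z, Z + 0.05 * rng.normal(size=(64, 8)))  # dependent pairs
indep = infonce_lower_bound(Z, rng.normal(size=(64, 8)))            # independent pairs
```

Strongly dependent pairs push the bound toward log n, while independent pairs keep it near zero, so maximising such a bound drives the shared embedding to retain information about its input.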
1 code implementation • 17 Mar 2020 • Jongmin Yu, Yongsang Yoon, Moongu Jeon
In this work, we propose a skeleton-based action recognition method that is robust to noise in the given skeleton features.
1 code implementation • 30 Jan 2020 • Jongmin Yu, Duyong Kim, Younkwan Lee, Moongu Jeon
To this end, we propose an unsupervised approach to detecting road defects, using the Adversarial Image-to-Frequency Transform (AIFT).
no code implementations • 22 Oct 2019 • Jongmin Yu, Sangwoo Park, Sangwook Lee, Moongu Jeon
The proposed framework consists of four models: spatio-temporal representation learning, scene condition understanding, feature fusion, and drowsiness detection.
no code implementations • 21 Oct 2019 • Jongmin Yu
This paper presents a method for boosting the mapping capability of neural networks in visual recognition tasks such as image classification and face recognition.
no code implementations • 20 Oct 2019 • Jongmin Yu, Hyeontaek Oh
To this end, we propose an evaluation metric for weight separability, based on the semi-orthogonality of a matrix and the Frobenius distance, together with a feed-backward reconstruction loss that explicitly encourages weight separability between the column vectors of the weight matrix.
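A direct reading of the described metric is the Frobenius distance of WᵀW from the identity, which is zero exactly when the columns of W are orthonormal (semi-orthogonality); the sketch below assumes that unnormalised form for illustration.

```python
import numpy as np

def separability(W):
    """Frobenius distance from semi-orthogonality: ||W^T W - I||_F.
    Zero iff the columns of W are orthonormal (maximally separated)."""
    k = W.shape[1]
    return float(np.linalg.norm(W.T @ W - np.eye(k), ord="fro"))

ortho = np.eye(4)[:, :2]              # orthonormal columns -> distance 0
collapsed = np.ones((4, 2)) / 2.0     # identical unit-norm columns
s_ortho = separability(ortho)
s_collapsed = separability(collapsed)
```

For the collapsed matrix, WᵀW is the all-ones 2×2 matrix, so the distance is ‖[[0,1],[1,0]]‖_F = √2; a loss penalising this quantity would push the column vectors apart, matching the role the feed-backward reconstruction loss plays in the text.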