no code implementations • 6 Dec 2023 • Seungju Cho, Hongsin Lee, Changick Kim
We observe that combining incremental learning with naive adversarial training easily leads to a loss of robustness.
no code implementations • 6 Dec 2023 • Hongsin Lee, Seungju Cho, Changick Kim
In contrast to these approaches, we aim to transfer another piece of knowledge from the teacher, the input gradient.
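The idea of transferring the teacher's input gradient can be sketched with toy linear models (illustrative only; the paper's actual distillation loss and architectures will differ). For a linear model f(x) = w·x, the input gradient ∂f/∂x is simply w, so an input-gradient matching loss reduces to pulling the student's weights toward the teacher's:

```python
import numpy as np

# Toy sketch of input-gradient distillation (hypothetical example,
# not the paper's method). For linear models f(x) = w @ x, the input
# gradient d f / d x equals w, so the gradient-matching loss
# ||grad_x f_s - grad_x f_t||^2 becomes ||student_w - teacher_w||^2.
teacher_w = np.array([1.0, 2.0, -1.0])
student_w = np.zeros(3)

lr = 0.1
for _ in range(200):
    # Gradient of the matching loss w.r.t. the student's weights.
    grad = 2.0 * (student_w - teacher_w)
    student_w -= lr * grad

print(student_w)  # converges toward teacher_w
```

With nonlinear networks the input gradient depends on x, so in practice it would be computed per sample via automatic differentiation rather than read off the weights.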
1 code implementation • CVPR 2023 • Junyoung Byun, Myung-Joon Kwon, Seungju Cho, Yoonji Kim, Changick Kim
Deep neural networks are widely known to be susceptible to adversarial examples, which can cause incorrect predictions through subtle input modifications.
1 code implementation • 7 Dec 2022 • Jinyoung Park, Minseok Son, Seungju Cho, Inyoung Lee, Changick Kim
This paper presents a solution to the Weather4cast 2022 Challenge Stage 2.
2 code implementations • CVPR 2022 • Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim
To tackle this limitation, we propose the object-based diverse input (ODI) method that draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class.
no code implementations • 15 Feb 2022 • Byeongjun Park, JeongSoo Kim, Seungju Cho, Heeseon Kim, Changick Kim
Here, we propose a unified framework and introduce two datasets for long-tailed camera-trap recognition.
no code implementations • 15 Feb 2021 • Mingu Kang, Trung Quang Tran, Seungju Cho, Daeyoung Kim
Adversarial attacks aim to fool a target classifier with imperceptible perturbations.
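The mechanism can be sketched with an FGSM-style attack on a toy linear classifier (a minimal illustration, not the attack studied in the paper): the input is nudged by a small step in the sign of the loss gradient, which is enough to flip the prediction.

```python
import numpy as np

# Minimal FGSM-style sketch on a hypothetical linear classifier
# (toy example for illustration). Prediction: +1 if w @ x > 0 else -1.
w = np.array([2.0, -1.0])   # fixed classifier weights
x = np.array([1.0, 1.0])    # clean input, true label y = +1
y = 1

def predict(w, x):
    return 1 if w @ x > 0 else -1

# Margin loss L = -y * (w @ x); its gradient w.r.t. the input is -y * w.
grad = -y * w

# FGSM step: move each coordinate by eps in the sign of the gradient.
eps = 0.5
x_adv = x + eps * np.sign(grad)

print(predict(w, x))      # 1  (clean input classified correctly)
print(predict(w, x_adv))  # -1 (the small perturbation flips the prediction)
```

The perturbation is bounded by eps in each coordinate, which is the sense in which such attacks can remain imperceptible for images while still changing the model's output.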
no code implementations • 28 Feb 2020 • Seungju Cho, Tae Joon Jun, Mingu Kang, Daeyoung Kim
However, it turns out that deep learning based models are highly vulnerable to small perturbations known as adversarial attacks.
no code implementations • 14 Aug 2019 • Seungju Cho, Tae Joon Jun, Byungsoo Oh, Daeyoung Kim
Nowadays, deep learning techniques show dramatic performance in the computer vision area, even outperforming humans.