Search Results for author: Seungju Cho

Found 9 papers, 3 papers with code

Class Incremental Learning for Adversarial Robustness

no code implementations · 6 Dec 2023 · Seungju Cho, Hongsin Lee, Changick Kim

We observe that combining incremental learning with naive adversarial training easily leads to a loss of robustness.

Tasks: Adversarial Robustness · class-incremental learning · +3
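The "naive adversarial training" referred to above is standard PGD-based adversarial training: train on worst-case perturbed inputs rather than clean ones. A minimal PyTorch sketch (the hyperparameters `eps`, `alpha`, and `steps` are illustrative defaults, not the paper's settings):

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD: repeatedly step along the sign of the input gradient,
    projecting back into the eps-ball around the clean input."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project
            x_adv = x_adv.clamp(0.0, 1.0)                     # valid pixels
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """Naive adversarial training: one update on adversarial examples only."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper's observation is that plugging a step like this into an incremental-learning loop is not enough on its own: robustness on earlier classes degrades as new classes arrive.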

Indirect Gradient Matching for Adversarial Robust Distillation

no code implementations · 6 Dec 2023 · Hongsin Lee, Seungju Cho, Changick Kim

In contrast to these approaches, we aim to transfer another piece of knowledge from the teacher, the input gradient.

Tasks: Adversarial Robustness · Data Augmentation
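A minimal PyTorch sketch of the idea of transferring the teacher's input gradient to the student: compute each model's gradient of the loss with respect to the input and penalize their mismatch. The MSE matching term and the weight `lam` here are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, x, y, lam=1.0):
    """One distillation step that matches input gradients.

    The teacher's input gradient is a fixed target; the student's input
    gradient is kept differentiable (create_graph=True) so the matching
    loss can backpropagate into the student's parameters.
    """
    # Teacher input gradient: first-order only, used as a detached target.
    xt = x.clone().detach().requires_grad_(True)
    t_grad = torch.autograd.grad(F.cross_entropy(teacher(xt), y), xt)[0]

    # Student input gradient: second-order graph retained for the loss.
    xs = x.clone().detach().requires_grad_(True)
    s_out = student(xs)
    s_grad = torch.autograd.grad(
        F.cross_entropy(s_out, y), xs, create_graph=True
    )[0]

    loss = F.cross_entropy(s_out, y) + lam * F.mse_loss(s_grad, t_grad.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Matching the input gradient pushes the student's local loss surface toward the teacher's, which is the quantity adversarial attacks exploit.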

Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup

1 code implementation · CVPR 2023 · Junyoung Byun, Myung-Joon Kwon, Seungju Cho, Yoonji Kim, Changick Kim

Deep neural networks are widely known to be susceptible to adversarial examples, which can cause incorrect predictions through subtle input modifications.

Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

2 code implementations · CVPR 2022 · Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim

To tackle this limitation, we propose the object-based diverse input (ODI) method that draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class.

Tasks: Face Verification · Image Augmentation · +1

Applying Tensor Decomposition to image for Robustness against Adversarial Attack

no code implementations · 28 Feb 2020 · Seungju Cho, Tae Joon Jun, Mingu Kang, Daeyoung Kim

However, it turns out that deep learning models are highly vulnerable to small, crafted perturbations known as adversarial attacks.

Tasks: Adversarial Attack · Tensor Decomposition
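One common way to use a decomposition as an input purifier is to reconstruct the image from only its dominant components, discarding the small-magnitude components where adversarial perturbations tend to live. The sketch below uses a per-channel truncated SVD as a stand-in; the paper may use a different tensor decomposition (e.g. CP or Tucker), and `rank` is an illustrative parameter:

```python
import numpy as np

def low_rank_denoise(image, rank=8):
    """Low-rank reconstruction of an HxWxC image in [0, 1].

    Each channel is rebuilt from its top-`rank` singular values; the
    discarded tail carries most of a small additive perturbation.
    """
    out = np.empty_like(image, dtype=float)
    for c in range(image.shape[2]):
        u, s, vt = np.linalg.svd(image[:, :, c], full_matrices=False)
        out[:, :, c] = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return np.clip(out, 0.0, 1.0)
```

The purified image would then be fed to the classifier in place of the raw (possibly attacked) input; choosing `rank` trades off perturbation removal against loss of fine image detail.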

DAPAS : Denoising Autoencoder to Prevent Adversarial attack in Semantic Segmentation

no code implementations · 14 Aug 2019 · Seungju Cho, Tae Joon Jun, Byungsoo Oh, Daeyoung Kim

Nowadays, deep learning techniques show dramatic performance in computer vision, sometimes even outperforming humans.

Tasks: Adversarial Attack · Denoising · +5
