1 code implementation • CVPR 2023 • Junyoung Byun, Myung-Joon Kwon, Seungju Cho, Yoonji Kim, Changick Kim
Deep neural networks are widely known to be susceptible to adversarial examples, which can cause incorrect predictions through subtle input modifications.
no code implementations • 2 Apr 2023 • Sangmin Woo, So-Yeong Jeon, Jinyoung Park, Minji Son, Sumin Lee, Changick Kim
We introduce Sketch-based Video Object Localization (SVOL), a new task aimed at localizing spatio-temporal object boxes in video queried by the input sketch.
no code implementations • 19 Feb 2023 • Youngjun Kwak, Minyoung Jung, Hunjae Yoo, JinHo Shin, Changick Kim
In this paper, we propose a liveness score-based regression network to overcome the dependency on third-party networks and users.
1 code implementation • 7 Dec 2022 • Jinyoung Park, Minseok Son, Seungju Cho, Inyoung Lee, Changick Kim
This paper presents a solution to the Weather4cast 2022 Challenge Stage 2.
1 code implementation • 25 Nov 2022 • Sangmin Woo, Sumin Lee, Yeonju Park, Muhammad Adi Nugroho, Changick Kim
We ask: how can we train a model that is robust to missing modalities?
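One common, generic way to build such robustness — not necessarily the method of this paper — is modality dropout during training: randomly zeroing out whole modalities so the fusion model cannot over-rely on any single one. A minimal sketch, where the modality names and feature shapes are illustrative assumptions:

```python
import random

def modality_dropout(features, p_drop=0.3, rng=random):
    """Randomly zero out entire modalities during training so a fusion
    model learns not to depend on any single one.
    `features` maps modality name -> feature vector (list of floats)."""
    out = {}
    for name, feat in features.items():
        if rng.random() < p_drop:
            out[name] = [0.0] * len(feat)  # simulate a missing modality
        else:
            out[name] = feat
    # guarantee at least one modality survives
    if all(all(v == 0.0 for v in f) for f in out.values()):
        keep = rng.choice(list(features))
        out[keep] = features[keep]
    return out
```

At test time, a model trained this way can simply be fed zeroed features for whichever modality is absent.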
no code implementations • 22 Nov 2022 • Minki Jeong, Changick Kim
The imbalanced data form a biased feature space, which deteriorates the performance of the recognition model.
Ranked #15 on Long-tail Learning on CIFAR-100-LT (ρ=10)
no code implementations • 15 Sep 2022 • Byeongjun Park, Hyojun Go, Changick Kim
Although recent methods generate high-quality novel views, synthesizing with only one explicit or implicit 3D geometry has a trade-off between two objectives that we call the "seesaw" problem: 1) preserving reprojected contents and 2) completing realistic out-of-view regions.
no code implementations • 31 Aug 2022 • JeongSoo Kim, Sangmin Woo, Byeongjun Park, Changick Kim
Camera traps, unmanned observation devices, and deep learning-based image recognition systems have greatly reduced human effort in collecting and analyzing wildlife images.
no code implementations • 24 Aug 2022 • Sumin Lee, Sangmin Woo, Yeonju Park, Muhammad Adi Nugroho, Changick Kim
In multi-modal action recognition, it is important to consider not only the complementary nature of different modalities but also global action content.
1 code implementation • CVPR 2022 • Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim
To tackle this limitation, we propose the object-based diverse input (ODI) method that draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class.
no code implementations • 15 Feb 2022 • Byeongjun Park, JeongSoo Kim, Seungju Cho, Heeseon Kim, Changick Kim
Here, we propose a unified framework and introduce two datasets for long-tailed camera-trap recognition.
1 code implementation • 25 Jan 2022 • Sangmin Woo, Jinyoung Park, Inyong Koo, Sumin Lee, Minki Jeong, Changick Kim
To our surprise, we found that the training schedule shows a divide-and-conquer-like pattern: time segments are first diversified regardless of the target, then coupled with each target, and finally fine-tuned to the target again.
no code implementations • 8 Nov 2021 • Byeongjun Park, Taekyung Kim, Hyojun Go, Changick Kim
In this paper, we propose residual guidance loss that enables the depth estimation network to embed the discriminative feature by transferring the discriminability of auto-encoded features.
no code implementations • 8 Nov 2021 • Junyoung Byun, Hyojun Go, Changick Kim
We apply the GADA strategy to two existing attack methods and show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.
no code implementations • 8 Sep 2021 • Sumin Lee, Hyunjun Eun, Jinyoung Moon, Seokeon Choi, Yoonhyung Kim, Chanho Jung, Changick Kim
To overcome this problem, we propose a novel recurrent unit, named Information Discrimination Unit (IDU), which explicitly discriminates the information relevancy between an ongoing action and others to decide whether to accumulate the input information.
1 code implementation • 30 Aug 2021 • Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, Changick Kim
It significantly outperforms traditional and deep neural network-based methods in detecting and localizing tampered regions.
no code implementations • ICCV 2021 • Dongki Jung, Jaehoon Choi, Yonghan Lee, Deokhwa Kim, Changick Kim, Dinesh Manocha, Donghwan Lee
We present a novel approach for estimating depth from a monocular camera as it moves through complex and crowded indoor environments, e.g., a department store or a metro station.
no code implementations • 25 May 2021 • Inyong Koo, Minki Jeong, Changick Kim
In this work, we propose a novel framework that generates class representations by extracting features from class-relevant regions of the images.
no code implementations • 1 Apr 2021 • Yoonhyung Kim, Changick Kim
In SSDA, a small number of labeled target images are given for training, and the effectiveness of these data has been demonstrated by previous studies.
1 code implementation • CVPR 2021 • Minki Jeong, Seokeon Choi, Changick Kim
Based on the transformation consistency, our method measures the difference between the transformed prototypes and a modified prototype set.
no code implementations • 13 Jan 2021 • Junyoung Byun, Hyojun Go, Changick Kim
In this paper, we pay attention to an implicit assumption of query-based black-box adversarial attacks that the target model's output exactly corresponds to the query input.
no code implementations • ICCV 2021 • Taekyung Kim, Jaehoon Choi, Seokeon Choi, Dongki Jung, Changick Kim
We generate the sparse ground truth of the DTU dataset for evaluation, and extensive experiments verify that our SGT-MVSNet outperforms state-of-the-art MVS methods in the sparse ground truth setting.
no code implementations • 1 Jan 2021 • Taekyung Kim, Changick Kim
We propose a photometric consistency loss, which directly enforces the geometrically consistent style texture across the view, and a stroke consistency loss, which matches the characteristics and directions of the brushstrokes by aligning the local patches of the corresponding pixels before minimizing feature deviation.
1 code implementation • 3 Dec 2020 • Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim
To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.
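As a rough illustration of the centroid-interchange idea (a sketch under assumed feature shapes, not the authors' implementation): each client computes class-wise feature centroids locally, and the server aggregates them with count weighting before sending them back:

```python
import numpy as np

def local_class_centroids(features, labels, num_classes):
    """Per-client class-wise feature centroids (zeros for absent classes)."""
    dim = features.shape[1]
    cents = np.zeros((num_classes, dim))
    counts = np.zeros(num_classes)
    for f, y in zip(features, labels):
        cents[y] += f
        counts[y] += 1
    nz = counts > 0
    cents[nz] /= counts[nz, None]  # mean feature per present class
    return cents, counts

def server_aggregate(client_cents, client_counts):
    """Count-weighted average of client centroids for each class."""
    total = np.sum(client_counts, axis=0)  # samples per class, all clients
    weighted = np.sum(
        [c * n[:, None] for c, n in zip(client_cents, client_counts)], axis=0
    )
    out = np.zeros_like(weighted)
    nz = total > 0
    out[nz] = weighted[nz] / total[nz, None]
    return out
```

Exchanging only these centroids (rather than raw data) is what lets the server align class-wise decision boundaries across clients.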
1 code implementation • CVPR 2021 • Seokeon Choi, Taekyung Kim, Minki Jeong, Hyoungseob Park, Changick Kim
To this end, we combine learnable batch-instance normalization layers with meta-learning and investigate the challenging cases caused by both batch and instance normalization layers.
1 code implementation • 6 Oct 2020 • Jaehoon Choi, Dongki Jung, Donghwan Lee, Changick Kim
In this paper, we propose SAFENet that is designed to leverage semantic information to overcome the limitations of the photometric loss.
no code implementations • 6 Oct 2020 • Dongki Jung, Seunghan Yang, Jaehoon Choi, Changick Kim
Style transfer is an image synthesis task that applies the style of one image to another while preserving the content.
2 code implementations • ECCV 2020 • Taekyung Kim, Changick Kim
Finally, the exploration scheme locally aligns features in a class-wise manner complementary to the attraction scheme by selectively aligning unlabeled target features complementary to the perturbation scheme.
no code implementations • 16 May 2020 • Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim
Although existing partial domain adaptation methods effectively down-weight outliers' importance, they neither consider the data structure of each domain nor directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions.
no code implementations • 9 Mar 2020 • Hyunjun Eun, Daeyeong Kim, Chanho Jung, Changick Kim
Note that, instead of manual categorization, which demands a heavy workload from radiologists, we propose to automatically categorize non-nodules using an autoencoder and k-means clustering.
1 code implementation • CVPR 2020 • Hyunjun Eun, Jinyoung Moon, Jongyoul Park, Chanho Jung, Changick Kim
For online action detection, in this paper, we propose a novel recurrent unit to explicitly discriminate the information relevant to an ongoing action from others.
Ranked #7 on Online Action Detection on TVSeries
1 code implementation • CVPR 2020 • Seokeon Choi, Sumin Lee, Youngeun Kim, Taekyung Kim, Changick Kim
To implement our approach, we introduce an ID-preserving person image generation network and a hierarchical feature learning module.
no code implementations • 26 Nov 2019 • Hyunjun Eun, Sumin Lee, Jinyoung Moon, Jongyoul Park, Chanho Jung, Changick Kim
Recent temporal action proposal generation approaches have suggested integrating segment-based and snippet score-based methodologies to produce proposals with high recall and accurate boundaries.
1 code implementation • 12 Oct 2019 • Seunghan Yang, Yoonhyung Kim, Youngeun Kim, Changick Kim
Most previous methods utilize the activation map corresponding to the highest activation source.
1 code implementation • 10 Oct 2019 • Junyoung Byun, Kyujin Shim, Changick Kim
Since insufficient bit-depth may produce annoying false contours or lose detailed visual appearance, bit-depth expansion (BDE) from low bit-depth (LBD) images to high bit-depth (HBD) images is becoming increasingly important.
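For context, a classic non-learned BDE baseline (against which learned methods are typically compared) is bit replication, which repeats the LBD bit pattern into the lower bits of the HBD word. A minimal per-pixel sketch, illustrative only and not the paper's method:

```python
def bit_replicate(pixel, low_bits=4, high_bits=8):
    """Bit-replication baseline for bit-depth expansion: the LBD bit
    pattern is repeated to fill the HBD word, so 0 maps to 0 and the
    LBD maximum maps exactly to the HBD maximum."""
    out = 0
    bits = 0
    while bits < high_bits:
        out = (out << low_bits) | pixel  # append another copy of the pattern
        bits += low_bits
    return out >> (bits - high_bits)  # trim any overshoot
```

For example, the 4-bit value `0b1000` expands to the 8-bit value `0b10001000`; learned BDE methods aim to beat this baseline by suppressing the false contours it leaves behind.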
no code implementations • 2 Oct 2019 • Youngeun Kim, Seunghyeon Kim, Taekyung Kim, Changick Kim
Note that each binary image consists of background and regions belonging to a class.
no code implementations • 29 Sep 2019 • Youngeun Kim, Seokeon Choi, Hankyeol Lee, Taekyung Kim, Changick Kim
In this paper, we introduce a self-supervised approach for video object segmentation without human-labeled data. Specifically, we present Robust Pixel-level Matching Networks (RPM-Net), a novel deep architecture that matches pixels between adjacent frames, using only color information from unlabeled videos for training.
no code implementations • 29 Sep 2019 • Youngeun Kim, Seokeon Choi, Taekyung Kim, Sumin Lee, Changick Kim
Since the cost of labeling increases dramatically as the number of cameras increases, it is difficult to apply the re-identification algorithm to a large camera network.
no code implementations • 25 Sep 2019 • Seungjun Jung, Junyoung Byun, Kyujin Shim, Changick Kim
Moreover, by modifying the VQA model's answer through the output of the NLI model, we show that VQA performance increases by 1.1% over the original model.
no code implementations • ICCV 2019 • Seunghyeon Kim, Jaehoon Choi, Taekyung Kim, Changick Kim
Experimental results show that our approach effectively improves the performance of one-stage object detection in the unsupervised domain adaptation setting.
no code implementations • ICCV 2019 • Jaehoon Choi, Taekyung Kim, Changick Kim
Unsupervised domain adaptation seeks to adapt the model trained on the source domain to the target domain.
1 code implementation • 28 Aug 2019 • Sumin Lee, Sungchan Oh, Chanho Jung, Changick Kim
To that end, in this paper, we propose a fashion landmark detection network with a global-local embedding module.
no code implementations • 1 Aug 2019 • Jaehoon Choi, Minki Jeong, Taekyung Kim, Changick Kim
To learn target discriminative representations, using pseudo-labels is a simple yet effective approach for unsupervised domain adaptation.
no code implementations • CVPR 2019 • Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, Changick Kim
We construct a structured domain adaptation framework for our learning paradigm and introduce a practical way of DD for implementation.
1 code implementation • 22 Apr 2019 • Seungjun Jung, Muhammad Abul Hasan, Changick Kim
In this paper, we propose a novel algorithm to rectify illumination of the digitized documents by eliminating shading artifacts.
1 code implementation • 17 Oct 2017 • Li Yi, Lin Shao, Manolis Savva, Haibin Huang, Yang Zhou, Qirui Wang, Benjamin Graham, Martin Engelcke, Roman Klokov, Victor Lempitsky, Yuan Gan, Pengyu Wang, Kun Liu, Fenggen Yu, Panpan Shui, Bingyang Hu, Yan Zhang, Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Minki Jeong, Jaehoon Choi, Changick Kim, Angom Geetchandra, Narasimha Murthy, Bhargava Ramu, Bharadwaj Manda, M. Ramanathan, Gautam Kumar, P Preetham, Siddharth Srivastava, Swati Bhugra, Brejesh lall, Christian Haene, Shubham Tulsiani, Jitendra Malik, Jared Lafer, Ramsey Jones, Siyuan Li, Jie Lu, Shi Jin, Jingyi Yu, Qi-Xing Huang, Evangelos Kalogerakis, Silvio Savarese, Pat Hanrahan, Thomas Funkhouser, Hao Su, Leonidas Guibas
We introduce a large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database.