no code implementations • 25 Nov 2022 • Heegon Jin, Jongwon Choi
Although transformer networks have recently been employed in various vision tasks with outstanding performance, extensive training data and a lengthy training time are required to train a model that forgoes an inductive bias.
no code implementations • 1 Oct 2022 • Bowen Yi, Romeo Ortega, Jongwon Choi, Kwanghee Nam
In a recent paper [18], the authors proposed the first solution to the problem of designing a globally exponentially stable (GES) flux observer for the interior permanent magnet synchronous motor.
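For context, such observers are built on the measured stator flux dynamics. A minimal statement, written here for the non-salient (surface-mount) case, since the interior-magnet model adds saliency terms not reproduced here:

```latex
% Stator flux dynamics: v is the stator voltage, i the current, R the resistance.
\dot{\lambda} = v - R\,i
% Algebraic constraint exploited by flux observers (non-salient case only;
% the interior-magnet motor replaces the single inductance L with L_d, L_q terms):
|\lambda - L\,i|^2 = \lambda_m^2
```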
no code implementations • 7 Feb 2022 • Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Jongwon Choi
For the experiments, we design new test scenarios that differ from the training settings in the GAN models, color manipulations, and object categories used.
no code implementations • 12 Nov 2021 • Yonghyun Jeong, Doyeon Kim, Pyounggeon Kim, Youngmin Ro, Jongwon Choi
Although the recent advancement in generative models brings diverse advantages to society, it can also be abused for malicious purposes, such as fraud, defamation, and fake news.
no code implementations • 8 Oct 2021 • JoonHyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi
Image-mixing augmentations (e.g., Mixup and CutMix), which typically involve mixing two images, have become the de facto training techniques for image classification.
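As a reference point for what these two augmentations do, here is a minimal PyTorch sketch of the standard Mixup and CutMix operations (tensor names and the Beta parameter `alpha` are illustrative, not taken from the paper):

```python
import numpy as np
import torch

def mixup(x, y, alpha=1.0):
    """Standard Mixup: convex combination of two images and their labels."""
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))            # pair each image with a shuffled partner
    mixed_x = lam * x + (1 - lam) * x[idx]     # pixel-wise blend
    return mixed_x, y, y[idx], lam             # loss = lam*CE(p, y) + (1-lam)*CE(p, y[idx])

def cutmix(x, y, alpha=1.0):
    """Standard CutMix: paste a random rectangular patch from a partner image."""
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    rh, rw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - rh // 2, 0, h), np.clip(cy + rh // 2, 0, h)
    x1, x2 = np.clip(cx - rw // 2, 0, w), np.clip(cx + rw // 2, 0, w)
    x[:, :, y1:y2, x1:x2] = x[idx, :, y1:y2, x1:x2]
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # re-weight labels by the actual patch area
    return x, y, y[idx], lam
```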
no code implementations • 6 Oct 2021 • Yonghyun Jeong, Doyeon Kim, Jaehyeon Lee, Minki Hong, Solbi Hwang, Jongwon Choi
When images are recaptured on display screens, various patterns that differ by screen, known as moiré patterns, can also be captured in the spoof images.
no code implementations • 16 Aug 2021 • Yonghyun Jeong, Doyeon Kim, Seungjai Min, Seongho Joe, Youngjune Gwon, Jongwon Choi
The advancement of numerous generative models has a two-fold effect: the simple and easy generation of realistic synthesized images, but also an increased risk of malicious abuse of those images.
no code implementations • CVPR 2021 • Jongwon Choi, Kwang Moo Yi, Ji-Hoon Kim, Jinho Choo, Byoungjip Kim, Jin-Yeop Chang, Youngjune Gwon, Hyung Jin Chang
We show that our method can be applied to classification tasks on multiple different datasets -- including one that is a real-world dataset with heavy data imbalance -- significantly outperforming the state of the art.
no code implementations • 30 May 2019 • Dae Ung Jo, ByeongJu Lee, Jongwon Choi, Haanju Yoo, Jin Young Choi
We formulate the cross-modal association in a Bayesian inference framework realized by a deep neural network with multiple variational auto-encoders and variational associators.
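A minimal sketch of the general idea, assuming one VAE per modality and a small associator network that maps one latent code to the other (all module names and dimensions are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ModalityVAE(nn.Module):
    """One variational auto-encoder per modality (illustrative sizes)."""
    def __init__(self, in_dim, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar

# A variational associator maps a latent code of modality A into modality B's
# latent space, so B can be reconstructed even when only A is observed.
associator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
```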
1 code implementation • 18 Jan 2019 • Youngmin Ro, Jongwon Choi, Dae Ung Jo, Byeongho Heo, Jongin Lim, Jin Young Choi
Our strategy alleviates the problem of gradient vanishing in low-level layers and robustly trains the low-level layers to fit the ReID dataset, thereby increasing the performance of ReID tasks.
1 code implementation • CVPR 2018 • Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi
We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers.
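For background, a basic single-channel correlation filter trained and applied in the Fourier domain (the classical MOSSE-style formulation; the regularization constant is illustrative, and the paper's context-aware extension is not shown):

```python
import numpy as np

def train_filter(patch, target, lam=1e-2):
    """Closed-form correlation filter in the Fourier domain (MOSSE-style)."""
    F = np.fft.fft2(patch)                            # image patch spectrum
    G = np.fft.fft2(target)                           # desired (Gaussian) response spectrum
    return (G * np.conj(F)) / (F * np.conj(F) + lam)  # ridge-regression solution

def respond(patch, H):
    """Correlate a new patch with the filter; the response peak gives the shift."""
    R = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    return np.unravel_index(np.argmax(R), R.shape)
```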
Ranked #14 on Visual Object Tracking on VOT2017/18
1 code implementation • CVPR 2017 • Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, Jin Young Choi
We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency.
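Schematically, the selection step amounts to evaluating only a top-rated subset of the filter pool each frame; a toy sketch (the scoring values, `k`, and the filter interface are hypothetical, not the paper's actual modules):

```python
import numpy as np

def track_frame(filters, frame, attention_scores, k=4):
    """Evaluate only the k filters rated highest by an attention module.

    `filters` is a list of callables returning (bbox, confidence);
    `attention_scores` holds the attention module's per-filter ratings.
    """
    chosen = np.argsort(attention_scores)[-k:]     # subset selection
    results = [filters[i](frame) for i in chosen]  # run only the chosen filters
    return max(results, key=lambda r: r[1])        # keep the most confident bbox
```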
1 code implementation • CVPR 2017 • Sangdoo Yun, Jongwon Choi, Youngjoon Yoo, Kimin Yun, Jin Young Choi
In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve light computation as well as satisfactory tracking accuracy in both location and scale.
no code implementations • CVPR 2016 • Jongwon Choi, Hyung Jin Chang, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi
In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units and trains multiple elementary trackers to modulate the distribution of attention according to various feature and kernel types.