1 code implementation • CVPR 2023 • Jeeseung Park, Jin-Woo Park, Jong-Seok Lee
First, we propose a novel feature extraction method suitable for the Vision Transformer backbone, called the masking with overlapped area (MOA) module.
Ranked #4 on Human-Object Interaction Detection on HICO-DET
no code implementations • 20 Oct 2022 • Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee
In recent years, a huge number of deep neural architectures have been developed for image classification.
no code implementations • 11 Oct 2022 • Juyeop Kim, Junha Park, Songkuk Kim, Jong-Seok Lee
In this paper, we focus on the phenomenon that Transformers show higher robustness against corruptions than CNNs, while not being overconfident (in fact, we find Transformers are actually underconfident).
no code implementations • 20 Aug 2022 • Gihyun Kim, Jong-Seok Lee
Second, by noting that Transformers and CNNs rely on different types of information in images, we formulate an attack framework, called Fourier attack, as a tool for implementing flexible attacks, where an image can be attacked in the spectral domain as well as in the spatial domain.
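As a minimal illustration of what attacking an image "in the spectral domain" can mean, the sketch below perturbs only a chosen frequency band of a grayscale image via the FFT. This is an assumption-laden toy, not the paper's Fourier attack: the band limits, noise model, and the function name `spectral_perturbation` are all invented here for illustration.

```python
import numpy as np

def spectral_perturbation(image, band=(0.25, 0.5), eps=0.05, seed=0):
    """Perturb a grayscale image in the Fourier domain (illustrative sketch).

    Adds random noise of relative magnitude `eps` only to frequencies whose
    normalized radius falls inside `band`, then transforms back to the
    spatial domain. `image` is a 2-D array.
    """
    rng = np.random.default_rng(seed)
    # Centered 2-D spectrum of the image
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    # Radial frequency coordinate, normalized to [0, 1]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2) / np.sqrt(2)
    mask = (radius >= band[0]) & (radius < band[1])
    # Noise scaled to the average spectral magnitude, applied only in the band
    noise = rng.standard_normal(spectrum.shape) * eps * np.abs(spectrum).mean()
    spectrum = spectrum + mask * noise
    # Back to the spatial domain; discard residual imaginary parts
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

The same mechanism lets an attacker target low or high frequencies separately, which is the kind of flexibility the spectral-domain view provides.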
no code implementations • 20 Aug 2022 • Hyeongnam Jang, Yeejin Lee, Jong-Seok Lee
We use the probability of being uncertain to define an intuitive metric of subjectivity.
no code implementations • 19 Aug 2022 • Junghyuk Lee, Jun-Hyuk Kim, Jong-Seok Lee
Our results indicate that the features from random networks can evaluate generative models well similarly to those from trained networks, and furthermore, the two types of features can be used together in a complementary way.
no code implementations • 15 Dec 2021 • Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, Jong-Seok Lee
Another observation enabling our defense method is that adversarial perturbations on videos are sensitive to temporal destruction.
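To make the idea of "temporal destruction" concrete, here is a hedged toy sketch: randomly replace frames of a clip with their predecessors, disrupting the exact frame ordering that a video perturbation was tuned to. The drop probability, fill-in strategy, and function name are assumptions for illustration, not the paper's defense.

```python
import numpy as np

def temporal_destruction(clip, drop_prob=0.2, seed=0):
    """Illustrative defense: disrupt the temporal structure of a video clip.

    `clip` has shape (frames, H, W, C). Each frame (after the first) is
    dropped with probability `drop_prob` and filled by repeating the previous
    frame, so perturbations sensitive to the original frame ordering are
    partially destroyed while content is largely preserved.
    """
    rng = np.random.default_rng(seed)
    out = clip.copy()
    for t in range(1, len(out)):
        if rng.random() < drop_prob:
            out[t] = out[t - 1]  # replace the dropped frame with its predecessor
    return out
```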
no code implementations • 9 Dec 2021 • Juyeop Kim, Jun-Ho Choi, Soobeom Jang, Jong-Seok Lee
While adversarial perturbation of images to attack deep image classification models poses serious security concerns in practice, this paper suggests a novel paradigm in which image perturbation can benefit classification performance, which we call amicable aid.
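The core intuition can be sketched in a few lines: instead of ascending the loss gradient with respect to the input (as an adversarial attack does), descend it, nudging the input so the correct label's probability rises. The toy linear-softmax classifier, step sizes, and names below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def amicable_aid(x, W, label, eps=0.5, steps=10):
    """Sketch of an 'amicable' perturbation on a toy linear classifier.

    `W` maps input `x` to class logits. We take signed gradient-descent
    steps on the cross-entropy loss w.r.t. `x` (the opposite direction of
    a sign-gradient attack), with total budget `eps` split over `steps`.
    """
    lr = eps / steps
    onehot = np.eye(W.shape[0])[label]
    x_aid = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_aid)
        grad = W.T @ (p - onehot)   # d(cross-entropy)/dx for softmax(W @ x)
        x_aid -= lr * np.sign(grad)  # descend the loss, not ascend it
    return x_aid
```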
1 code implementation • CVPR 2022 • Jun-Hyuk Kim, Byeongho Heo, Jong-Seok Lee
Recently, learned image compression methods have outperformed traditional hand-crafted ones including BPG.
no code implementations • 18 Jun 2021 • Kyulim Kim, JeongSoo Kim, Seungri Song, Jun-Ho Choi, Chulmin Joo, Jong-Seok Lee
We present experiments based on both simulation and a real hardware optical system, from which the feasibility of the proposed optical attack is demonstrated.
no code implementations • 30 Apr 2021 • Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee
Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated.
no code implementations • 30 Apr 2021 • Junghyuk Lee, Jong-Seok Lee
The Fréchet Inception distance (FID) is one of the most widely used metrics for evaluating GANs; it assumes that the features extracted by a trained Inception model for a set of images follow a normal distribution.
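Under that normality assumption, FID reduces to the closed-form Fréchet distance between two Gaussians, ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). A minimal sketch of this computation on pre-extracted feature arrays (the function name and array layout are assumptions; a real pipeline would first run images through Inception):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    Each input is an (n_samples, n_features) array, e.g. Inception features;
    FID assumes each set follows a multivariate normal distribution.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the product of the two covariances
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

Comparing a feature set with itself yields (numerically) zero, and the metric grows with any shift between the two distributions.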
1 code implementation • 1 Apr 2021 • Hojung Lee, Jong-Seok Lee
This paper proposes a novel knowledge distillation-based learning method to improve the classification performance of convolutional neural networks (CNNs) without a pre-trained teacher network, called exit-ensemble distillation.
1 code implementation • 3 Feb 2021 • Hojung Lee, Cho-Jui Hsieh, Jong-Seok Lee
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
no code implementations • 18 Jan 2021 • Seong-Eun Moon, Chun-Jui Chen, Cho-Jui Hsieh, Jane-Ling Wang, Jong-Seok Lee
Convolutional neural networks (CNNs) are widely used to recognize the user's state through electroencephalography (EEG) signals.
no code implementations • ICCV 2021 • Jaehui Hwang, Jun-Hyuk Kim, Jun-Ho Choi, Jong-Seok Lee
In this paper, we study the structural vulnerability of deep learning-based action recognition models against the adversarial attack using the one frame attack that adds an inconspicuous perturbation to only a single frame of a given video clip.
no code implementations • 25 Sep 2020 • Pengxu Wei, Hannan Lu, Radu Timofte, Liang Lin, WangMeng Zuo, Zhihong Pan, Baopu Li, Teng Xi, Yanwen Fan, Gang Zhang, Jingtuo Liu, Junyu Han, Errui Ding, Tangxin Xie, Liang Cao, Yan Zou, Yi Shen, Jialiang Zhang, Yu Jia, Kaihua Cheng, Chenhuan Wu, Yue Lin, Cen Liu, Yunbo Peng, Xueyi Zou, Zhipeng Luo, Yuehan Yao, Zhenyu Xu, Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Tongtong Zhao, Shanshan Zhao, Yoseob Han, Byung-Hoon Kim, JaeHyun Baek, HaoNing Wu, Dejia Xu, Bo Zhou, Wei Guan, Xiaobo Li, Chen Ye, Hao Li, Yukai Shi, Zhijing Yang, Xiaojun Yang, Haoyu Zhong, Xin Li, Xin Jin, Yaojun Wu, Yingxue Pang, Sen Liu, Zhi-Song Liu, Li-Wen Wang, Chu-Tak Li, Marie-Paule Cani, Wan-Chi Siu, Yuanbo Zhou, Rao Muhammad Umer, Christian Micheloni, Xiaofeng Cong, Rajat Gupta, Keon-Hee Ahn, Jun-Hyuk Kim, Jun-Ho Choi, Jong-Seok Lee, Feras Almasri, Thomas Vandamme, Olivier Debeir
This paper introduces the real image Super-Resolution (SR) challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2020.
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
no code implementations • 15 Jul 2020 • Kashmira Shinde, Jong-Seok Lee, Matthias Humt, Aydin Sezgin, Rudolph Triebel
This paper presents an end-to-end multi-modal learning approach for monocular Visual-Inertial Odometry (VIO), which is specifically designed to exploit sensor complementarity in the light of sensor degradation scenarios.
1 code implementation • 2 Jun 2020 • Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee
In addition, SRZoo provides platform-agnostic image reconstruction tools to obtain super-resolved images and evaluate the performance in place.
Image and Video Processing • Multimedia
no code implementations • 29 Apr 2020 • Jun-Ho Choi, Jong-Seok Lee
Human activity recognition using multiple sensors has been a challenging but promising task in recent decades.
1 code implementation • 28 May 2019 • Soobeom Jang, Seong-Eun Moon, Jong-Seok Lee
Electroencephalography (EEG) is a useful way to implicitly monitor the user's perceptual state during multimedia consumption.
2 code implementations • 19 Apr 2019 • Jun-Ho Choi, Jong-Seok Lee
Classification using multimodal data arises in many machine learning applications.
1 code implementation • ICCV 2019 • Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee
Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.
2 code implementations • 30 Nov 2018 • Jun-Ho Choi, Jun-Hyuk Kim, Manri Cheon, Jong-Seok Lee
Recently, several deep learning-based image super-resolution methods have been developed by stacking massive numbers of layers.
Ranked #25 on Image Super-Resolution on BSD100 - 4x upscaling
3 code implementations • 29 Nov 2018 • Jun-Hyuk Kim, Jun-Ho Choi, Manri Cheon, Jong-Seok Lee
Specifically, we propose a multi-path adaptive modulation block (MAMB), which is a lightweight yet effective residual block that adaptively modulates residual feature responses by fully exploiting their information via three paths.
Ranked #25 on Image Super-Resolution on Urban100 - 4x upscaling
1 code implementation • 13 Sep 2018 • Jun-Ho Choi, Jun-Hyuk Kim, Manri Cheon, Jong-Seok Lee
Recently, it has been shown that in super-resolution, there exists a tradeoff relationship between the quantitative and perceptual quality of super-resolved images, which correspond to the similarity to the ground-truth images and the naturalness, respectively.
Ranked #49 on Image Super-Resolution on BSD100 - 4x upscaling
1 code implementation • 13 Sep 2018 • Manri Cheon, Jun-Hyuk Kim, Jun-Ho Choi, Jong-Seok Lee
In this paper, we propose a deep generative adversarial network for super-resolution considering the trade-off between perception and distortion.
no code implementations • 12 Sep 2018 • Soobeom Jang, Seong-Eun Moon, Jong-Seok Lee
This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification.
no code implementations • 12 Sep 2018 • Seong-Eun Moon, Soobeom Jang, Jong-Seok Lee
Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services.
no code implementations • 11 Sep 2018 • Seong-Eun Moon, Soobeom Jang, Jong-Seok Lee
Evaluation of quality of experience (QoE) based on electroencephalography (EEG) has received great attention due to its capability of real-time QoE monitoring of users.
no code implementations • ICLR 2019 • Hojung Lee, Jong-Seok Lee
This paper proposes a novel approach to train deep neural networks by unlocking the layer-wise dependency of backpropagation training.