no code implementations • 19 May 2024 • Sangyeop Yeo, Yoojin Jang, Jaejun Yoo
In this paper, we address the challenge of compressing generative adversarial networks (GANs) for deployment in resource-constrained environments by proposing two novel methodologies: Distribution Matching for Efficient compression (DiME) and Network Interactive Compression via Knowledge Exchange and Learning (NICKEL).
1 code implementation • 1 Apr 2024 • Jaejung Seol, Seojun Kim, Jaejun Yoo
Visual layout plays a critical role in graphic design fields such as advertising, posters, and web UI design.
no code implementations • 21 Feb 2024 • Kihong Kim, Haneol Lee, JiHye Park, Seyeon Kim, Kwanghee Lee, Seungryong Kim, Jaejun Yoo
Generating high-quality videos with the desired realistic content is a challenging task due to the intricate high dimensionality and complexity of video data.
1 code implementation • 30 Jan 2024 • Pum Jun Kim, Seojun Kim, Jaejun Yoo
To the best of our knowledge, STREAM is the first evaluation metric that can separately assess the temporal and spatial aspects of videos.
no code implementations • 29 Jan 2024 • Jeongho Min, Yejun Lee, Dongyoung Kim, Jaejun Yoo
To the best of our knowledge, we are the first to explore Domain Matching-based RefSR in remote sensing image processing.
no code implementations • 5 Sep 2023 • Dongyeun Lee, Chaewon Kim, Sangjoon Yu, Jaejun Yoo, Gyeong-Moon Park
One of the most challenging problems in audio-driven talking head generation is achieving high-fidelity detail while ensuring precise synchronization.
1 code implementation • NeurIPS 2023 • Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo
To the best of our knowledge, this is the first evaluation metric that focuses on robust estimation of the support and offers statistical consistency guarantees under noise.
no code implementations • 28 May 2023 • Simo Ryu, Seunghyun Seo, Jaejun Yoo
In this paper, we present an efficient method for storing fine-tuned models by leveraging the low-rank properties of weight residuals.
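The core operation is easy to sketch: store the pre-trained weights once and, per layer, keep only a truncated SVD of the fine-tuning residual. A minimal sketch (the rank and layer shapes below are invented, not the paper's settings):

```python
# A minimal sketch (not the paper's exact procedure): store the pre-trained
# weights once and only a truncated SVD of each layer's fine-tuning residual.
import torch

def compress_residual(w_pre: torch.Tensor, w_ft: torch.Tensor, rank: int):
    """Keep the top-`rank` singular components of the weight residual."""
    u, s, vh = torch.linalg.svd(w_ft - w_pre, full_matrices=False)
    return u[:, :rank], s[:rank], vh[:rank, :]   # all that needs storing

def restore(w_pre, u_k, s_k, vh_k):
    """Approximately reconstruct the fine-tuned weights."""
    return w_pre + (u_k * s_k) @ vh_k

# Toy check: a 1024x1024 layer whose fine-tuning drift is exactly rank-16.
w0 = torch.randn(1024, 1024)
w1 = w0 + 0.01 * torch.randn(1024, 16) @ torch.randn(16, 1024)
u_k, s_k, vh_k = compress_residual(w0, w1, rank=16)
rel_err = torch.norm(restore(w0, u_k, s_k, vh_k) - w1) / torch.norm(w1)
print(f"relative error: {rel_err:.2e}")  # ~0 here; real residuals are approximate
```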
1 code implementation • CVPR 2023 • Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim
This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model.
no code implementations • 16 Dec 2022 • Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo
To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm for finding them stably.
1 code implementation • CVPR 2023 • JiHye Park, Sunwoo Kim, Soohyun Kim, Seokju Cho, Jaejun Yoo, Youngjung Uh, Seungryong Kim
Existing techniques for image-to-image translation commonly suffer from two critical problems: heavy reliance on per-sample domain annotation and/or an inability to handle multiple attributes per image.
1 code implementation • ICCV 2021 • Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim
To this end, we propose a truly unsupervised image-to-image translation model (TUNIT) that simultaneously learns to separate image domains and translate input images into the estimated domains.
5 code implementations • 5 May 2020 • Andreas Lugmayr, Martin Danelljan, Radu Timofte, Namhyuk Ahn, Dongwoon Bai, Jie Cai, Yun Cao, Junyang Chen, Kaihua Cheng, SeYoung Chun, Wei Deng, Mostafa El-Khamy, Chiu Man Ho, Xiaozhong Ji, Amin Kheradmand, Gwantae Kim, Hanseok Ko, Kanghyu Lee, Jungwon Lee, Hao Li, Ziluan Liu, Zhi-Song Liu, Shuai Liu, Yunhua Lu, Zibo Meng, Pablo Navarrete Michelini, Christian Micheloni, Kalpesh Prajapati, Haoyu Ren, Yong Hyeok Seo, Wan-Chi Siu, Kyung-Ah Sohn, Ying Tai, Rao Muhammad Umer, Shuangquan Wang, Huibing Wang, Timothy Haoning Wu, Hao-Ning Wu, Biao Yang, Fuzhi Yang, Jaejun Yoo, Tongtong Zhao, Yuanbo Zhou, Haijie Zhuo, Ziyao Zong, Xueyi Zou
This paper reviews the NTIRE 2020 challenge on real world super-resolution.
no code implementations • 23 Apr 2020 • Namhyuk Ahn, Jaejun Yoo, Kyung-Ah Sohn
In this paper, we tackle a fully unsupervised super-resolution problem, i.e., one with neither paired images nor ground-truth HR images.
2 code implementations • CVPR 2020 • Jaejun Yoo, Namhyuk Ahn, Kyung-Ah Sohn
The key intuition of CutBlur is to enable a model to learn not only "how" but also "where" to super-resolve an image.
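CutBlur itself is a few lines of tensor indexing: cut-and-paste a patch between the HR image and its bicubically upsampled LR counterpart, so that some regions of the input need super-resolving and others do not. A hedged sketch (patch-size sampling simplified relative to the paper):

```python
# A minimal sketch of the CutBlur augmentation; the fixed patch ratio here is
# for illustration only (the paper samples it randomly).
import torch

def cutblur(lr_up: torch.Tensor, hr: torch.Tensor, prob: float = 0.5):
    """lr_up: the LR image bicubically upsampled to HR size; both (B, C, H, W)."""
    if torch.rand(1).item() > prob:
        return lr_up
    _, _, H, W = hr.shape
    ch, cw = H // 2, W // 2
    cy = torch.randint(0, H - ch + 1, (1,)).item()
    cx = torch.randint(0, W - cw + 1, (1,)).item()
    if torch.rand(1).item() < 0.5:
        # paste an HR patch into the LR input ...
        out = lr_up.clone()
        out[..., cy:cy+ch, cx:cx+cw] = hr[..., cy:cy+ch, cx:cx+cw]
    else:
        # ... or paste an LR patch into an HR copy
        out = hr.clone()
        out[..., cy:cy+ch, cx:cx+cw] = lr_up[..., cy:cy+ch, cx:cx+cw]
    return out  # the training target stays the original HR image

lr_up, hr = torch.rand(8, 3, 48, 48), torch.rand(8, 3, 48, 48)
augmented_input = cutblur(lr_up, hr)
```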
3 code implementations • ICML 2020 • Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, Jaejun Yoo
In this paper, we show that even the latest versions of the precision and recall metrics are not yet reliable.
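The alternatives this work proposes, density and coverage, replace the fragile nearest-neighbor manifold estimate with per-sample k-NN balls around real points. A compact NumPy sketch under their standard definitions (feature extraction and hyper-parameter choices omitted):

```python
# Density: how many real-sample k-NN balls each fake sample falls into,
# normalized by k. Coverage: fraction of real balls containing any fake sample.
import numpy as np
from scipy.spatial.distance import cdist

def density_coverage(real: np.ndarray, fake: np.ndarray, k: int = 5):
    d_rr = cdist(real, real)                      # pairwise real-real distances
    radii = np.sort(d_rr, axis=1)[:, k]           # k-NN radius of each real point
    inside = cdist(real, fake) < radii[:, None]   # fake j inside ball of real i
    density = inside.sum() / (k * fake.shape[0])  # ~1 when distributions match
    coverage = inside.any(axis=1).mean()          # fraction of real balls hit
    return density, coverage

real = np.random.randn(1000, 64)
fake = np.random.randn(1000, 64)
print(density_coverage(real, fake))
```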
14 code implementations • CVPR 2020 • Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha
A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains.
no code implementations • 15 Oct 2019 • YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo
We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner.
1 code implementation • 3 Oct 2019 • Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser
The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space.
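A schematic PyTorch sketch of those three ingredients; the layer sizes and the unit-circle manifold below are illustrative placeholders, not the paper's configuration. In the actual method, training compares simulated k-space measurements of the generated frames against the acquired data.

```python
import math
import torch
import torch.nn as nn

class DynamicMRIGenerator(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.mapper = nn.Sequential(              # (2) manifold -> latent space
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(             # (3) latent -> image frame
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),  # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1))               # 64x64

    def forward(self, t: torch.Tensor):
        # (1) fixed low-dimensional manifold: periodic motion as a unit circle
        manifold = torch.stack([torch.cos(2 * math.pi * t),
                                torch.sin(2 * math.pi * t)], dim=-1)
        return self.decoder(self.mapper(manifold))

frames = DynamicMRIGenerator()(torch.linspace(0, 1, 16))
print(frames.shape)  # torch.Size([16, 1, 64, 64])
```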
no code implementations • ICLR 2019 • Jisung Hwang, Younghoon Kim, Sanghyuk Chun, Jaejun Yoo, Ji-Hoon Kim, Dongyoon Han, Jung-Woo Ha
The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field.
4 code implementations • ICCV 2019 • Jaejun Yoo, Youngjung Uh, Sanghyuk Chun, Byeongkyu Kang, Jung-Woo Ha
The key ingredient of our method is wavelet transforms that naturally fit into deep networks.
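Concretely, max pooling is swapped for Haar wavelet pooling, whose four subbands keep all of the information and can be inverted exactly on the decoder side. A minimal sketch of the pooling step (the kernels implement the standard orthonormal 2-D Haar transform; the surrounding encoder-decoder is omitted):

```python
import torch
import torch.nn.functional as F

def haar_pool(x: torch.Tensor):
    """Split (B, C, H, W) into four half-resolution subbands LL, LH, HL, HH."""
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[-0.5, -0.5], [0.5, 0.5]])
    hl = torch.tensor([[-0.5, 0.5], [-0.5, 0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1)        # (4, 1, 2, 2)
    C = x.shape[1]
    out = F.conv2d(x, k.repeat(C, 1, 1, 1), stride=2, groups=C)
    return out.view(x.shape[0], C, 4, *out.shape[-2:]).unbind(2)

ll, lh, hl, hh = haar_pool(torch.randn(1, 3, 64, 64))
print(ll.shape)  # torch.Size([1, 3, 32, 32])
```

Because the transform is orthonormal, the matching unpooling reconstructs its input exactly, which is what lets the decoder preserve photorealistic detail.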
1 code implementation • ICLR 2019 • Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha
Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has recently been proposed for task-oriented dialog systems.
1 code implementation • 21 Dec 2018 • Jang-Hyun Kim, Jaejun Yoo, Sanghyuk Chun, Adrian Kim, Jung-Woo Ha
We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve performance on the speech enhancement task.
Audio and Speech Processing • Sound
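The trade-off referred to above is the classic STFT window trade-off: a longer analysis window sharpens frequency resolution at the cost of temporal resolution, and vice versa. A small self-contained illustration (not the paper's model):

```python
import torch

wave = torch.randn(16000)  # 1 s of mono audio at 16 kHz
for n_fft in (256, 1024, 4096):
    spec = torch.stft(wave, n_fft=n_fft, hop_length=n_fft // 4,
                      window=torch.hann_window(n_fft), return_complex=True)
    bins, frames = spec.shape
    print(f"n_fft={n_fft:4d}: {bins:4d} frequency bins x {frames:3d} time frames")
```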
no code implementations • 2 Apr 2018 • Dongwook Lee, Jaejun Yoo, Sungho Tak, Jong Chul Ye
The proposed deep residual learning networks are composed of magnitude and phase networks that are separately trained.
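A bare-bones sketch of that magnitude/phase split; the two tiny CNNs below are placeholders for the paper's residual networks:

```python
import torch
import torch.nn as nn

mag_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1))
phase_net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))

img = torch.randn(1, 1, 64, 64, dtype=torch.cfloat)       # complex MR image
mag, phase = img.abs(), img.angle()                       # split the two parts
recon = mag_net(mag) * torch.exp(1j * phase_net(phase))   # recombine as complex
print(recon.shape, recon.dtype)
```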
no code implementations • 27 Feb 2018 • Jaejun Yoo, Abdul Wahab, Jong Chul Ye
This paper concerns an inverse elastic source problem with sparse measurements.
no code implementations • 4 Dec 2017 • Jaejun Yoo, Sohail Sabir, Duchang Heo, Kee Hyun Kim, Abdul Wahab, Yoonseok Choi, Seul-I Lee, Eun Young Chae, Hak Hee Kim, Young Min Bae, Young-wook Choi, Seungryong Cho, Jong Chul Ye
Diffuse optical tomography (DOT) has been investigated as an alternative imaging modality for breast cancer detection thanks to the excellent contrast it derives from hemoglobin oxidization levels.
1 code implementation • 31 Jul 2017 • Eunhee Kang, Jaejun Yoo, Jong Chul Ye
To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge.
1 code implementation • 3 Mar 2017 • Yo Seob Han, Jaejun Yoo, Jong Chul Ye
Given the limited available data, we propose a domain adaptation scheme that employs a network pre-trained on a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets.
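The recipe itself is ordinary transfer learning: pre-train on the plentiful modality, then adapt on the few radial MR scans, optionally with early layers frozen. A generic sketch with a placeholder model, a hypothetical checkpoint path, and synthetic stand-ins for the scans:

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the artifact-removal CNN; not the paper's model.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
# In practice, CT-pretrained weights would be loaded here, e.g.:
# model.load_state_dict(torch.load("ct_pretrained.pt"))  # hypothetical path

# Freeze the earliest layer and adapt the rest on a handful of radial MR pairs.
for p in model[0].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

few_mr_pairs = [(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
                for _ in range(4)]                    # stand-in for real scans
for aliased, clean in few_mr_pairs:
    loss = nn.functional.mse_loss(model(aliased), clean)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```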
no code implementations • 3 Mar 2017 • Dongwook Lee, Jaejun Yoo, Jong Chul Ye
Furthermore, the computation time is an order of magnitude faster.
1 code implementation • 19 Nov 2016 • Woong Bae, Jaejun Yoo, Jong Chul Ye
To address this issue, here we propose a novel feature-space deep residual learning algorithm that outperforms existing residual learning approaches.
Ranked #6 on Color Image Denoising on CBSD68 (sigma = 50)
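Feature-space residual learning can be sketched in a few lines: transform the noisy image into orthonormal Haar subbands, let a CNN predict the noise residual in that feature space, subtract, and invert the transform exactly. The tiny CNN below is a placeholder, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Orthonormal 2-D Haar kernels: analysis via conv2d, exact synthesis via adjoint.
_HAAR = 0.5 * torch.tensor([[[ 1.,  1.], [ 1.,  1.]],
                            [[-1., -1.], [ 1.,  1.]],
                            [[-1.,  1.], [-1.,  1.]],
                            [[ 1., -1.], [-1.,  1.]]]).unsqueeze(1)  # (4, 1, 2, 2)

def dwt(x):   # (B, 1, H, W) -> (B, 4, H/2, W/2)
    return F.conv2d(x, _HAAR, stride=2)

def idwt(y):  # exact inverse, since the transform is orthonormal
    return F.conv_transpose2d(y, _HAAR, stride=2)

class WaveletResidualDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 4, 3, padding=1))

    def forward(self, noisy):
        feat = dwt(noisy)
        return idwt(feat - self.body(feat))  # CNN predicts the noise residual

print(WaveletResidualDenoiser()(torch.randn(1, 1, 64, 64)).shape)
```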
no code implementations • 19 Nov 2016 • Yo Seob Han, Jaejun Yoo, Jong Chul Ye
Recently, compressed sensing (CS) computed tomography (CT) using sparse projection views has been extensively investigated to reduce the potential radiation risk to patients.