Search Results for author: Jaejun Yoo

Found 31 papers, 17 papers with code

Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation

no code implementations • 19 May 2024 • Sangyeop Yeo, Yoojin Jang, Jaejun Yoo

In this paper, we address the challenge of compressing generative adversarial networks (GANs) for deployment in resource-constrained environments by proposing two novel methodologies: Distribution Matching for Efficient compression (DiME) and Network Interactive Compression via Knowledge Exchange and Learning (NICKEL).

Knowledge Distillation
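
The listing gives only the high-level idea of compressing a GAN generator with knowledge distillation plus a distribution-matching objective. The sketch below is a minimal, generic illustration of that setup, not the DiME/NICKEL formulation: the `teacher`/`student` networks, the MMD-based distribution term, and the loss weights are all assumptions.

```python
# Hypothetical sketch: distill a small "student" generator from a large frozen
# "teacher" generator using an output-matching loss plus a batch-level
# distribution-matching (MMD) term. DiME/NICKEL's actual objectives differ.
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD with an RBF kernel between two batches of generator samples."""
    def k(a, b):
        d = torch.cdist(a.flatten(1), b.flatten(1)) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

teacher = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 784)).eval()
student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(student.parameters(), lr=2e-4)

for step in range(100):
    z = torch.randn(32, 64)                    # shared latent codes
    with torch.no_grad():
        x_t = teacher(z)                       # teacher samples (frozen)
    x_s = student(z)                           # student samples
    loss = nn.functional.mse_loss(x_s, x_t)    # per-sample output matching
    loss = loss + 0.1 * rbf_mmd(x_s, x_t)      # distribution matching across the batch
    opt.zero_grad(); loss.backward(); opt.step()
```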

PosterLlama: Bridging Design Ability of Language Model to Contents-Aware Layout Generation

1 code implementation • 1 Apr 2024 • Jaejung Seol, Seojun Kim, Jaejun Yoo

Visual layout plays a critical role in graphic design fields such as advertising, posters, and web UI design.

Layout Design

Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation

no code implementations • 21 Feb 2024 • Kihong Kim, Haneol Lee, JiHye Park, Seyeon Kim, Kwanghee Lee, Seungryong Kim, Jaejun Yoo

Generating high-quality videos that synthesize desired realistic content is a challenging task due to the intricate high dimensionality and complexity of videos.

Video Generation Video Reconstruction

STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models

1 code implementation • 30 Jan 2024 • Pum Jun Kim, Seojun Kim, Jaejun Yoo

To the best of our knowledge, STREAM is the first evaluation metric that can separately assess the temporal and spatial aspects of videos.

Bridging the Domain Gap: A Simple Domain Matching Method for Reference-based Image Super-Resolution in Remote Sensing

no code implementations • 29 Jan 2024 • Jeongho Min, Yejun Lee, Dongyoung Kim, Jaejun Yoo

To the best of our knowledge, we are the first to explore Domain Matching-based RefSR in remote sensing image processing.

Image Super-Resolution

RADIO: Reference-Agnostic Dubbing Video Synthesis

no code implementations • 5 Sep 2023 • Dongyeun Lee, Chaewon Kim, Sangjoon Yu, Jaejun Yoo, Gyeong-Moon Park

One of the most challenging problems in audio-driven talking head generation is achieving high-fidelity detail while ensuring precise synchronization.

Decoder Talking Head Generation

TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models

1 code implementation • NeurIPS 2023 • Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo

To the best of our knowledge, this is the first evaluation metric focused on the robust estimation of the support, and it provides statistical consistency under noise.

Diversity

Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals

no code implementations • 28 May 2023 • Simo Ryu, Seunghyun Seo, Jaejun Yoo

In this paper, we present an efficient method for storing fine-tuned models by leveraging the low-rank properties of weight residuals.

Quantization
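
The entry only states that the weight residuals of fine-tuned models are low-rank; the NumPy sketch below shows the generic recipe that statement suggests: store the pre-trained weights once, and for each fine-tuned model keep only a truncated-SVD factorization of the residual. The layer size, rank, and toy low-rank residual are illustrative assumptions, and the paper's actual approximation and quantization choices may differ.

```python
# Illustrative sketch: store a fine-tuned layer as W_pre + U_r @ V_r, where
# U_r, V_r come from a rank-r truncated SVD of the weight residual W_ft - W_pre.
import numpy as np

rng = np.random.default_rng(0)
W_pre = rng.standard_normal((1024, 1024)).astype(np.float32)   # pre-trained weights

# Toy fine-tuning with a genuinely low-rank residual, for demonstration only.
A = rng.standard_normal((1024, 8)).astype(np.float32)
B = rng.standard_normal((8, 1024)).astype(np.float32)
W_ft = W_pre + 0.01 * (A @ B)

r = 16                                          # assumed storage rank
U, S, Vt = np.linalg.svd(W_ft - W_pre, full_matrices=False)
U_r = U[:, :r] * S[:r]                          # (1024, r) -- stored per fine-tuned model
V_r = Vt[:r, :]                                 # (r, 1024) -- stored per fine-tuned model

W_rec = W_pre + U_r @ V_r                       # reconstructed at load time
print("relative residual error:",
      np.linalg.norm(W_ft - W_rec) / np.linalg.norm(W_ft - W_pre))
print("storage ratio:", (U_r.size + V_r.size) / (W_ft - W_pre).size)
```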

Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

1 code implementation • CVPR 2023 • Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim

This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model.

Transfer Learning Translation

Can We Find Strong Lottery Tickets in Generative Models?

no code implementations • 16 Dec 2022 • Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo

To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and provide an algorithm to find them stably.

Model Compression Network Pruning
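
A "strong lottery ticket" is a subnetwork of a randomly initialized model that performs well without any weight training, i.e., the search is over a binary mask rather than over weights. The snippet below is a generic, edge-popup-style illustration of that idea (learnable scores, top-k mask, straight-through gradients); it is not the stable search algorithm proposed in the paper, and the layer size and sparsity are assumptions.

```python
# Generic supermask sketch: the weights stay at their random initialization;
# only the per-weight scores are trained, and the top-k scored weights are kept.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_f, out_f, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f), requires_grad=False)
        self.scores = nn.Parameter(torch.randn(out_f, in_f))
        self.k = int((1 - sparsity) * in_f * out_f)

    def forward(self, x):
        thresh = self.scores.abs().flatten().topk(self.k).values.min()
        mask = (self.scores.abs() >= thresh).float()
        # Straight-through estimator: gradients reach the scores even though
        # the mask used in the forward pass is binary.
        mask = mask + self.scores - self.scores.detach()
        return nn.functional.linear(x, self.weight * mask)

layer = MaskedLinear(128, 64)
opt = torch.optim.SGD([layer.scores], lr=0.1)
out = layer(torch.randn(8, 128))
out.pow(2).mean().backward()   # dummy loss, just to show the scores get gradients
opt.step()
```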

LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data

1 code implementation • CVPR 2023 • JiHye Park, Sunwoo Kim, Soohyun Kim, Seokju Cho, Jaejun Yoo, Youngjung Uh, Seungryong Kim

Existing techniques for image-to-image translation commonly have suffered from two critical problems: heavy reliance on per-sample domain annotation and/or inability of handling multiple attributes per image.

Translation Unsupervised Image-To-Image Translation

Rethinking the Truly Unsupervised Image-to-Image Translation

1 code implementation • ICCV 2021 • Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim

To this end, we propose a truly unsupervised image-to-image translation model (TUNIT) that simultaneously learns to separate image domains and to translate input images into the estimated domains.

Translation Unsupervised Image-To-Image Translation

SimUSR: A Simple but Strong Baseline for Unsupervised Image Super-resolution

no code implementations • 23 Apr 2020 • Namhyuk Ahn, Jaejun Yoo, Kyung-Ah Sohn

In this paper, we tackle a fully unsupervised super-resolution problem, i.e., one with neither paired images nor ground-truth HR images.

Denoising Image Super-Resolution +1

StarGAN v2: Diverse Image Synthesis for Multiple Domains

14 code implementations • CVPR 2020 • Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains.

Diversity Fundus to Angiography Generation +2

Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling

no code implementations • 15 Oct 2019 • YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo

We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner.

Time-Dependent Deep Image Prior for Dynamic MRI

1 code implementation • 3 Oct 2019 • Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser

The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space.

MRI Reconstruction
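
The abstract names three ingredients; the toy code below wires up minimal stand-ins for each (a fixed low-dimensional manifold indexed by time, a small mapping MLP, and a CNN decoder) purely to show how the pieces connect. Shapes, layer sizes, and the manifold parameterization are assumptions, and the k-space data-consistency loss that drives the actual reconstruction is omitted.

```python
# Minimal stand-ins for the three components described in the abstract:
# (1) a fixed low-dimensional manifold indexed by time, (2) a mapping network
# into a more expressive latent space, (3) a CNN that decodes each latent frame.
import torch
import torch.nn as nn

T = 20                                                       # number of frames
t = torch.linspace(0, 1, T)
manifold = torch.stack([torch.cos(2 * torch.pi * t),
                        torch.sin(2 * torch.pi * t)], dim=1)  # (T, 2), kept fixed

mapper = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 128))

decoder = nn.Sequential(
    nn.Linear(128, 8 * 8 * 16), nn.ReLU(), nn.Unflatten(1, (16, 8, 8)),
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
)

latents = mapper(manifold)     # (T, 128) expressive latent codes
frames = decoder(latents)      # (T, 1, 32, 32) dynamic image series
print(frames.shape)
# In the paper's setting, training would compare the Fourier transform of these
# frames against the acquired k-space samples; that loss is omitted here.
```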

Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation

1 code implementation • ICLR 2019 • Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha

Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems.

Question Generation Question-Generation +1

Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement

1 code implementation • 21 Dec 2018 • Jang-Hyun Kim, Jaejun Yoo, Sanghyuk Chun, Adrian Kim, Jung-Woo Ha

We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve performance on the speech enhancement task.

Audio and Speech Processing Sound
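
The abstract describes a trade-off between temporal and frequency precision; one common way to realize such a hybrid objective is to combine a waveform-domain loss with an STFT-magnitude loss, as sketched below. The toy network, loss weights, and STFT settings are placeholders rather than the paper's architecture.

```python
# Hybrid-domain loss sketch: penalize the enhanced waveform both in the
# time domain and in the STFT magnitude domain.
import torch
import torch.nn as nn

def hybrid_loss(enhanced, clean, n_fft=512, hop=128, alpha=0.5):
    time_l1 = (enhanced - clean).abs().mean()
    win = torch.hann_window(n_fft, device=enhanced.device)
    spec_e = torch.stft(enhanced, n_fft, hop, window=win, return_complex=True).abs()
    spec_c = torch.stft(clean, n_fft, hop, window=win, return_complex=True).abs()
    freq_l1 = (spec_e - spec_c).abs().mean()
    return alpha * time_l1 + (1 - alpha) * freq_l1

# Toy "denoiser" and one training step on random audio, just to show usage.
net = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(16, 1, 9, padding=4))
clean = torch.randn(4, 16000)                    # 1 s of 16 kHz audio (toy)
noisy = clean + 0.1 * torch.randn_like(clean)
enhanced = net(noisy.unsqueeze(1)).squeeze(1)
loss = hybrid_loss(enhanced, clean)
loss.backward()
```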

Deep Residual Learning for Accelerated MRI using Magnitude and Phase Networks

no code implementations • 2 Apr 2018 • Dongwook Lee, Jaejun Yoo, Sungho Tak, Jong Chul Ye

The proposed deep residual learning networks are composed of magnitude and phase networks that are separately trained.
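
The abstract only states that separate magnitude and phase networks are trained; the sketch below illustrates that split in the simplest possible way, running two residual CNNs on the magnitude and phase of a complex image and recombining them. The network depth, the residual formulation, and the training losses are assumptions.

```python
# Schematic of separate magnitude and phase residual networks applied to a
# complex-valued (e.g., aliased, undersampled) MR image.
import torch
import torch.nn as nn

class ResidualCNN(nn.Module):
    """Predicts a correction that is added back to its input (residual learning)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

mag_net, phase_net = ResidualCNN(), ResidualCNN()

x = torch.randn(2, 1, 64, 64, dtype=torch.cfloat)   # toy complex MR images
mag, phase = x.abs(), x.angle()                     # split into magnitude and phase

mag_out = mag_net(mag)                              # each branch trained separately
phase_out = phase_net(phase)
recon = mag_out * torch.exp(1j * phase_out)         # recombine into a complex image
print(recon.shape, recon.dtype)
```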

Deep Learning Diffuse Optical Tomography

no code implementations • 4 Dec 2017 • Jaejun Yoo, Sohail Sabir, Duchang Heo, Kee Hyun Kim, Abdul Wahab, Yoonseok Choi, Seul-I Lee, Eun Young Chae, Hak Hee Kim, Young Min Bae, Young-wook Choi, Seungryong Cho, Jong Chul Ye

Diffuse optical tomography (DOT) has been investigated as an alternative imaging modality for breast cancer detection thanks to its excellent contrast to hemoglobin oxidization level.

Breast Cancer Detection

Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network

1 code implementation • 31 Jul 2017 • Eunhee Kang, Jaejun Yoo, Jong Chul Ye

To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge.

Denoising

Deep Learning with Domain Adaptation for Accelerated Projection-Reconstruction MR

1 code implementation • 3 Mar 2017 • Yo Seob Han, Jaejun Yoo, Jong Chul Ye

To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of x-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets.

Computed Tomography (CT) Domain Adaptation
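
The sketch below only illustrates the generic pre-train-then-fine-tune recipe the abstract describes: load weights from a network trained on large CT or synthesized radial datasets, then fine-tune on a handful of radial MR examples with a small learning rate. The model, checkpoint name, data, and hyperparameters are placeholders.

```python
# Generic pre-train / fine-tune sketch for the domain adaptation setting:
# start from CT-pretrained weights, then fine-tune on a few radial MR samples.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))

# Hypothetical checkpoint from large-scale CT / synthesized-radial pre-training.
# model.load_state_dict(torch.load("ct_pretrained.pt"))

few_mr_inputs = torch.randn(8, 1, 64, 64)    # a few aliased radial MR recons (toy)
few_mr_targets = torch.randn(8, 1, 64, 64)   # corresponding artifact-free targets (toy)

opt = torch.optim.Adam(model.parameters(), lr=1e-5)   # small LR for fine-tuning
for epoch in range(50):
    pred = model(few_mr_inputs)
    loss = nn.functional.l1_loss(pred, few_mr_targets)
    opt.zero_grad(); loss.backward(); opt.step()
```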

Deep artifact learning for compressed sensing and parallel MRI

no code implementations • 3 Mar 2017 • Dongwook Lee, Jaejun Yoo, Jong Chul Ye

Furthermore, the computation is an order of magnitude faster.

Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification

1 code implementation • 19 Nov 2016 • Woong Bae, Jaejun Yoo, Jong Chul Ye

To address this issue, here we propose a novel feature-space deep residual learning algorithm that outperforms existing residual learning approaches.

Color Image Denoising Image Restoration +1

Deep Residual Learning for Compressed Sensing CT Reconstruction via Persistent Homology Analysis

no code implementations • 19 Nov 2016 • Yo Seob Han, Jaejun Yoo, Jong Chul Ye

Recently, compressed sensing (CS) computed tomography (CT) using sparse projection views has been extensively investigated to reduce the potential risk of radiation to patients.

Computed Tomography (CT) CT Reconstruction
