Search Results for author: Jaejun Yoo

Found 24 papers, 14 papers with code

Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals

no code implementations 28 May 2023 Simo Ryu, Seunghyun Seo, Jaejun Yoo

In this paper, we present an efficient method for storing fine-tuned models by leveraging the low-rank properties of weight residuals.
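The core idea can be sketched with a truncated SVD of the weight residual: instead of storing the full fine-tuned weights, store the pretrained weights once plus low-rank factors of the difference. The snippet below is a minimal numpy illustration of the general idea, not the paper's exact procedure; the shapes, the rank, and the `compress_residual`/`restore` helpers are all hypothetical:

```python
import numpy as np

def compress_residual(w_pre, w_ft, rank):
    """Keep only a rank-`rank` factorization of the weight residual."""
    residual = w_ft - w_pre
    u, s, vt = np.linalg.svd(residual, full_matrices=False)
    # Retain the top-`rank` singular triplets; storage drops from h*w
    # numbers to rank*(h + w).
    return u[:, :rank] * s[:rank], vt[:rank, :]

def restore(w_pre, a, b):
    """Approximate the fine-tuned weights from the base model plus factors."""
    return w_pre + a @ b

rng = np.random.default_rng(0)
w_pre = rng.standard_normal((64, 64))
# Simulate a fine-tuning update that happens to be exactly rank 2.
w_ft = w_pre + rng.standard_normal((64, 2)) @ rng.standard_normal((2, 64))

a, b = compress_residual(w_pre, w_ft, rank=2)
w_hat = restore(w_pre, a, b)
print(np.allclose(w_hat, w_ft, atol=1e-8))  # True: residual is rank 2 here
```

In practice the residual is only approximately low rank, so the restored weights are an approximation whose quality depends on the chosen rank.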


Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

1 code implementation CVPR 2023 Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim

This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model.

Transfer Learning Translation

Can We Find Strong Lottery Tickets in Generative Models?

no code implementations 16 Dec 2022 Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo

To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm that finds them stably.

Model Compression Network Pruning
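A strong lottery ticket is a well-performing subnetwork of a randomly initialized, untrained network, selected purely by masking. The mechanics can be illustrated with a score-based top-k mask, in the spirit of score-optimization approaches for classifiers; this is a generic illustration of the concept, not the paper's algorithm, and `topk_mask` is a hypothetical helper:

```python
import numpy as np

def topk_mask(scores, sparsity):
    """Binary mask keeping the highest-scoring fraction of weights."""
    k = int(scores.size * (1.0 - sparsity))
    thresh = np.sort(scores.ravel())[::-1][k - 1]
    return (scores >= thresh).astype(scores.dtype)

rng = np.random.default_rng(1)
w_random = rng.standard_normal((8, 8))   # frozen random weights (never trained)
scores = rng.random((8, 8))              # importance scores (learned in practice)
mask = topk_mask(scores, sparsity=0.75)  # keep the top 25% of weights
subnet = w_random * mask                 # the candidate "ticket"
print(int(mask.sum()))  # 16 of 64 weights survive
```

In the actual setting the scores are optimized against a generative objective while the weights stay frozen at their random initialization.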

LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data

1 code implementation CVPR 2023 JiHye Park, Sunwoo Kim, Soohyun Kim, Seokju Cho, Jaejun Yoo, Youngjung Uh, Seungryong Kim

Existing techniques for image-to-image translation commonly have suffered from two critical problems: heavy reliance on per-sample domain annotation and/or inability of handling multiple attributes per image.

Translation Unsupervised Image-To-Image Translation

Rethinking the Truly Unsupervised Image-to-Image Translation

1 code implementation ICCV 2021 Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim

To this end, we propose a truly unsupervised image-to-image translation model (TUNIT) that simultaneously learns to separate image domains and to translate input images into the estimated domains.

Translation Unsupervised Image-To-Image Translation

SimUSR: A Simple but Strong Baseline for Unsupervised Image Super-resolution

no code implementations 23 Apr 2020 Namhyuk Ahn, Jaejun Yoo, Kyung-Ah Sohn

In this paper, we tackle a fully unsupervised super-resolution problem, i.e., one with neither paired images nor ground-truth HR images.

Denoising Image Super-Resolution +1
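A common way to obtain supervision without HR ground truth is to treat each unpaired LR image as a pseudo ground truth and downscale it further, yielding (pseudo-LR, pseudo-HR) pairs for ordinary supervised training. Below is a minimal sketch of that pairing idea, with a naive average-pooling downscaler standing in for a proper degradation kernel:

```python
import numpy as np

def downscale(img, factor):
    """Naive average-pooling downscale (a stand-in for bicubic resampling)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Treat the unpaired LR image as a pseudo ground truth and downscale it
# again; an SR network can then be trained on (pseudo_lr -> pseudo_hr).
lr = np.random.default_rng(2).random((32, 32))
pseudo_hr = lr
pseudo_lr = downscale(lr, factor=2)
print(pseudo_lr.shape)  # (16, 16)
```

The trained upscaler is then applied to the original LR images at test time, under the assumption that the synthetic degradation resembles the true one.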

Reliable Fidelity and Diversity Metrics for Generative Models

2 code implementations ICML 2020 Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, Jaejun Yoo

In this paper, we show that even the latest versions of the precision and recall metrics are not yet reliable.

Image Generation
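This paper proposes density and coverage as more robust fidelity and diversity metrics. Coverage, roughly, is the fraction of real samples whose k-nearest-neighbor ball contains at least one generated sample. A small numpy sketch on toy 2-D data; the value of `k` and the Euclidean metric are illustrative choices, and real evaluations use deep feature embeddings rather than raw coordinates:

```python
import numpy as np

def coverage(real, fake, k=3):
    """Fraction of real samples whose k-NN ball contains a generated sample."""
    d_rr = np.linalg.norm(real[:, None] - real[None], axis=-1)
    # Ball radius: distance to the k-th nearest real neighbour (excluding self).
    radii = np.sort(d_rr, axis=1)[:, k]
    d_rf = np.linalg.norm(real[:, None] - fake[None], axis=-1)
    return float((d_rf.min(axis=1) < radii).mean())

rng = np.random.default_rng(3)
real = rng.standard_normal((200, 2))
print(coverage(real, real + 0.01 * rng.standard_normal((200, 2))))  # near 1.0
print(coverage(real, real + 10.0))                                  # near 0.0
```

Unlike recall, coverage builds its neighborhoods around real samples only, which makes it less sensitive to generated outliers.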

StarGAN v2: Diverse Image Synthesis for Multiple Domains

13 code implementations CVPR 2020 Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains.

Fundus to Angiography Generation Multimodal Unsupervised Image-To-Image Translation +1

Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling

no code implementations 15 Oct 2019 YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo

We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.)

Time-Dependent Deep Image Prior for Dynamic MRI

1 code implementation 3 Oct 2019 Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser

The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space.

MRI Reconstruction

Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation

1 code implementation ICLR 2019 Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha

Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems.

Question Generation Question-Generation +1

Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement

1 code implementation 21 Dec 2018 Jang-Hyun Kim, Jaejun Yoo, Sanghyuk Chun, Adrian Kim, Jung-Woo Ha

We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve performance on the speech enhancement task.

Audio and Speech Processing Sound
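The time-frequency trade-off can be illustrated with two toy denoising branches, one operating on the raw waveform and one on the spectrum, blended at the output. This is a conceptual sketch only: the paper uses learned sub-networks and a learned combiner, not the fixed filters and the 50/50 blend assumed below:

```python
import numpy as np

def freq_denoise(x, keep=20):
    """Frequency-domain branch: keep only the strongest spectral bins."""
    spec = np.fft.rfft(x)
    thresh = np.sort(np.abs(spec))[-keep]
    spec[np.abs(spec) < thresh] = 0.0
    return np.fft.irfft(spec, n=len(x))

def time_denoise(x, width=5):
    """Time-domain branch: simple moving-average smoothing."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Hybrid output: blend the two branches.
enhanced = 0.5 * freq_denoise(noisy) + 0.5 * time_denoise(noisy)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((enhanced - clean) ** 2)
print(err_after < err_before)  # True
```

The frequency branch localizes stationary tones sharply but blurs transients, while the time branch does the opposite; combining them hedges against both failure modes.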

Deep Residual Learning for Accelerated MRI using Magnitude and Phase Networks

no code implementations 2 Apr 2018 Dongwook Lee, Jaejun Yoo, Sungho Tak, Jong Chul Ye

The proposed deep residual learning networks are composed of magnitude and phase networks that are separately trained.
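The decomposition itself is straightforward: split the complex-valued MR image into magnitude and phase channels, process each with its own network, and recombine. A minimal numpy sketch, where identity mappings stand in for the separately trained residual networks:

```python
import numpy as np

def split_mag_phase(x):
    """Decompose a complex-valued image into magnitude and phase channels."""
    return np.abs(x), np.angle(x)

def recombine(mag, phase):
    """Rebuild the complex image from its two channels."""
    return mag * np.exp(1j * phase)

rng = np.random.default_rng(5)
img = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

mag, phase = split_mag_phase(img)
# In the paper's setting, separate residual networks would process `mag`
# and `phase` here before recombination; this sketch leaves them untouched.
print(np.allclose(recombine(mag, phase), img))  # True
```

Training the two channels separately lets each network specialize, since magnitude and phase exhibit very different artifact statistics in accelerated MRI.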

Deep Learning Diffuse Optical Tomography

no code implementations 4 Dec 2017 Jaejun Yoo, Sohail Sabir, Duchang Heo, Kee Hyun Kim, Abdul Wahab, Yoonseok Choi, Seul-I Lee, Eun Young Chae, Hak Hee Kim, Young Min Bae, Young-wook Choi, Seungryong Cho, Jong Chul Ye

Diffuse optical tomography (DOT) has been investigated as an alternative imaging modality for breast cancer detection thanks to its excellent contrast sensitivity to hemoglobin oxygenation level.

Breast Cancer Detection

Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network

1 code implementation 31 Jul 2017 Eunhee Kang, Jaejun Yoo, Jong Chul Ye

To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge.


Deep Learning with Domain Adaptation for Accelerated Projection-Reconstruction MR

1 code implementation 3 Mar 2017 Yo Seob Han, Jaejun Yoo, Jong Chul Ye

To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of x-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets.

Computed Tomography (CT) Domain Adaptation

Deep artifact learning for compressed sensing and parallel MRI

no code implementations 3 Mar 2017 Dongwook Lee, Jaejun Yoo, Jong Chul Ye

Furthermore, the computation time is an order of magnitude faster.

Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification

1 code implementation 19 Nov 2016 Woong Bae, Jaejun Yoo, Jong Chul Ye

To address this issue, here we propose a novel feature-space deep residual learning algorithm that outperforms existing residual learning approaches.

Color Image Denoising Image Restoration +1

Deep Residual Learning for Compressed Sensing CT Reconstruction via Persistent Homology Analysis

no code implementations 19 Nov 2016 Yo Seob Han, Jaejun Yoo, Jong Chul Ye

Recently, compressed sensing (CS) computed tomography (CT) using sparse projection views has been extensively investigated to reduce the potential radiation risk to patients.

Computed Tomography (CT) Image Reconstruction
