Search Results for author: Jihyong Oh

Found 8 papers, 5 papers with code

Deep SR-ITM: Joint Learning of Super-Resolution and Inverse Tone-Mapping for 4K UHD HDR Applications

1 code implementation · ICCV 2019 · Soo Ye Kim, Jihyong Oh, Munchurl Kim

Joint SR and ITM is an intricate task: high-frequency details must be restored for SR jointly with the local contrast for ITM.

Tasks: 4K, 8K, +4 more
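The base/detail split behind joint SR-ITM can be illustrated with a toy decomposition: separate an image into a smooth base layer (carrying local contrast, the ITM side) and a high-frequency detail layer (the SR side). The box filter below is a stand-in for the paper's learned decomposition, and all names are illustrative, not from the paper:

```python
import numpy as np

def decompose(img, k=5):
    """Split an image into a base layer (local contrast) and a detail
    layer (high-frequency residual) with a simple box filter.
    Illustrative sketch only; Deep SR-ITM uses learned networks, not
    this hand-crafted filter (assumption)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    base = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - base
    return base, detail

img = np.arange(16.0).reshape(4, 4)
base, detail = decompose(img)
# The two layers sum back to the input exactly.
assert np.allclose(base + detail, img)
```

A joint SR-ITM pipeline would then enhance the detail layer for resolution and remap the base layer's contrast for dynamic range before recombining.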

JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video

1 code implementation · 10 Sep 2019 · Soo Ye Kim, Jihyong Oh, Munchurl Kim

Joint learning of super-resolution (SR) and inverse tone-mapping (ITM) has recently been explored to convert legacy low-resolution (LR) standard dynamic range (SDR) videos into high-resolution (HR) high dynamic range (HDR) videos, meeting the growing demand of UHD HDR TV/broadcasting applications.

Tasks: Image Reconstruction, Inverse-Tone-Mapping, +2 more

FISR: Deep Joint Frame Interpolation and Super-Resolution with a Multi-scale Temporal Loss

1 code implementation · 16 Dec 2019 · Soo Ye Kim, Jihyong Oh, Munchurl Kim

In this paper, we first propose a joint VFI-SR framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.

Tasks: 2K, 4K, +4 more
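As a point of reference for what a joint VFI-SR model must output, here is a deliberately naive spatio-temporal upscaling baseline (linear frame blending for interpolation plus nearest-neighbour 2x spatial upsampling). FISR learns both steps jointly in one network; this sketch only illustrates the frame-count and resolution geometry of the 2K 30 fps to 4K 60 fps conversion, and everything in it is an assumption for illustration:

```python
import numpy as np

def naive_vfi_sr(frames, scale=2):
    """Naive spatio-temporal upscaling: average adjacent frames for
    temporal interpolation, then nearest-neighbour upsample spatially.
    A toy baseline, not FISR's learned method (assumption)."""
    def up(f):
        # Nearest-neighbour spatial upsampling by `scale` in both axes.
        return np.repeat(np.repeat(f, scale, axis=0), scale, axis=1)

    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        mid = 0.5 * (a + b)  # temporal midpoint (doubles the frame rate)
        out.append(up(a))
        out.append(up(mid))
    out.append(up(frames[-1]))
    return out

frames = [np.zeros((2, 2)), np.ones((2, 2))]
result = naive_vfi_sr(frames)
assert len(result) == 3          # 2 input frames -> 3 output frames
assert result[0].shape == (4, 4)  # 2x spatial upscaling
```

Two input frames yield three output frames here; over a long clip this approximately doubles the frame rate while quadrupling the pixel count, matching the 2K 30 fps to 4K 60 fps setting.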

PeaceGAN: A GAN-based Multi-Task Learning Method for SAR Target Image Generation with a Pose Estimator and an Auxiliary Classifier

no code implementations · 29 Mar 2021 · Jihyong Oh, Munchurl Kim

In this paper, we propose a novel GAN-based multi-task learning (MTL) method for SAR target image generation, called PeaceGAN, which uses both pose angle and target class information to produce SAR target images of desired target classes at intended pose angles.

Tasks: Image Generation, Multi-Task Learning
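The multi-task shape that the PeaceGAN description suggests, an adversarial term combined with pose and class terms, can be sketched as a weighted sum. The weights and the exact form of each term here are assumptions for illustration, not the paper's objective:

```python
def mtl_generator_loss(adv_loss, pose_loss, cls_loss,
                       w_pose=1.0, w_cls=1.0):
    """Generic multi-task GAN generator objective: adversarial term
    plus weighted pose-estimation and classification auxiliaries.
    Term definitions and weights are illustrative assumptions, not
    PeaceGAN's published loss."""
    return adv_loss + w_pose * pose_loss + w_cls * cls_loss

total = mtl_generator_loss(adv_loss=0.5, pose_loss=0.2, cls_loss=0.3)
assert total == 1.0
```

Balancing such weights is the usual practical knob in multi-task GAN training: raising `w_pose` trades image fidelity for pose accuracy, and vice versa.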

XVFI: eXtreme Video Frame Interpolation

1 code implementation · ICCV 2021 · Hyeonjun Sim, Jihyong Oh, Munchurl Kim

In this paper, we first present a dataset (X4K1000FPS) of 1000-fps 4K videos with extreme motion to the research community for video frame interpolation (VFI), and propose an extreme VFI network, called XVFI-Net, the first to handle VFI for 4K videos with large motion.

Tasks: 4K, eXtreme-Video-Frame-Interpolation, +1 more
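Large 4K motion is typically tackled coarse-to-fine, and XVFI-Net's design is scale-adaptive at inference. A back-of-the-envelope way to pick a pyramid depth for a given maximum displacement might look like the following; the per-level motion budget is an assumption, not a value from the paper:

```python
def pyramid_levels(max_motion_px, per_level_px=16):
    """Halve the apparent motion per coarse-to-fine level until it fits
    a per-level budget; return the number of levels needed. Illustrates
    the scale-adaptive idea only; XVFI-Net's actual rule and the
    16-pixel budget are assumptions here."""
    levels = 1
    motion = float(max_motion_px)
    while motion > per_level_px:
        motion /= 2.0   # each coarser scale halves apparent motion
        levels += 1
    return levels

assert pyramid_levels(16) == 1   # small motion: single scale suffices
assert pyramid_levels(64) == 3   # extreme motion: deeper pyramid
```

This is why a fixed-depth network trained on small-motion data struggles on 4K footage: the required depth grows with the largest displacement, which motivates an architecture whose number of scales can change at test time.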

DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting

1 code implementation · 19 Nov 2021 · Jihyong Oh, Munchurl Kim

In this paper, we propose a novel joint deblurring and multi-frame interpolation (MFI) framework, called DeMFI-Net, which accurately converts lower-frame-rate blurry videos to higher-frame-rate sharp videos based on a flow-guided attentive-correlation-based feature bolstering (FAC-FB) module and recursive boosting (RB).

Tasks: Deblurring, Video Enhancement, +2 more

DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video

no code implementations · 21 Dec 2023 · Minh-Quan Viet Bui, Jongmin Park, Jihyong Oh, Munchurl Kim

In response, we propose a novel dynamic deblurring NeRF framework for blurry monocular video, called DyBluRF, consisting of a Base Ray Initialization (BRI) stage and a Motion Decomposition-based Deblurring (MDD) stage.

Tasks: Deblurring, Novel View Synthesis

FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

no code implementations · 8 Jan 2024 · Geunhyuk Youk, Jihyong Oh, Munchurl Kim

In this paper, we propose a novel flow-guided dynamic filtering (FGDF) module and iterative feature refinement with multi-attention (FRMA), which together constitute our video super-resolution and deblurring (VSRDB) framework, denoted FMA-Net.

Tasks: Deblurring, Representation Learning, +1 more
