Search Results for author: Jihyong Oh

Found 8 papers, 5 papers with code

FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

no code implementations • 8 Jan 2024 • Geunhyuk Youk, Jihyong Oh, Munchurl Kim

In this paper, we propose a novel flow-guided dynamic filtering (FGDF) module and an iterative feature refinement with multi-attention (FRMA) block, which together constitute our video super-resolution and deblurring (VSRDB) framework, denoted as FMA-Net.

Deblurring · Representation Learning +1
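The core primitive named in the abstract, dynamic filtering, predicts a distinct filter kernel for every pixel instead of sharing one convolution kernel across the image. The sketch below is illustrative only (function and argument names are assumptions, and it omits the flow guidance that FMA-Net adds on top):

```python
import numpy as np

def dynamic_filter(img, kernels):
    """Apply a distinct k x k filter kernel at every pixel.

    img:     (H, W) grayscale image
    kernels: (H, W, k, k) per-pixel filter kernels
    """
    H, W = img.shape
    k = kernels.shape[2]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]  # k x k neighborhood centered at (y, x)
            out[y, x] = (patch * kernels[y, x]).sum()
    return out
```

In a learned model the kernels would be produced by a network; here they are simply an input, which is enough to show why per-pixel kernels can deblur spatially varying motion where a single shared kernel cannot.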

DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video

no code implementations • 21 Dec 2023 • Minh-Quan Viet Bui, Jongmin Park, Jihyong Oh, Munchurl Kim

The MDD stage uses a novel incremental latent sharp-rays prediction (ILSP) approach for blurry monocular video frames, decomposing the latent sharp rays into global camera-motion and local object-motion components.

Deblurring · Novel View Synthesis

DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting

1 code implementation • 19 Nov 2021 • Jihyong Oh, Munchurl Kim

In this paper, we propose a novel joint deblurring and multi-frame interpolation (DeMFI) framework, called DeMFI-Net, which accurately converts blurry lower-frame-rate videos into sharp higher-frame-rate videos by means of a flow-guided attentive-correlation-based feature bolstering (FAC-FB) module and recursive boosting (RB) for multi-frame interpolation (MFI).

Deblurring · Video Enhancement +2
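Joint deblurring-and-interpolation work of this kind typically assumes a blurry frame is well approximated by the temporal average of several latent sharp frames; recovering those latent frames then solves deblurring and interpolation at once. A minimal sketch of that degradation model (the function name is an assumption, not from DeMFI's code):

```python
import numpy as np

def synthesize_blur(sharp_frames):
    """Approximate a motion-blurred frame as the temporal average of
    consecutive latent sharp frames -- a common degradation model in
    joint deblurring/interpolation work."""
    return np.mean(np.stack(sharp_frames, axis=0), axis=0)
```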

XVFI: eXtreme Video Frame Interpolation

1 code implementation • ICCV 2021 • Hyeonjun Sim, Jihyong Oh, Munchurl Kim

In this paper, we first present a dataset (X4K1000FPS) of 4K videos at 1000 fps with extreme motion to the research community for video frame interpolation (VFI), and propose an extreme VFI network, called XVFI-Net, which is the first to handle VFI for 4K videos with large motion.

eXtreme-Video-Frame-Interpolation · Optical Flow Estimation
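Flow-based VFI methods like XVFI-Net synthesize an intermediate frame by warping the input frames along estimated optical flow. The building block is backward warping; a deliberately simple nearest-neighbor version (real pipelines use differentiable bilinear sampling, and all names here are illustrative):

```python
import numpy as np

def backward_warp(frame, flow):
    """Nearest-neighbor backward warping: for every target pixel (x, y),
    sample the source frame at (x + u, y + v).

    frame: (H, W) image; flow: (H, W, 2) with per-pixel (u, v) displacements.
    """
    H, W = frame.shape
    out = np.empty_like(frame)
    for y in range(H):
        for x in range(W):
            sx = int(round(x + flow[y, x, 0]))
            sy = int(round(y + flow[y, x, 1]))
            sx = min(max(sx, 0), W - 1)  # clamp to image bounds
            sy = min(max(sy, 0), H - 1)
            out[y, x] = frame[sy, sx]
    return out
```

With zero flow this is the identity; with a constant flow it shifts the image, which is exactly the behavior an interpolation network exploits when it scales the flow by the target timestamp.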

PeaceGAN: A GAN-based Multi-Task Learning Method for SAR Target Image Generation with a Pose Estimator and an Auxiliary Classifier

no code implementations • 29 Mar 2021 • Jihyong Oh, Munchurl Kim

In this paper, we propose a novel GAN-based multi-task learning (MTL) method for SAR target image generation, called PeaceGAN, which uses both pose-angle and target-class information to produce SAR target images of desired target classes at intended pose angles.

Image Generation · Multi-Task Learning
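Conditioning a generator on a discrete class plus a continuous pose angle is usually done by concatenating a one-hot class vector with a periodic encoding of the angle, so that 0° and 360° map to the same point. A small sketch of that conditioning vector (names and encoding choice are assumptions, not PeaceGAN's actual interface):

```python
import numpy as np

def make_condition(class_idx, num_classes, pose_deg):
    """Build a generator conditioning vector: one-hot target class
    concatenated with a (sin, cos) encoding of the pose angle."""
    onehot = np.zeros(num_classes)
    onehot[class_idx] = 1.0
    rad = np.deg2rad(pose_deg)
    return np.concatenate([onehot, [np.sin(rad), np.cos(rad)]])
```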

FISR: Deep Joint Frame Interpolation and Super-Resolution with a Multi-scale Temporal Loss

1 code implementation • 16 Dec 2019 • Soo Ye Kim, Jihyong Oh, Munchurl Kim

In this paper, we first propose a joint VFI-SR framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.

Space-time Video Super-resolution · Video Frame Interpolation +1
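The 2K 30 fps to 4K 60 fps conversion combines two upscalings: temporal (insert an intermediate frame between each pair) and spatial (2x resolution). The trivial baseline that FISR's learned approach improves upon can be sketched as a linear blend plus nearest-neighbor upsampling (all names are illustrative):

```python
import numpy as np

def naive_vfi_sr(f0, f1, scale=2):
    """Naive joint VFI-SR baseline: double the frame rate by linearly
    blending adjacent frames, and upscale spatially by nearest-neighbor
    replication. Returns the upscaled first frame and midpoint frame."""
    mid = 0.5 * (f0 + f1)  # temporal midpoint (causes ghosting on real motion)
    upsample = lambda f: np.repeat(np.repeat(f, scale, axis=0), scale, axis=1)
    return upsample(f0), upsample(mid)
```

The blend ghosts on large motion and the replication blurs detail, which is precisely why a joint learned VFI-SR model with a multi-scale temporal loss is proposed instead.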

JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video

1 code implementation • 10 Sep 2019 • Soo Ye Kim, Jihyong Oh, Munchurl Kim

Joint learning of super-resolution (SR) and inverse tone-mapping (ITM) has recently been explored to convert legacy low-resolution (LR) standard dynamic range (SDR) videos into high-resolution (HR) high dynamic range (HDR) videos, meeting the growing needs of UHD HDR TV and broadcasting applications.

Image Reconstruction · Inverse-Tone-Mapping +2
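For orientation, the simplest possible inverse tone-mapping is a global expansion: undo the SDR display gamma and rescale to the HDR display's peak luminance. This sketch (gamma value and names are assumptions) shows the baseline that learned, pixel-wise ITM like JSI-GAN goes far beyond:

```python
import numpy as np

def naive_itm(sdr, gamma=2.4, peak_nits=1000.0):
    """Global inverse tone-mapping: linearize an 8-bit SDR frame by
    undoing the display gamma, then rescale to the HDR peak luminance
    (in nits). Real ITM recovers clipped highlights; this cannot."""
    linear = (sdr.astype(float) / 255.0) ** gamma
    return linear * peak_nits
```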
