Search Results for author: Hyeongmin Lee

Found 12 papers, 5 papers with code

CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment

no code implementations · 1 Apr 2024 · Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho

Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to learn human-centric perceptual assessment.

Image Enhancement

UGPNet: Universal Generative Prior for Image Restoration

no code implementations · 31 Dec 2023 · Hwayoon Lee, Kyoungkook Kang, Hyeongmin Lee, Seung-Hwan Baek, Sunghyun Cho

UGPNet first restores the image structure of a degraded input using a regression model, then synthesizes a perceptually realistic image with a generative model on top of the regressed output.

Deblurring · Denoising · +3
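The UGPNet excerpt above describes a two-stage pipeline: a regression model recovers structure, and a generative model adds realistic detail on top. A minimal sketch of that structure, with toy stand-ins for both models (the box blur and noise injection below are illustrative assumptions, not the paper's trained networks):

```python
import numpy as np

def regression_restore(degraded):
    # Stand-in for the regression model: a 3x3 box filter acting as a
    # structural restorer (the real model is a trained CNN).
    h, w = degraded.shape
    padded = np.pad(degraded, 1, mode="edge")
    out = np.zeros_like(degraded)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def generative_synthesize(structure, noise_scale=0.01, rng=None):
    # Stand-in for the generative stage: perturbs the regressed structure
    # with high-frequency detail (the real model uses a generative prior).
    rng = np.random.default_rng(0) if rng is None else rng
    detail = noise_scale * rng.standard_normal(structure.shape)
    return np.clip(structure + detail, 0.0, 1.0)

def two_stage_restore(degraded):
    structure = regression_restore(degraded)   # stage 1: structure
    return generative_synthesize(structure)    # stage 2: realism

degraded = np.random.default_rng(1).random((8, 8))
restored = two_stage_restore(degraded)
```

The point of the split is that regression alone tends toward over-smoothed outputs, while the generative stage restores perceptual sharpness without having to re-learn the coarse structure.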

Expanded Adaptive Scaling Normalization for End to End Image Compression

1 code implementation · 5 Aug 2022 · Chajin Shin, Hyeongmin Lee, Hanbin Son, Sangjin Lee, Dogyoon Lee, Sangyoun Lee

Then, we increase the receptive field so that the adaptive rescaling module can exploit spatial correlation.

Image Compression

N-RPN: Hard Example Learning for Region Proposal Networks

no code implementations · 3 Aug 2022 · MyeongAh Cho, Tae-young Chung, Hyeongmin Lee, Sangyoun Lee

The region proposal task is to generate a set of candidate regions that contain an object.

Region Proposal

Exploring Discontinuity for Video Frame Interpolation

1 code implementation · CVPR 2023 · Sangjin Lee, Hyeongmin Lee, Chajin Shin, Hanbin Son, Sangyoun Lee

Lastly, we propose loss functions that provide supervision on discontinuous motion areas, which can be applied together with FTM and D-map.

Data Augmentation · Video Frame Interpolation

Smoother Network Tuning and Interpolation for Continuous-level Image Processing

no code implementations · 5 Oct 2020 · Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon, Sangyoun Lee

Extensive results on various image processing tasks indicate that FTN achieves comparable performance across multiple continuous levels while being significantly smoother and lighter than other frameworks.

Learning Temporally Invariant and Localizable Features via Data Augmentation for Video Recognition

1 code implementation · 13 Aug 2020 · Taeoh Kim, Hyeongmin Lee, MyeongAh Cho, Ho Seong Lee, Dong Heon Cho, Sangyoun Lee

Based on our novel temporal data augmentation algorithms, video recognition performance improves using only a limited amount of training data compared to spatial-only data augmentation, including on the 1st Visual Inductive Priors (VIPriors) challenge for data-efficient action recognition.

Action Recognition · Data Augmentation · +1
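The excerpt above contrasts temporal with spatial-only augmentation. A minimal sketch of what generic temporal augmentation looks like; these two transforms (random temporal crop, probabilistic reversal) are common examples and illustrative assumptions, not necessarily the paper's exact algorithms:

```python
import random

def temporal_crop(frames, length):
    # Randomly pick a contiguous clip of `length` frames from the video.
    start = random.randint(0, len(frames) - length)
    return frames[start:start + length]

def temporal_reverse(frames, p=0.5):
    # Play the clip backwards with probability p.
    return frames[::-1] if random.random() < p else frames

random.seed(0)
video = list(range(16))  # frame indices standing in for actual frames
clip = temporal_reverse(temporal_crop(video, 8))
```

Unlike spatial augmentation (crops, flips of individual frames), these transforms perturb the time axis, which is the dimension a video model must learn invariant and localizable features along.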

Extrapolative-Interpolative Cycle-Consistency Learning for Video Frame Extrapolation

no code implementations · 27 May 2020 · Sangjin Lee, Hyeongmin Lee, Taeoh Kim, Sangyoun Lee

Unlike previous studies, which have usually focused on module design or network construction, we propose a novel Extrapolative-Interpolative Cycle (EIC) loss that uses a pre-trained frame interpolation module to improve extrapolation performance.
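The cycle idea in the excerpt above can be sketched in a few lines: extrapolate a future frame, then interpolate between a known frame and that prediction, and penalize the mismatch with the known middle frame. The toy linear-motion models below are illustrative assumptions standing in for the trained extrapolation network and the frozen, pre-trained interpolation module:

```python
import numpy as np

def interpolate(frame_a, frame_b):
    # Stand-in for the frozen pre-trained interpolation module:
    # predicts the middle frame as the average of its two neighbours.
    return 0.5 * (frame_a + frame_b)

def extrapolate(frame_prev, frame_mid):
    # Stand-in for the extrapolation model being trained:
    # continues linear motion one step beyond frame_mid.
    return 2.0 * frame_mid - frame_prev

def eic_loss(frame_prev, frame_mid):
    # Extrapolative-Interpolative Cycle: extrapolate a future frame,
    # interpolate back toward the known middle frame, and compare.
    pred_next = extrapolate(frame_prev, frame_mid)
    cycled_mid = interpolate(frame_prev, pred_next)
    return float(np.mean((cycled_mid - frame_mid) ** 2))

f_prev = np.zeros((4, 4))
f_mid = np.full((4, 4), 0.5)
loss = eic_loss(f_prev, f_mid)
```

With these linear stand-ins the cycle closes exactly, so the loss is zero; in training, the gradient of this mismatch supervises the extrapolator through the frozen interpolator.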

Regularized Adaptation for Stable and Efficient Continuous-Level Learning on Image Processing Networks

no code implementations · 11 Mar 2020 · Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon, Sangyoun Lee

In this paper, we propose a novel continuous-level learning framework using a Filter Transition Network (FTN), a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
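The continuous-level idea in the excerpt above can be sketched as blending a base filter with a non-linearly transformed version of itself, indexed by a level in [0, 1]. The matrix-times-tanh transform below is a toy assumption standing in for the learned Filter Transition Network:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(filt, weight):
    # Stand-in for the Filter Transition Network: a small learned
    # non-linear transform of the base filter (here a fixed matrix
    # multiply followed by tanh, purely for illustration).
    return np.tanh(weight @ filt.reshape(-1)).reshape(filt.shape)

def continuous_level_filter(base_filter, weight, level):
    # level in [0, 1]: 0 keeps the base filter (identity path), 1 uses
    # the fully transformed filter; intermediate levels blend smoothly.
    assert 0.0 <= level <= 1.0
    blended = (1.0 - level) * base_filter
    return blended + level * transition(base_filter, weight)

base = rng.standard_normal((3, 3))       # one conv kernel of the network
weight = 0.1 * rng.standard_normal((9, 9))
mid_filter = continuous_level_filter(base, weight, 0.5)
```

Sweeping the level continuously deforms every filter of the network, which is how a single model can cover a continuum of, e.g., denoising strengths instead of one fixed level per trained model.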
