Search Results for author: Sungyong Baik

Found 12 papers, 6 papers with code

Rethinking RGB Color Representation for Image Restoration Models

no code implementations • 5 Feb 2024 • Jaerin Lee, JoonKyu Park, Sungyong Baik, Kyoung Mu Lee

Image restoration models are typically trained with a pixel-wise distance loss defined over the RGB color representation space, which is well known to be a source of blurry and unrealistic textures in the restored images.

Image Restoration

MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields

no code implementations • 16 Dec 2022 • JaeYoung Chung, Kanggeon Lee, Sungyong Baik, Kyoung Mu Lee

Under such incremental learning scenarios, neural networks are known to suffer catastrophic forgetting: easily forgetting previously seen data after training with new data.

Incremental Learning

CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution

1 code implementation • 21 Jul 2022 • Cheeun Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee

In this work, to achieve high average bit-reduction with less accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ) method for SR networks that allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.

Image Super-Resolution • Quantization
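The content-aware bit allocation described in the CADyQ snippet above can be illustrated with a toy sketch. Note the assumptions: CADyQ learns its bit-selection policy, whereas this sketch uses a hand-picked proxy (mean gradient magnitude) and fixed thresholds purely for illustration — the function names, patch size, and threshold values are all hypothetical.

```python
import numpy as np

def local_complexity(patch):
    """Mean gradient magnitude as a crude proxy for local content complexity."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def allocate_bits(image, patch=8, thresholds=(2.0, 8.0), bits=(4, 6, 8)):
    """Assign a bit-width to each patch: flat regions get fewer bits,
    textured regions get more (illustrative stand-in for a learned policy)."""
    h, w = image.shape
    bitmap = np.empty((h // patch, w // patch), dtype=np.int64)
    for i in range(h // patch):
        for j in range(w // patch):
            c = local_complexity(
                image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch])
            # pick a larger bit-width for each threshold the complexity exceeds
            k = sum(c > t for t in thresholds)
            bitmap[i, j] = bits[k]
    return bitmap
```

A flat image would receive the minimum bit-width everywhere, while a noisy or textured image would be pushed toward the maximum — the core intuition of content-aware dynamic quantization.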

Batch Normalization Tells You Which Filter is Important

no code implementations • 2 Dec 2021 • Junghun Oh, Heewon Kim, Sungyong Baik, Cheeun Hong, Kyoung Mu Lee

The goal of filter pruning is to find unimportant filters to remove, making convolutional neural networks (CNNs) efficient without sacrificing performance.
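As a rough illustration of BN-based filter importance, one widely used proxy is the magnitude of each filter's batch-normalization scaling factor γ: filters whose γ is near zero produce suppressed outputs and are candidates for removal. This is a generic sketch of that proxy, not necessarily the criterion proposed in the paper above.

```python
import numpy as np

def prune_mask(gamma, keep_ratio=0.75):
    """Boolean keep-mask retaining the top keep_ratio filters by |gamma|.
    (Generic BN-scale proxy; the paper's actual criterion may differ.)"""
    g = np.abs(np.asarray(gamma, dtype=np.float64))
    n_keep = max(1, int(round(keep_ratio * g.size)))
    keep_idx = np.argsort(g)[::-1][:n_keep]  # indices of largest |gamma|
    mask = np.zeros(g.size, dtype=bool)
    mask[keep_idx] = True
    return mask
```

For example, with γ = [0.01, 1.2, 0.5, 0.003] and a 50% keep ratio, only the two filters with the largest |γ| survive.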

Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning

1 code implementation • ICCV 2021 • Sungyong Baik, Janghoon Choi, Heewon Kim, Dohee Cho, Jaesik Min, Kyoung Mu Lee

The problem is that each application and task may require a different auxiliary loss function, especially when tasks are diverse and distinct.

Few-Shot Learning

Searching for Controllable Image Restoration Networks

no code implementations • ICCV 2021 • Heewon Kim, Sungyong Baik, Myungsub Choi, Janghoon Choi, Kyoung Mu Lee

Diverse user preferences over images have recently led to a great amount of interest in controlling the imagery effects for image restoration tasks.

4k • Image Restoration • +1

DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks

2 code implementations • 21 Dec 2020 • Cheeun Hong, Heewon Kim, Sungyong Baik, Junghun Oh, Kyoung Mu Lee

Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.

Image Super-Resolution • Quantization
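The channel-wise, distribution-aware idea in the DAQ entry above can be sketched as: standardize each channel by its own statistics, quantize uniformly in the normalized domain, then map back. This is a minimal toy version under assumed choices (a fixed ±3σ clipping range, uniform levels); the paper's actual scheme may differ.

```python
import numpy as np

def daq_quantize(x, bits=4, eps=1e-8):
    """Channel-wise distribution-aware quantization sketch.
    x has shape (C, H, W); each channel uses its own mean/std."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + eps
    z = (x - mu) / sigma                      # per-channel standardization
    z = np.clip(z, -3.0, 3.0)                 # assumed fixed dynamic range
    levels = 2 ** bits - 1
    q = np.round((z + 3.0) / 6.0 * levels)    # uniform quantization
    zq = q / levels * 6.0 - 3.0
    return zq * sigma + mu                    # de-normalize to original scale
```

Because the quantization grid adapts to each channel's distribution, channels with very different value ranges are handled without a shared global scale.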

Meta-Learning with Adaptive Hyperparameters

2 code implementations • NeurIPS 2020 • Sungyong Baik, Myungsub Choi, Janghoon Choi, Heewon Kim, Kyoung Mu Lee

Despite its popularity, several recent works question the effectiveness of MAML when test tasks differ from training tasks, suggesting various task-conditioned methodologies to improve the initialization.

Few-Shot Learning

Domain Adaptation of Learned Features for Visual Localization

no code implementations • 21 Aug 2020 • Sungyong Baik, Hyo Jin Kim, Tianwei Shen, Eddy Ilg, Kyoung Mu Lee, Chris Sweeney

We tackle the problem of visual localization under changing conditions, such as time of day, weather, and seasons.

Domain Adaptation • Visual Localization

Scene-Adaptive Video Frame Interpolation via Meta-Learning

1 code implementation • CVPR 2020 • Myungsub Choi, Janghoon Choi, Sungyong Baik, Tae Hyun Kim, Kyoung Mu Lee

Finally, we show that our meta-learning framework can be easily employed to any video frame interpolation network and can consistently improve its performance on multiple benchmark datasets.

Meta-Learning • Test-time Adaptation • +1

Learning to Forget for Meta-Learning

1 code implementation • CVPR 2020 • Sungyong Baik, Seokil Hong, Kyoung Mu Lee

Model-agnostic meta-learning (MAML) tackles the problem by formulating prior knowledge as a common initialization across tasks, which is then used to quickly adapt to unseen tasks.

Few-Shot Learning
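The common-initialization-plus-fast-adaptation idea behind MAML, which the entry above builds on, can be sketched as a plain inner loop: a few gradient steps from the shared initialization on a new task. This toy example uses a hand-made quadratic task; it shows only the base MAML adaptation, not the per-task forgetting mechanism the paper adds on top.

```python
import numpy as np

def inner_adapt(theta, grad_fn, alpha=0.01, steps=5):
    """MAML-style inner loop: adapt shared initialization theta to one task
    with a few gradient steps (alpha is the inner learning rate)."""
    phi = np.array(theta, dtype=np.float64)
    for _ in range(steps):
        phi = phi - alpha * grad_fn(phi)
    return phi

# Toy task: minimize ||phi - target||^2, whose gradient is 2 * (phi - target).
target = np.array([1.0, -2.0])
grad_fn = lambda phi: 2.0 * (phi - target)
adapted = inner_adapt(np.zeros(2), grad_fn, alpha=0.1, steps=20)
```

In full MAML the outer loop then updates the initialization so that this inner loop succeeds across many sampled tasks; here only the inner adaptation is shown.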
