Search Results for author: Seonghyeon Nam

Found 16 papers, 6 papers with code

FSID: Fully Synthetic Image Denoising via Procedural Scene Generation

1 code implementation • 7 Dec 2022 • Gyeongmin Choe, Beibei Du, Seonghyeon Nam, Xiaoyu Xiang, Bo Zhu, Rakesh Ranjan

To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks.

Image Denoising • Scene Generation • +1
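
As a concrete illustration of the recipe the abstract describes, a minimal procedural generator for denoising training pairs might look like the sketch below. The shape statistics and the Poisson-Gaussian noise model are assumptions for illustration, not FSID's actual pipeline.

```python
# Illustrative sketch of a procedural clean/noisy pair generator for
# denoising training. Shapes, parameters, and the Poisson-Gaussian
# noise model are assumptions, not FSID's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

def render_clean(size=128, n_shapes=20):
    """Procedurally render a grayscale scene of random rectangles."""
    img = np.full((size, size), rng.uniform(0.1, 0.9), dtype=np.float32)
    for _ in range(n_shapes):
        x0, y0 = rng.integers(0, size, 2)
        w, h = rng.integers(4, size // 2, 2)
        img[y0:y0 + h, x0:x0 + w] = rng.uniform(0.0, 1.0)
    return np.clip(img, 0.0, 1.0)

def add_noise(clean, gain=0.01, read_sigma=0.005):
    """Poisson-Gaussian sensor noise: shot noise plus read noise."""
    shot = rng.poisson(clean / gain) * gain          # signal-dependent
    read = rng.normal(0.0, read_sigma, clean.shape)  # signal-independent
    return np.clip(shot + read, 0.0, 1.0).astype(np.float32)

clean = render_clean()
noisy = add_noise(clean)   # (noisy, clean) becomes one training pair
```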

Large Scale Multi-Illuminant (LSMI) Dataset for Developing White Balance Algorithm Under Mixed Illumination

1 code implementation • ICCV 2021 • Dongyoung Kim, Jinwoo Kim, Seonghyeon Nam, Dongwoo Lee, Yeonkyung Lee, Nahyup Kang, Hyong-Euk Lee, ByungIn Yoo, Jae-Joon Han, Seon Joo Kim

Images in our dataset are mostly captured with illuminants existing in the scene, and the ground-truth illumination is computed by taking the difference between images with different illumination combinations.
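
A minimal sketch of this differencing idea: with linear raw images, light is additive, so a capture under illuminants A+B minus a capture under A alone isolates the contribution of B. Variable names and the chromaticity normalization below are illustrative, not the dataset's code.

```python
# Isolate one illuminant's per-pixel contribution by differencing two
# linear captures: (A+B) - A = B. Illustrative only.
import numpy as np

def illuminant_contribution(img_ab, img_a, eps=1e-6):
    """Per-pixel RGB contribution of illuminant B from two linear captures."""
    diff = np.clip(img_ab - img_a, 0.0, None)                 # light from B only
    chroma = diff / (diff.sum(axis=-1, keepdims=True) + eps)  # per-pixel chromaticity
    return diff, chroma

img_a  = np.random.rand(4, 4, 3).astype(np.float32) * 0.5     # illuminant A only
img_ab = img_a + np.random.rand(4, 4, 3).astype(np.float32) * 0.5  # A + B
b_light, b_chroma = illuminant_contribution(img_ab, img_a)
```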

Unsupervised Keypoint Learning for Guiding Class-Conditional Video Prediction

1 code implementation • NeurIPS 2019 • Yunji Kim, Seonghyeon Nam, In Cho, Seon Joo Kim

To generate future frames, we first detect keypoints of a moving object and predict future motion as a sequence of keypoints.

Video Prediction
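
A schematic sketch of the two-stage pipeline the abstract describes: detect keypoints in observed frames, roll future keypoints forward with a sequence model, then decode frames. All module sizes and the stand-in linear detector/decoder are assumptions, not the paper's network.

```python
# Keypoint-based video prediction skeleton: detect -> predict -> decode.
import torch
import torch.nn as nn

K = 10  # number of keypoints, assumed

detector  = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2 * K))  # frame -> (x, y) per keypoint
predictor = nn.GRU(input_size=2 * K, hidden_size=128, batch_first=True)
to_kpts   = nn.Linear(128, 2 * K)
decoder   = nn.Linear(2 * K, 64 * 64)                               # keypoints -> frame

frames = torch.rand(1, 4, 64, 64)                 # 4 observed frames
kpts = detector(frames.view(4, -1)).view(1, 4, 2 * K)

out, h = predictor(kpts)                          # encode observed motion
future, preds = kpts[:, -1:], []
for _ in range(8):                                # predict 8 future steps
    out, h = predictor(future, h)                 # advance keypoint state
    future = to_kpts(out)                         # next keypoint set
    preds.append(decoder(future).view(1, 64, 64)) # render predicted frame
```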

Dense Interspecies Face Embedding

1 code implementation • NeurIPS 2022 • Sejong Yang, Subin Jeon, Seonghyeon Nam, Seon Joo Kim

There are three main obstacles for interspecies face understanding: (1) lack of animal data compared to human, (2) ambiguous connection between faces of various animals, and (3) extreme shape and style variance.

Image Manipulation • Interspecies Facial Keypoint Transfer • +2

Learning sRGB-to-Raw-RGB De-rendering with Content-Aware Metadata

1 code implementation • CVPR 2022 • Seonghyeon Nam, Abhijith Punnappurath, Marcus A. Brubaker, Michael S. Brown

Our experiments show that our learned sampling can adapt to the image content to produce better raw reconstructions than existing methods.

Raw reconstruction
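
A sketch of the metadata idea: keep a tiny set of raw pixel samples alongside the sRGB image and use them to constrain de-rendering. The paper learns where to sample; here a gradient-magnitude heuristic stands in for the learned sampler, purely for illustration.

```python
# Content-aware raw sampling with a hand-crafted saliency heuristic
# (a stand-in for the paper's learned sampler).
import numpy as np

def sample_metadata(raw, srgb, budget=64):
    """Pick `budget` pixel locations where the sRGB content varies most."""
    gy, gx = np.gradient(srgb.mean(axis=-1))
    saliency = np.abs(gx) + np.abs(gy)
    idx = np.argsort(saliency.ravel())[-budget:]   # most textured pixels
    ys, xs = np.unravel_index(idx, srgb.shape[:2])
    return ys, xs, raw[ys, xs]                     # locations + raw values

srgb = np.random.rand(32, 32, 3).astype(np.float32)
raw  = srgb ** 2.2                                 # toy "raw", assumed
ys, xs, vals = sample_metadata(raw, srgb)
# A de-rendering network would be trained to reproduce `vals` at (ys, xs).
```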

Temporally smooth online action detection using cycle-consistent future anticipation

1 code implementation • 16 Apr 2021 • Young Hwi Kim, Seonghyeon Nam, Seon Joo Kim

Many video understanding tasks work in the offline setting by assuming that the input video is given from start to end.

Autonomous Driving • Online Action Detection • +1
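
A minimal sketch of the online setting the abstract contrasts with offline processing: frames arrive one at a time and a decision must be emitted immediately, using only past context. The GRU detector below is a stand-in, not the paper's anticipation model.

```python
# Streaming (online) action detection loop: causal state, no lookahead.
import torch
import torch.nn as nn

N_CLASSES = 5                        # assumed number of action classes
encoder = nn.Linear(512, 128)        # per-frame feature -> hidden input
rnn     = nn.GRUCell(128, 128)
head    = nn.Linear(128, N_CLASSES)

h = torch.zeros(1, 128)
for t in range(100):                 # frames streaming in
    feat = torch.rand(1, 512)        # e.g., a CNN feature of frame t
    h = rnn(encoder(feat), h)        # update causal (past-only) state
    action = head(h).argmax(dim=-1)  # decision for frame t, no future frames
```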

Modelling the Scene Dependent Imaging in Cameras with a Deep Neural Network

no code implementations • ICCV 2017 • Seonghyeon Nam, Seon Joo Kim

Often called radiometric calibration, the process of recovering RAW images from processed images (JPEG format in the sRGB color space) is essential for many computer vision tasks that rely on physically accurate radiance values.

Deblurring • Image Deblurring
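
A sketch of the classic radiometric-calibration direction: undo the camera's rendering stages to approximate linear RAW. The specific gamma, color matrix, and white-balance gains below are illustrative assumptions; the paper instead models the scene-dependent pipeline with a deep neural network.

```python
# Toy inverse rendering: sRGB -> linear -> undo color matrix -> undo WB.
import numpy as np

CCM = np.array([[ 1.6, -0.4, -0.2],   # assumed color correction matrix
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
WB  = np.array([2.0, 1.0, 1.5])       # assumed white-balance gains

def srgb_to_raw(srgb):
    linear = np.where(srgb <= 0.04045, srgb / 12.92,
                      ((srgb + 0.055) / 1.055) ** 2.4)  # invert sRGB gamma
    cam = linear @ np.linalg.inv(CCM).T                 # invert color matrix
    return cam / WB                                     # invert white balance

raw = srgb_to_raw(np.random.rand(8, 8, 3))
```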

Deep Semantics-Aware Photo Adjustment

no code implementations • 26 Jun 2017 • Seonghyeon Nam, Seon Joo Kim

Spatially varying photo adjustment methods have also been studied by exploiting high-level features and semantic label maps.

Photo Retouching • Scene Parsing
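
A sketch of spatially varying adjustment driven by a semantic map: each pixel receives an edit chosen by its semantic label. The two-label setup and the gain values are assumptions for illustration only; the paper learns the mapping from semantics to adjustments end to end.

```python
# Per-pixel photo adjustment selected by a toy semantic label map.
import numpy as np

GAINS = {0: np.array([1.0, 1.0, 1.2]),   # label 0 (e.g., "sky"): cooler
         1: np.array([1.2, 1.1, 1.0])}   # label 1 (e.g., "skin"): warmer

def adjust(img, labels):
    out = img.copy()
    for lbl, gain in GAINS.items():
        mask = labels == lbl
        out[mask] = np.clip(img[mask] * gain, 0.0, 1.0)  # label-specific edit
    return out

img    = np.random.rand(16, 16, 3)
labels = (np.random.rand(16, 16) > 0.5).astype(int)      # toy semantic map
retouched = adjust(img, labels)
```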

Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language

no code implementations • NeurIPS 2018 • Seonghyeon Nam, Yunji Kim, Seon Joo Kim

Our task aims to semantically modify visual attributes of an object in an image according to the text describing the new visual appearance.

Generative Adversarial Network

Cross-Identity Motion Transfer for Arbitrary Objects through Pose-Attentive Video Reassembling

no code implementations • ECCV 2020 • Subin Jeon, Seonghyeon Nam, Seoung Wug Oh, Seon Joo Kim

To reduce the training-testing discrepancy of the self-supervised learning, a novel cross-identity training scheme is additionally introduced.

Self-Supervised Learning
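
A sketch of the train/test discrepancy the abstract targets: standard self-supervised motion transfer reconstructs a frame of the same video, while at test time the driving video shows a different identity. Mixing identities during training narrows that gap. The dataset layout and mixing probability here are hypothetical placeholders.

```python
# Cross-identity pair sampling for self-supervised motion transfer.
import random

def make_training_pair(videos, cross_identity_prob=0.5):
    """videos: dict mapping identity -> list of frames."""
    src_id = random.choice(list(videos))
    source = random.choice(videos[src_id])
    if random.random() < cross_identity_prob:
        drv_id = random.choice([i for i in videos if i != src_id])  # test-like pair
    else:
        drv_id = src_id                      # usual same-video reconstruction
    driving = random.choice(videos[drv_id])
    return source, driving                   # model(source, driving) -> output
```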

Neural Image Representations for Multi-Image Fusion and Layer Separation

no code implementations • 2 Aug 2021 • Seonghyeon Nam, Marcus A. Brubaker, Michael S. Brown

We propose a framework for aligning and fusing multiple images into a single view using neural image representations (NIRs), also known as implicit or coordinate-based neural representations.

Optical Flow Estimation
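
A minimal sketch of a coordinate-based neural image representation: an MLP maps pixel coordinates to color, and fitting it jointly to several registered frames fuses them into one view. Alignment and the paper's layer separation are omitted; the network sizes are assumptions.

```python
# Fit one coordinate MLP to several pre-aligned frames to fuse them.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3), nn.Sigmoid())
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

H = W = 32
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coords
frames = torch.rand(3, H * W, 3)                        # 3 pre-aligned observations

for step in range(200):                                 # fit one NIR to all frames
    pred = mlp(coords)
    loss = ((pred.unsqueeze(0) - frames) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

fused = mlp(coords).reshape(H, W, 3)                    # the fused single view
```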
