Search Results for author: Yu-Lun Liu

Found 36 papers, 18 papers with code

GCC: Generative Color Constancy via Diffusing a Color Checker

no code implementations 24 Feb 2025 Chen-Wei Chang, Cheng-De Fan, Chia-Che Chang, Yi-Chen Lo, Yu-Chee Tseng, Jiun-Long Huang, Yu-Lun Liu

Color constancy methods often struggle to generalize across different camera sensors due to varying spectral sensitivities.

Color Constancy · Data Augmentation
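For background on the color constancy task this paper addresses: once an illuminant has been estimated, the standard correction is a von Kries (diagonal) transform that divides each channel by the illuminant's relative strength. The sketch below is a generic illustration of that correction, not the paper's diffusion-based method; the function name and normalization choice are illustrative.

```python
import numpy as np

def apply_von_kries(image, illuminant):
    """White-balance a linear-RGB image under the von Kries (diagonal) model.

    image: HxWx3 array in linear RGB; illuminant: length-3 RGB of the light.
    Channels are scaled so the estimated illuminant maps to neutral gray
    (gains normalized so the green gain is 1).
    """
    illuminant = np.asarray(illuminant, dtype=np.float64)
    gains = illuminant[1] / illuminant
    return np.clip(image * gains, 0.0, 1.0)

# A gray patch lit by a warm (reddish) illuminant maps back to neutral gray.
patch = np.full((2, 2, 3), [0.6, 0.5, 0.4])
balanced = apply_von_kries(patch, [0.6, 0.5, 0.4])
```

The generalization problem the abstract mentions arises because `illuminant` estimates depend on each sensor's spectral sensitivity, so an estimator trained on one camera transfers poorly to another.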

AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting

no code implementations 7 Feb 2025 Chung-Ho Wu, Yang-Jung Chen, Ying-Huan Chen, Jie-Ying Lee, Bo-Hsu Ke, Chun-Wei Tuan Mu, Yi-Chuan Huang, Chin-Yang Lin, Min-Hung Chen, Yen-Yu Lin, Yu-Lun Liu

Three-dimensional scene inpainting is crucial for applications from virtual reality to architectural visualization, yet existing methods struggle with view consistency and geometric accuracy in 360° unbounded scenes.

CorrFill: Enhancing Faithfulness in Reference-based Inpainting with Correspondence Guidance in Diffusion Models

no code implementations 4 Jan 2025 Kuan-Hung Liu, Cheng-Kun Yang, Min-Hung Chen, Yu-Lun Liu, Yen-Yu Lin

In the task of reference-based image inpainting, an additional reference image is provided to restore a damaged target image to its original state.

Image Inpainting

ReF-LDM: A Latent Diffusion Model for Reference-based Face Image Restoration

no code implementations 6 Dec 2024 Chi-Wei Hsiao, Yu-Lun Liu, Cheng-Kun Yang, Sheng-Po Kuo, Kevin Jou, Chia-Ping Chen

While recent works on blind face image restoration have successfully produced impressive high-quality (HQ) images with abundant details from low-quality (LQ) input images, the generated content may not accurately reflect the real appearance of a person.

Image Restoration

FIPER: Generalizable Factorized Features for Robust Low-Level Vision Models

no code implementations 23 Oct 2024 Yang-Che Sun, Cheng Yu Yeo, Ernie Chu, Jun-Cheng Chen, Yu-Lun Liu

In this work, we propose using a unified representation, termed Factorized Features, for low-level vision tasks, where we test on Single Image Super-Resolution (SISR) and Image Compression.

Image Compression · Image Super-Resolution

SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes

no code implementations 22 Oct 2024 Cheng-De Fan, Chen-Wei Chang, Yi-Ruei Liu, Jie-Ying Lee, Jiun-Long Huang, Yu-Chee Tseng, Yu-Lun Liu

We present SpectroMotion, a novel approach that combines 3D Gaussian Splatting (3DGS) with physically-based rendering (PBR) and deformation fields to reconstruct dynamic specular scenes.

3DGS · 3D Reconstruction

FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors

no code implementations 21 Oct 2024 Chin-Yang Lin, Chung-Ho Wu, Chang-Han Yeh, Shih-Han Yen, Cheng Sun, Yu-Lun Liu

Neural Radiance Fields (NeRF) face significant challenges in extreme few-shot scenarios, primarily due to overfitting and long training times.

3D Scene Reconstruction · NeRF +2

Precise Pick-and-Place using Score-Based Diffusion Networks

no code implementations 15 Sep 2024 Shih-Wei Guo, Tsu-Ching Hsiao, Yu-Lun Liu, Chun-Yi Lee

In this paper, we propose a novel coarse-to-fine continuous pose diffusion method to enhance the precision of pick-and-place operations within robotic manipulation tasks.

Pose Estimation

Matting by Generation

no code implementations 30 Jul 2024 Zhixiang Wang, Baiang Li, Jian Wang, Yu-Lun Liu, Jinwei Gu, Yung-Yu Chuang, Shin'ichi Satoh

This paper introduces an innovative approach for image matting that redefines the traditional regression-based task as a generative modeling challenge.

Image Matting

BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes

no code implementations 22 Jul 2024 Chih-Hai Su, Chih-Yao Hu, Shr-Ruei Tsai, Jie-Ying Lee, Chin-Yang Lin, Yu-Lun Liu

This paper presents a novel approach called BoostMVSNeRFs to enhance the rendering quality of MVS-based NeRFs in large-scale scenes.

NeRF

GenRC: Generative 3D Room Completion from Sparse Image Collections

1 code implementation 17 Jul 2024 Ming-Feng Li, Yueh-Feng Ku, Hong-Xuan Yen, Chi Liu, Yu-Lun Liu, Albert Y. C. Chen, Cheng-Hao Kuo, Min Sun

GenRC outperforms state-of-the-art methods under most appearance and geometric metrics on the ScanNet and ARKitScenes datasets, even though GenRC is neither trained on these datasets nor reliant on predefined camera trajectories.

Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation

no code implementations 18 Jun 2024 Ning-Hsu Wang, Yu-Lun Liu

Our approach uses state-of-the-art perspective depth estimation models as teacher models to generate pseudo labels through a six-face cube projection technique, enabling efficient labeling of depth in 360-degree images.

Autonomous Navigation · Data Augmentation +2
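The six-face cube projection mentioned in the abstract turns a 360-degree equirectangular panorama into six 90°-field-of-view perspective views that an ordinary depth model can process. A minimal sketch of the geometry is below; it only computes the longitude/latitude lookup angles for one face, and the function name and face parameterization are illustrative, not from the paper.

```python
import numpy as np

def cube_face_to_equirect_coords(face_size):
    """Map pixels of the 'front' cube face to (longitude, latitude) angles.

    Each cube face spans a 90-degree field of view; sampling the
    equirectangular panorama at the returned angles produces the
    perspective view for that face. The other five faces differ only
    in how (x, y, z) are assigned from the face-plane grid.
    """
    # Pixel grid in [-1, 1] on the face plane at unit distance from the center.
    u, v = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    x, y, z = u, -v, np.ones_like(u)           # front face looks along +z
    lon = np.arctan2(x, z)                     # spans [-pi/4, pi/4] for this face
    lat = np.arcsin(y / np.sqrt(x**2 + y**2 + z**2))
    return lon, lat

lon, lat = cube_face_to_equirect_coords(256)
```

Bilinear sampling of the panorama at `(lon, lat)` (converted to pixel coordinates) then yields the face image on which a perspective depth model can run to produce the pseudo labels the abstract describes.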

NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing

1 code implementation 10 Jun 2024 Ting-Hsuan Chen, Jiewen Chan, Hau-Shiang Shiu, Shih-Han Yen, Chang-Han Yeh, Yu-Lun Liu

We propose a video editing framework, NaRCan, which integrates a hybrid deformation field and diffusion prior to generate high-quality natural canonical images to represent the input video.

Scheduling · Video Temporal Consistency

DeNVeR: Deformable Neural Vessel Representations for Unsupervised Video Vessel Segmentation

no code implementations 3 Jun 2024 Chun-Hung Wu, Shih-Hong Chen, Chih-Yao Hu, Hsin-Yu Wu, Kai-Hsin Chen, Yu-You Chen, Chih-Hai Su, Chih-Kuo Lee, Yu-Lun Liu

This paper presents Deformable Neural Vessel Representations (DeNVeR), an unsupervised approach for vessel segmentation in X-ray angiography videos without annotated ground truth.

Optical Flow Estimation · Segmentation

Image-Text Co-Decomposition for Text-Supervised Semantic Segmentation

1 code implementation CVPR 2024 Ji-Jia Wu, Andy Chia-Hao Chang, Chieh-Yu Chuang, Chun-Pei Chen, Yu-Lun Liu, Min-Hung Chen, Hou-Ning Hu, Yung-Yu Chuang, Yen-Yu Lin

This paper addresses text-supervised semantic segmentation, aiming to learn a model capable of segmenting arbitrary visual concepts within images by using only image-text pairs without dense annotations.

Contrastive Learning · Language Modeling +5

Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields

1 code implementation 20 Feb 2024 Bo-Yu Cheng, Wei-Chen Chiu, Yu-Lun Liu

In this paper, we propose an algorithm that allows joint refinement of camera poses and scene geometry represented by a decomposed low-rank tensor, using only 2D images as supervision.

Novel View Synthesis

Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction

1 code implementation ICCV 2023 Su-Kai Chen, Hung-Lin Yen, Yu-Lun Liu, Min-Hung Chen, Hou-Ning Hu, Wen-Hsiao Peng, Yen-Yu Lin

To address this, we propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs, including those unseen during training.

Deep Learning · HDR Reconstruction +1

ImGeoNet: Image-induced Geometry-aware Voxel Representation for Multi-view 3D Object Detection

1 code implementation ICCV 2023 Tao Tu, Shun-Po Chuang, Yu-Lun Liu, Cheng Sun, Ke Zhang, Donna Roy, Cheng-Hao Kuo, Min Sun

The results demonstrate that ImGeoNet outperforms the current state-of-the-art multi-view image-based method, ImVoxelNet, on all three datasets in terms of detection accuracy.

3D Object Detection · Object Detection

Dual Associated Encoder for Face Restoration

1 code implementation 14 Aug 2023 Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C. K. Chan, Ming-Hsuan Yang

Restoring facial details from low-quality (LQ) images has remained a challenging problem due to its ill-posedness induced by various degradations in the wild.

Blind Face Restoration

DisCO: Portrait Distortion Correction with Perspective-Aware 3D GANs

no code implementations 23 Feb 2023 Zhixiang Wang, Yu-Lun Liu, Jia-Bin Huang, Shin'ichi Satoh, Sizhuo Ma, Gurunandan Krishnan, Jian Wang

Close-up facial images captured at short distances often suffer from perspective distortion, resulting in exaggerated facial features and unnatural/unattractive appearances.

Scheduling

Robust Dynamic Radiance Fields

1 code implementation CVPR 2023 Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang

Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.

Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision

1 code implementation ICCV 2021 Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, Kevin Jou

In this paper, we propose a method to estimate not only a depth map but also an all-in-focus (AiF) image from a set of images with different focus positions (known as a focal stack).

Depth Estimation

Hybrid Neural Fusion for Full-frame Video Stabilization

2 code implementations ICCV 2021 Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang

Existing video stabilization methods often generate visible distortion or require aggressive cropping of frame boundaries, resulting in smaller fields of view.

Video Stabilization

Explorable Tone Mapping Operators

no code implementations 20 Oct 2020 Chien-Chuan Su, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Soo-Chang Pei

Tone mapping aims to preserve the visual information of HDR images in a medium with a limited dynamic range.

Diversity · Tone Mapping

Learning Camera-Aware Noise Models

1 code implementation ECCV 2020 Ke-Chi Chang, Ren Wang, Hung-Jin Lin, Yu-Lun Liu, Chia-Ping Chen, Yu-Lin Chang, Hwann-Tzong Chen

Modeling imaging sensor noise is a fundamental problem for image processing and computer vision applications.

Noise Estimation

Learning to See Through Obstructions with Layered Decomposition

1 code implementation 11 Aug 2020 Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or adherent raindrops, from a short sequence of images captured by a moving camera.

Optical Flow Estimation

Learning to See Through Obstructions

1 code implementation CVPR 2020 Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, Jia-Bin Huang

We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or raindrops, from a short sequence of images captured by a moving camera.

Optical Flow Estimation · Reflection Removal
