Search Results for author: Che-Tsung Lin

Found 5 papers, 1 paper with code

Text in the Dark: Extremely Low-Light Text Image Enhancement

no code implementations • 22 Apr 2024 • Che-Tsung Lin, Chun Chet Ng, Zhi Qin Tan, Wan Jun Nah, Xinyu Wang, Jie-Long Kew, Po-Hao Hsu, Shang-Hong Lai, Chee Seng Chan, Christopher Zach

We also labeled texts in the extremely low-light See In the Dark (SID) and ordinary LOw-Light (LOL) datasets to allow for objective assessment of extremely low-light image enhancement through scene text tasks.

Adjustable Visual Appearance for Generalizable Novel View Synthesis

no code implementations • 2 Jun 2023 • Josef Bengtson, David Nilsson, Che-Tsung Lin, Marcel Büsching, Fredrik Kahl

We present a generalizable novel view synthesis method that enables modifying the visual appearance of an observed scene, so that rendered views match a target weather or lighting condition, without any scene-specific training or access to reference views at the target condition.

Tasks: Generalizable Novel View Synthesis, Novel View Synthesis, +1

Extremely Low-light Image Enhancement with Scene Text Restoration

1 code implementation • 1 Apr 2022 • Po-Hao Hsu, Che-Tsung Lin, Chun Chet Ng, Jie-Long Kew, Mei Yih Tan, Shang-Hong Lai, Chee Seng Chan, Christopher Zach

Deep learning-based methods have made impressive progress in enhancing extremely low-light images, and the quality of the reconstructed images has generally improved.

Tasks: Image Restoration, Low-Light Image Enhancement, +2

AugGAN: Cross Domain Adaptation with GAN-based Data Augmentation

no code implementations • ECCV 2018 • Sheng-Wei Huang, Che-Tsung Lin, Shu-Ping Chen, Yen-Yi Wu, Po-Hao Hsu, Shang-Hong Lai

Deep learning based image-to-image translation methods aim at learning the joint distribution of the two domains and finding transformations between them.

Tasks: Data Augmentation, Domain Adaptation, +4
