no code implementations • 9 Mar 2023 • Jie-En Yao, Li-Yuan Tsao, Yi-Chen Lo, Roy Tseng, Chia-Che Chang, Chun-Yi Lee
Flow-based methods have demonstrated promising results in addressing the ill-posed nature of super-resolution (SR) by learning the distribution of high-resolution (HR) images with normalizing flows.
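The building block such flows typically rely on is an affine coupling transform, which is invertible by construction and has a cheap log-determinant. Below is a minimal pure-Python sketch of one coupling step; the conditioning scalar `cond` stands in for low-resolution image features, and the toy scale/shift functions are illustrative assumptions, not this paper's actual architecture.

```python
import math

def coupling_forward(x, cond):
    """One affine coupling step: the first half of the vector (plus a
    conditioning feature) parameterizes an affine map of the second half."""
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    # Toy scale/shift "networks": any functions of (x1, cond) work,
    # because inverting the coupling never requires inverting them.
    s = [math.tanh(a + cond) for a in x1]
    t = [0.5 * a + cond for a in x1]
    y2 = [b * math.exp(si) + ti for b, si, ti in zip(x2, s, t)]
    log_det = sum(s)  # log |det Jacobian|: just the sum of log-scales
    return x1 + y2, log_det

def coupling_inverse(y, cond):
    """Exact inverse: recompute s and t from the untouched half."""
    half = len(y) // 2
    y1, y2 = y[:half], y[half:]
    s = [math.tanh(a + cond) for a in y1]
    t = [0.5 * a + cond for a in y1]
    x2 = [(b - ti) * math.exp(-si) for b, si, ti in zip(y2, s, t)]
    return y1 + x2
```

Because the inverse is exact, the flow can map HR images to latents during training and sample HR images from latents at test time.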
1 code implementation • 16 Nov 2022 • Ting-Hsuan Liao, Huang-Ru Liao, Shan-Ya Yang, Jie-En Yao, Li-Yuan Tsao, Hsu-Shen Liu, Bo-Wun Cheng, Chen-Hao Chao, Chia-Che Chang, Yi-Chen Lo, Chun-Yi Lee
Despite their effectiveness, using depth as domain-invariant information in unsupervised domain adaptation (UDA) tasks can introduce several issues, such as high extraction costs and difficulty in achieving reliable prediction quality.
1 code implementation • ICLR 2022 • Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang, Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, Chun-Yi Lee
These methods facilitate the training procedure of conditional score models, as a mixture of scores can be separately estimated using a score model and a classifier.
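The decomposition behind this is Bayes' rule in score form: the conditional score splits as d/dx log p(x|y) = d/dx log p(x) + d/dx log p(y|x), since log p(y) does not depend on x. A toy 1-D check, assuming a standard-normal unconditional density and a sigmoid classifier (both illustrative choices, not the paper's models):

```python
import math

def log_p_x(x):
    """Unconditional log-density: standard normal."""
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def log_p_y_given_x(x):
    """Toy classifier: log p(y=1 | x) with p(y=1|x) = sigmoid(x)."""
    return -math.log(1.0 + math.exp(-x))

def score(log_density, x, eps=1e-5):
    """Numerical derivative of a log-density (its 'score')."""
    return (log_density(x + eps) - log_density(x - eps)) / (2 * eps)

def conditional_score(x):
    """Mixture of scores: unconditional score + classifier score."""
    return score(log_p_x, x) + score(log_p_y_given_x, x)
```

For this toy pair the conditional score has the closed form -x + sigmoid(-x), so the decomposition can be verified numerically; in practice each term is estimated by a separate learned model.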
1 code implementation • CVPR 2021 • Yi-Chen Lo, Chia-Che Chang, Hsuan-Chao Chiu, Yu-Hao Huang, Chia-Ping Chen, Yu-Lin Chang, Kevin Jou
In this paper, we present CLCC, a novel contrastive learning framework for color constancy.
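Contrastive frameworks of this kind are usually trained with an InfoNCE-style objective that pulls a positive pair together and pushes negatives apart. A generic sketch is below; the cosine similarity, temperature value, and pair construction are standard InfoNCE assumptions, not CLCC's specific color-constancy-aware formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: negative log-softmax of the positive pair's
    similarity against all negatives, at temperature tau."""
    logits = [cosine(anchor, positive) / tau]
    logits += [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

A well-aligned positive pair yields a loss near zero; a misaligned one yields a large loss, which is what drives the representation learning.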
no code implementations • ICLR 2019 • Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, Hwann-Tzong Chen
The fact that patches are generated independently of one another inspires a wide range of new applications: firstly, "Patch-Inspired Image Generation" enables us to generate the entire image based on a single patch.
1 code implementation • ICCV 2019 • Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, Hwann-Tzong Chen
On the computation side, COCO-GAN has a built-in divide-and-conquer paradigm that reduces memory requirements during training and inference, provides high parallelism, and can generate parts of images on demand.
Ranked #1 on Image Generation on CelebA-HQ 64x64
no code implementations • 3 Dec 2018 • Wei-Chun Chen, Chia-Che Chang, Chien-Yu Lu, Che-Rung Lee
One promising method is knowledge distillation (KD), which creates a fast-to-execute student model to mimic a large teacher network.
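The classic KD objective matches the student's temperature-softened output distribution to the teacher's via a KL divergence. A minimal sketch, assuming the standard Hinton-style formulation rather than this paper's exact variant:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on softened distributions.
    The T^2 factor keeps gradient magnitudes comparable across
    temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl
```

The loss is zero when the student's logits match the teacher's and grows as their softened distributions diverge, which is what lets a small student absorb the teacher's "dark knowledge" about inter-class similarities.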
1 code implementation • ECCV 2018 • Chia-Che Chang, Chieh Hubert Lin, Che-Rung Lee, Da-Cheng Juan, Wei Wei, Hwann-Tzong Chen
Generative adversarial networks (GANs) often suffer from unpredictable mode-collapsing during training.
Ranked #18 on Image Generation on CelebA 64x64