no code implementations • 29 Dec 2021 • Hitika Tiwari, Min-Hung Chen, Yi-Min Tsai, Hsien-Kai Kuo, Hung-Jen Chen, Kevin Jou, K. S. Venkatesh, Yong-Sheng Chen
On the three variations of the CelebA test dataset (rational occlusions, delusional occlusions, and noisy face images), our method outperforms the current state-of-the-art method by large margins (e.g., for the shape-based 3D vertex errors, a reduction from 0.146 to 0.048 for rational occlusions, from 0.292 to 0.061 for delusional occlusions, and from 0.269 to 0.053 for noisy face images), demonstrating the effectiveness of the proposed approach.
1 code implementation • 17 May 2021 • Andrey Ignatov, Cheng-Ming Chiang, Hsien-Kai Kuo, Anastasia Sycheva, Radu Timofte, Min-Hung Chen, Man-Yu Lee, Yu-Syuan Xu, Yu Tseng, Shusong Xu, Jin Guo, Chao-Hung Chen, Ming-Chun Hsyu, Wen-Chia Tsai, Chao-Wei Chen, Grigory Malivenko, Minsu Kwon, Myungje Lee, Jaeyoon Yoo, Changbeom Kang, Shinjo Wang, Zheng Shaolong, Hao Dejun, Xie Fen, Feng Zhuang, Yipeng Ma, Jingyang Peng, Tao Wang, Fenglong Song, Chih-Chung Hsu, Kwan-Lin Chen, Mei-Hsuang Wu, Vishal Chudasama, Kalpesh Prajapati, Heena Patel, Anjali Sarvaiya, Kishor Upla, Kiran Raja, Raghavendra Ramachandra, Christoph Busch, Etienne de Stoutz
As the quality of mobile cameras starts to play a crucial role in modern smartphones, more and more attention is now being paid to ISP algorithms used to improve various perceptual aspects of mobile photos.
no code implementations • 22 Apr 2021 • Min-Fong Hong, Hao-Yun Chen, Min-Hung Chen, Yu-Syuan Xu, Hsien-Kai Kuo, Yi-Min Tsai, Hung-Jen Chen, Kevin Jou
We propose an NSS method that automatically searches for efficiency-aware network spaces, reducing the manual effort and immense cost of discovering satisfactory ones.
no code implementations • 15 Apr 2021 • Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib
Most progress in action segmentation comes from densely annotated data for fully supervised learning.
Ranked #6 on Action Segmentation on Breakfast
1 code implementation • CVPR 2020 • Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib, Zsolt Kira
Despite the recent progress of fully-supervised action segmentation techniques, the performance is still not fully satisfactory.
Ranked #6 on Action Segmentation on GTEA
no code implementations • 6 Nov 2019 • Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, Chao-Han Huck Yang, Jesper Tegner, Yi-Chang James Tsai
Performing driving behaviors based on causal reasoning is essential to ensure driving safety.
2 code implementations • 29 Aug 2019 • Dogancan Temel, Min-Hung Chen, Ghassan AlRegib
We investigate the effect of challenging conditions through spectral analysis and show that they can lead to distinct magnitude-spectrum characteristics.
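As a rough illustration of the kind of spectral comparison described above, the sketch below computes the log-magnitude spectrum of an image via a 2D FFT and contrasts a clean capture with a challenging one. This is not the paper's code; the file names and the single difference metric are placeholder assumptions.

```python
# Minimal sketch (assumptions: placeholder file names, images of equal size).
import numpy as np
from PIL import Image

def log_magnitude_spectrum(path):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    spec = np.fft.fftshift(np.fft.fft2(img))  # center the zero-frequency component
    return np.log1p(np.abs(spec))             # log scale for easier comparison

clean = log_magnitude_spectrum("sign_clean.png")        # hypothetical clean frame
challenging = log_magnitude_spectrum("sign_foggy.png")  # hypothetical challenging frame
print("mean |spectrum| difference:", np.abs(clean - challenging).mean())
```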
5 code implementations • ICCV 2019 • Min-Hung Chen, Zsolt Kira, Ghassan AlRegib, Jaekwon Yoo, Ruxin Chen, Jian Zheng
Finally, we propose the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g., a 7.9% accuracy gain over "Source only", from 73.9% to 81.8%, on "HMDB --> UCF", and a 10.3% gain on "Kinetics --> Gameplay"); a minimal sketch of the idea follows the leaderboard entry below.
Ranked #1 on Domain Adaptation on UCF --> HMDB (full)
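The sketch below illustrates the general idea behind the TA3N entry above: a gradient-reversal domain classifier for adversarial alignment, with temporal features re-weighted by the domain classifier's uncertainty so that domain-ambiguous segments contribute more to the pooled video feature. It is an assumption-laden approximation, not the authors' released implementation; module sizes, class names, and the exact attention formula are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, beta):
        ctx.beta = beta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.beta * grad_output, None

class TemporalAttentiveAdapter(nn.Module):
    def __init__(self, feat_dim=256, beta=1.0):  # sizes are illustrative assumptions
        super().__init__()
        self.beta = beta
        self.domain_clf = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, feats):                      # feats: (B, T, D) segment features
        rev = GradReverse.apply(feats, self.beta)  # adversarial signal for the feature extractor
        d_logits = self.domain_clf(rev)            # per-segment domain predictions (B, T, 2)
        p = F.softmax(d_logits, dim=-1)
        entropy = -(p * torch.log(p + 1e-8)).sum(-1)            # (B, T): high = domain-ambiguous
        attn = 1.0 + entropy                                     # illustrative residual weighting;
                                                                 # the paper's exact form may differ
        video_feat = (attn.unsqueeze(-1) * feats).mean(dim=1)    # attention-weighted temporal pooling
        return video_feat, d_logits
```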
no code implementations • 16 Jun 2019 • Jian Zheng, Sudha Krishnamurthy, Ruxin Chen, Min-Hung Chen, Zhenhao Ge, Xiaohua LI
However, little work has been done on game image captioning, which has unique characteristics and requirements.
5 code implementations • 26 May 2019 • Min-Hung Chen, Zsolt Kira, Ghassan AlRegib
Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on three video DA datasets.
Ranked #1 on Domain Adaptation on UCF-to-Olympic
2 code implementations • 19 Feb 2019 • Dogancan Temel, Tariq Alshawi, Min-Hung Chen, Ghassan AlRegib
Experimental results show that benchmarked algorithms are highly sensitive to the tested challenging conditions, which result in an average performance drop of 0.17 in precision and of 0.28 in recall under severe conditions.
4 code implementations • 30 Mar 2017 • Chih-Yao Ma, Min-Hung Chen, Zsolt Kira, Ghassan AlRegib
We demonstrate that both RNNs (using LSTMs) and Temporal-ConvNets operating on spatiotemporal feature matrices can exploit spatiotemporal dynamics to improve overall performance; a brief sketch follows the leaderboard entry below.
Ranked #50 on Action Recognition on UCF101
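To make the two temporal modeling routes mentioned above concrete, here is a hedged sketch of an LSTM head and a temporal ConvNet head, both operating on a precomputed spatiotemporal feature matrix (T frames x D CNN features per clip). Layer sizes, class names, and the late-fusion weighting are assumptions, not the paper's TS-LSTM / Temporal-Inception definitions.

```python
import torch
import torch.nn as nn

class TemporalLSTMHead(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, num_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (B, T, D) per-frame CNN features
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])     # classify from the last hidden state

class TemporalConvHead(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=101):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))   # pool over the temporal axis
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):              # x: (B, T, D)
        h = self.conv(x.transpose(1, 2)).squeeze(-1)  # convolve along time
        return self.fc(h)

# Illustrative late fusion of the two heads, e.g. averaging class scores:
# scores = 0.5 * lstm_head(feats) + 0.5 * conv_head(feats)
```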
no code implementations • CVPR 2014 • Yen-Yu Lin, Ju-Hsuan Hua, Nick C. Tang, Min-Hung Chen, Hong-Yuan Mark Liao
Our approach aims to enhance action recognition in RGB videos by leveraging the extra database.