no code implementations • 3 Jul 2022 • Zhangkai Ni, Wenhan Yang, Hanli Wang, Shiqi Wang, Lin Ma, Sam Kwong
Freed from the fundamental limitation of fitting to paired training data, recent unsupervised low-light enhancement methods excel at adjusting the illumination and contrast of images.
no code implementations • 29 May 2022 • Wending Yan, Lu Xu, Wenhan Yang, Robby T. Tan
Our single image module employs a raindrop removal network to generate initial raindrop removal results, and creates a mask representing the differences between the input and the initial output.
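The mask step described in this entry can be sketched in a few lines of NumPy; the raindrop-removal network itself is omitted, `initial` stands in for its first-pass output, and the 0.1 threshold is an illustrative choice, not a value from the paper:

```python
import numpy as np

def difference_mask(inp, initial, thresh=0.1):
    """Binary mask marking pixels where the initial raindrop-removal
    output differs noticeably from the input, i.e. likely raindrop regions."""
    diff = np.abs(inp.astype(np.float32) - initial.astype(np.float32))
    # Collapse channels so the mask is per-pixel, not per-channel.
    if diff.ndim == 3:
        diff = diff.mean(axis=-1)
    return (diff > thresh).astype(np.float32)

# Toy example: a 4x4 grayscale "image" where two pixels were changed.
inp = np.zeros((4, 4), dtype=np.float32)
initial = inp.copy()
initial[1, 1] = 0.8  # a pixel altered by raindrop removal
initial[2, 3] = 0.5
mask = difference_mask(inp, initial)
```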
1 code implementation • 4 May 2022 • Yongzhen Wang, Xuefeng Yan, Fu Lee Wang, Haoran Xie, Wenhan Yang, Mingqiang Wei, Jing Qin
From a new perspective, this paper explores contrastive learning with adversarial training to leverage unpaired real-world hazy and clean images, thereby avoiding the gap between synthetic and real-world haze.
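The contrastive idea here can be illustrated with a minimal sketch: pull the restored image toward a clean exemplar and push it away from a hazy one. The L1 distance and the ratio form below are placeholders for the paper's actual loss, chosen only for illustration:

```python
import numpy as np

def contrastive_loss(anchor, positive, negative, eps=1e-8):
    """Sketch of a contrastive objective for dehazing: the restored image
    (anchor) should be close to a clean image (positive) and far from a
    hazy image (negative), measured here with simple L1 distances."""
    d_pos = np.abs(anchor - positive).mean()
    d_neg = np.abs(anchor - negative).mean()
    return d_pos / (d_neg + eps)  # small when close to clean, far from hazy

clean = np.full((4, 4), 0.5)
hazy = np.full((4, 4), 0.9)
restored = np.full((4, 4), 0.55)  # near-clean restoration
loss = contrastive_loss(restored, clean, hazy)
```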
1 code implementation • 6 Apr 2022 • Yiyang Shen, Sen Deng, Wenhan Yang, Mingqiang Wei, Haoran Xie, XiaoPing Zhang, Jing Qin, Meng Wang
Such degradation is further exacerbated when applying the models trained on synthetic data to real-world rainy images.
1 code implementation • CVPR 2022 • Yi Yu, Wenhan Yang, Yap-Peng Tan, Alex C. Kot
Finally, we examine various types of adversarial attacks that are specific to deraining problems and their effects on both human and machine vision tasks, including 1) rain region attacks, adding perturbations only in the rain regions to make the perturbations in the attacked rain images less visible; 2) object-sensitive attacks, adding perturbations only in regions near the given objects.
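A rain-region attack as described above confines the perturbation to a mask; a minimal sketch (the mask, budget `eps`, and function name are illustrative, and the actual attack would optimize `delta` against a deraining model):

```python
import numpy as np

def masked_perturbation(image, rain_mask, delta, eps=8 / 255):
    """Apply an adversarial perturbation only inside the rain regions,
    clipped to an L-infinity budget eps so it stays less visible."""
    delta = np.clip(delta, -eps, eps) * rain_mask  # zero outside rain regions
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(8, 8))
rain_mask = np.zeros((8, 8))
rain_mask[:, 3] = 1.0  # one vertical "streak" column
delta = rng.uniform(-1, 1, size=(8, 8))
adv = masked_perturbation(image, rain_mask, delta)
```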
no code implementations • CVPR 2022 • Dezhao Wang, Wenhan Yang, Yueyu Hu, Jiaying Liu
Learned image compression has achieved great success due to its excellent modeling capacity, but seldom further considers the Rate-Distortion Optimization (RDO) of each input image.
1 code implementation • CVPR 2022 • Shiming Chen, Ziming Hong, Guo-Sen Xie, Wenhan Yang, Qinmu Peng, Kai Wang, Jian Zhao, Xinge You
Prior works either simply align the global features of an image with its associated class semantic vector or utilize unidirectional attention to learn limited latent semantic representations, which cannot effectively discover the intrinsic semantic knowledge (e.g., attribute semantics) between visual and attribute features.
no code implementations • 20 Feb 2022 • Baoliang Chen, Lingyu Zhu, Hanwei Zhu, Wenhan Yang, Fangbo Lu, Shiqi Wang
In particular, we create a large-scale database for QUality assessment Of The Enhanced LOw-Light Image (QUOTE-LOL), which serves as the foundation for studying and developing objective quality assessment measures.
no code implementations • 10 Jan 2022 • Lanqing Guo, Renjie Wan, Wenhan Yang, Alex Kot, Bihan Wen
Images captured in low-light conditions suffer from low visibility and various imaging artifacts, e.g., real noise.
1 code implementation • CVPR 2022 • Wenhui Wu, Jian Weng, Pingping Zhang, Xu Wang, Wenhan Yang, Jianmin Jiang
Retinex model-based methods have been shown to be effective in layer-wise manipulation with well-designed priors for low-light image enhancement.
no code implementations • 28 Dec 2021 • Haofeng Huang, Wenhan Yang, Yueyu Hu, Jiaying Liu, Ling-Yu Duan
In this paper, we make the first benchmark effort to elaborate on the superiority of using RAW images in low-light enhancement and develop a novel alternative route to utilize RAW images in a more flexible and practical way.
1 code implementation • 13 Dec 2021 • Dong Liang, Ling Li, Mingqiang Wei, Shuo Yang, Liyan Zhang, Wenhan Yang, Yun Du, Huiyu Zhou
Low-light image enhancement (LLE) remains challenging due to the prevailing low-contrast and weak-visibility problems of single RGB images.
no code implementations • 18 Oct 2021 • Wenhan Yang, Haofeng Huang, Yueyu Hu, Ling-Yu Duan, Jiaying Liu
Keeping in mind the transferability among different machine vision tasks (e.g., high-level semantic and mid-level geometry-related tasks), we aim to support multiple tasks jointly at low bit rates.
1 code implementation • 13 Sep 2021 • YuFei Wang, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, Alex C. Kot
Enhancing low-light images to normally exposed ones is highly ill-posed: the mapping between them is one-to-many.
Ranked #2 on Low-Light Image Enhancement on LOL
1 code implementation • CVPR 2021 • Wending Yan, Robby T. Tan, Wenhan Yang, Dengxin Dai
In this paper, we address the problems of rain streaks and rain accumulation removal in video, by developing a self-aligned network with transmission-depth consistency.
no code implementations • 16 Jun 2021 • Yueyu Hu, Wenhan Yang, Haofeng Huang, Jiaying Liu
Visual analytics have played an increasingly critical role in the Internet of Things, where massive visual signals have to be compressed and fed into machines.
1 code implementation • CVPR 2021 • Wenjing Wang, Wenhan Yang, Jiaying Liu
To reduce the burden of building new datasets for low light conditions, we make full use of existing normal light data and explore how to adapt face detectors from normal light to low light.
no code implementations • 25 Jan 2021 • Baoliang Chen, Wenhan Yang, Haoliang Li, Shiqi Wang, Sam Kwong
The first branch aims to learn the camera invariant spoofing features via feature level decomposition in the high frequency domain.
1 code implementation • 30 Dec 2020 • Zhangkai Ni, Wenhan Yang, Shiqi Wang, Lin Ma, Sam Kwong
In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner, rather than learning on a large number of paired images.
no code implementations • 30 Dec 2020 • Zhangkai Ni, Wenhan Yang, Shiqi Wang, Lin Ma, Sam Kwong
The key novelty of the proposed QAGAN lies in the injected QAM for the generator such that it learns domain-relevant quality attention directly from the two domains.
1 code implementation • CVPR 2020 • Wenhan Yang, Robby T. Tan, Shiqi Wang, Jiaying Liu
With this in mind, we construct a two-stage Self-Learned Deraining Network (SLDNet) to remove rain streaks based on both temporal correlation and consistency.
no code implementations • CVPR 2020 • Wenhan Yang, Shiqi Wang, Yuming Fang, Yue Wang, Jiaying Liu
A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhanced normal-light image using paired low/normal-light images; an improved image is then obtained by recomposing the given bands via another learnable linear transformation, driven by perceptual-quality adversarial learning on unpaired data.
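The band-recomposition step of this entry can be sketched as a learned linear combination of bands; the fixed weights below are placeholders standing in for the learnable transformation, and the band values are toy data:

```python
import numpy as np

def recompose(bands, weights):
    """Recombine K image bands with a length-K weight vector, standing in
    for DRBN's learnable linear transformation over the band representation.
    bands: (K, H, W) array; weights: (K,) vector; returns an (H, W) image."""
    return np.tensordot(weights, bands, axes=1)

bands = np.stack([
    np.full((4, 4), 0.2),   # coarse band
    np.full((4, 4), 0.1),   # mid band
    np.full((4, 4), 0.05),  # fine band
])
weights = np.array([1.0, 1.5, 2.0])  # placeholder for learned weights
out = recompose(bands, weights)
```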
no code implementations • 21 Apr 2020 • Shurun Wang, Shiqi Wang, Wenhan Yang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Wen Gao
In particular, we study feature and texture compression in a scalable coding framework, where the base layer serves as the deep-learning feature and the enhancement layer targets perfect reconstruction of the texture.
no code implementations • 10 Feb 2020 • Shurun Wang, Wenhan Yang, Shiqi Wang
In this paper, we propose a novel end-to-end feature compression scheme by leveraging the representation and learning capability of deep neural networks, towards intelligent front-end equipped analysis with promising accuracy and efficiency.
2 code implementations • 10 Feb 2020 • Yueyu Hu, Wenhan Yang, Zhan Ma, Jiaying Liu
In this paper, we first conduct a comprehensive literature survey of learned image compression methods.
no code implementations • 16 Jan 2020 • Dezhao Wang, Sifeng Xia, Wenhan Yang, Jiaying Liu
For (2), we extract both intra-frame and inter-frame side information for better context modeling.
no code implementations • 10 Jan 2020 • Ling-Yu Duan, Jiaying Liu, Wenhan Yang, Tiejun Huang, Wen Gao
Meanwhile, we systematically review state-of-the-art techniques in video compression and feature compression from the unique perspective of MPEG standardization, which provides the academic and industrial evidence to realize the collaborative compression of video and feature streams in a broad range of AI applications.
no code implementations • 9 Jan 2020 • Yueyu Hu, Shuai Yang, Wenhan Yang, Ling-Yu Duan, Jiaying Liu
In this paper, we come up with a novel image coding framework by leveraging both the compressive and the generative models, to support machine vision and human perception tasks jointly.
no code implementations • 9 Jan 2020 • Sifeng Xia, Kunchangtai Liang, Wenhan Yang, Ling-Yu Duan, Jiaying Liu
To this end, we make endeavors in leveraging the strength of predictive and generative models to support advanced compression techniques for both machine and human vision tasks simultaneously, in which visual features serve as a bridge to connect signal-level and task-level compact representations in a scalable manner.
no code implementations • 16 Dec 2019 • Wenhan Yang, Robby T. Tan, Shiqi Wang, Yuming Fang, Jiaying Liu
The goal of single-image deraining is to restore the rain-free background scenes of an image degraded by rain streaks and rain accumulation.
no code implementations • 9 Sep 2019 • Jiaying Liu, Dong Liu, Wenhan Yang, Sifeng Xia, Xiaoshuai Zhang, Yuanying Dai
We present a comprehensive study and evaluation of existing single image compression artifacts removal algorithms, using a new 4K resolution benchmark including diversified foreground objects and background scenes with rich structures, called Large-scale Ideal Ultra high definition 4K (LIU4K) benchmark.
no code implementations • 13 Jun 2019 • Hanshu Yan, Xuan Chen, Vincent Y. F. Tan, Wenhan Yang, Joe Wu, Jiashi Feng
They jointly facilitate unsupervised learning of a noise model for various noise types.
1 code implementation • CVPR 2019 • Wenhan Yang, Jiaying Liu, Jiashi Feng
The proposed framework is built upon a two-stage recurrent network with dual-level flow regularizations to perform the inverse recovery process of the rain synthesis model for video deraining.
no code implementations • 16 May 2019 • Jiaying Liu, Sifeng Xia, Wenhan Yang
In this paper, we address the problem by proposing a deep frame interpolation network to generate additional reference frames in coding scenarios.
no code implementations • 9 Apr 2019 • Ye Yuan, Wenhan Yang, Wenqi Ren, Jiaying Liu, Walter J. Scheirer, Zhangyang Wang
The UG²⁺ challenge at IEEE CVPR 2019 aims to spark a comprehensive discussion and exploration of how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios.
no code implementations • 28 Jan 2019 • Rosaura G. VidalMata, Sreya Banerjee, Brandon RichardWebster, Michael Albright, Pedro Davalos, Scott McCloskey, Ben Miller, Asong Tambo, Sushobhan Ghosh, Sudarshan Nagesh, Ye Yuan, Yueyu Hu, Junru Wu, Wenhan Yang, Xiaoshuai Zhang, Jiaying Liu, Zhangyang Wang, Hwann-Tzong Chen, Tzu-Wei Huang, Wen-Chi Chin, Yi-Chun Li, Mahmoud Lababidi, Charles Otto, Walter J. Scheirer
From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area.
no code implementations • 9 Oct 2018 • Shuai Yang, Jiaying Liu, Wenhan Yang, Zongming Guo
The stylization is then followed by a context-aware layout design algorithm, where cues for both seamlessness and aesthetics are employed to determine the optimal layout of the shape in the background.
2 code implementations • 14 Aug 2018 • Chen Wei, Wenjing Wang, Wenhan Yang, Jiaying Liu
Based on the decomposition, subsequent lightness enhancement is conducted on illumination by an enhancement network called Enhance-Net, and for joint denoising there is a denoising operation on reflectance.
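The Retinex pipeline sketched in this entry (decompose into reflectance and illumination, enhance the illumination, recombine) can be illustrated with a toy example; the gamma curve below is a stand-in for Enhance-Net, and the reflectance denoising step is omitted:

```python
import numpy as np

def retinex_pipeline(image, illumination, gamma=0.5):
    """Retinex-style enhancement sketch: image = reflectance * illumination.
    Illumination is brightened with a gamma curve (placeholder for an
    enhancement network); reflectance is left untouched here, whereas the
    paper applies a denoising operation to it."""
    reflectance = image / np.maximum(illumination, 1e-6)
    enhanced_illum = illumination ** gamma  # gamma < 1 brightens dark areas
    return np.clip(reflectance * enhanced_illum, 0.0, 1.0)

image = np.full((4, 4), 0.09)
illumination = np.full((4, 4), 0.25)  # dim scene illumination
out = retinex_pipeline(image, illumination)
```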
Ranked #5 on Low-Light Image Enhancement on DICM
2 code implementations • 6 Jul 2018 • Yueyu Hu, Wenhan Yang, Mading Li, Jiaying Liu
With preceding pixels as the context, traditional intra prediction schemes generate linear predictions based on several predefined directions (i.e., modes) for blocks to be encoded.
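The predefined directional modes mentioned above can be sketched with the three classic cases (DC, vertical, horizontal); this is a generic illustration of traditional intra prediction, not the learned predictor proposed in the paper:

```python
import numpy as np

def intra_predict(top, left, mode, size=4):
    """Classic intra prediction sketch: predict a size x size block from
    its reconstructed top row and left column using a predefined mode."""
    if mode == "dc":          # average of the neighboring pixels
        return np.full((size, size), (top.mean() + left.mean()) / 2)
    if mode == "vertical":    # copy the top row downwards
        return np.tile(top, (size, 1))
    if mode == "horizontal":  # copy the left column rightwards
        return np.tile(left[:, None], (1, size))
    raise ValueError(mode)

top = np.array([10.0, 20.0, 30.0, 40.0])
left = np.array([10.0, 10.0, 10.0, 10.0])
pred_v = intra_predict(top, left, "vertical")
pred_dc = intra_predict(top, left, "dc")
```

An encoder would evaluate each mode and keep the one whose prediction residual is cheapest to code.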
no code implementations • 19 Jun 2018 • Sifeng Xia, Wenhan Yang, Yueyu Hu, Siwei Ma, Jiaying Liu
Then a group variational transformation technique is used to transform a group of copied shared feature maps to samples at different sub-pixel positions.
no code implementations • 8 Jun 2018 • Xiaoshuai Zhang, Wenhan Yang, Yueyu Hu, Jiaying Liu
JPEG is one of the most commonly used standards among lossy image compression methods.
no code implementations • CVPR 2018 • Jiaying Liu, Wenhan Yang, Shuai Yang, Zongming Guo
In this paper, we address the problem of video rain removal by constructing deep recurrent convolutional networks.
3 code implementations • CVPR 2018 • Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su, Jiaying Liu
This injection of visual attention to both generative and discriminative networks is the main contribution of this paper.
no code implementations • 20 Jan 2017 • Sifeng Xia, Wenhan Yang, Jiaying Liu, Zongming Guo
In particular, we infer the HF information based on both the LR image and similar HR references which are retrieved online.
no code implementations • 27 Dec 2016 • Fang Zhao, Jiashi Feng, Jian Zhao, Wenhan Yang, Shuicheng Yan
The first one, named multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation, and occlusion-robustness is achieved owing to the fact that the influence of occlusion is only upon some of the patches.
2 code implementations • CVPR 2017 • Wenhan Yang, Robby T. Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, Shuicheng Yan
Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output.
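The three quantities named in this entry (binary rain streak map, streak appearance, clean background) fit a region-dependent rain model often written as O = B + S * R; a minimal synthesis sketch, with toy values in place of real images:

```python
import numpy as np

def synthesize_rain(background, streaks, region_map):
    """Compose an observed rainy image O = B + S * R: clean background B
    plus streak appearance S confined to a binary rain-region map R.
    A deraining network in this setting predicts R, S, and B from O."""
    return np.clip(background + streaks * region_map, 0.0, 1.0)

background = np.full((4, 4), 0.3)
streaks = np.full((4, 4), 0.4)
region_map = np.zeros((4, 4))
region_map[:, 1] = 1.0  # a single rain-streak column
observed = synthesize_rain(background, streaks, region_map)
```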
no code implementations • 13 Jun 2016 • Jiaying Liu, Wenhan Yang, Xiaoyan Sun, Wen-Jun Zeng
With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications.
no code implementations • 29 Apr 2016 • Wenhan Yang, Jiashi Feng, Jianchao Yang, Fang Zhao, Jiaying Liu, Zongming Guo, Shuicheng Yan
To address this essentially ill-posed problem, we introduce a Deep Edge Guided REcurrent rEsidual (DEGREE) network to progressively recover the high-frequency details.