Search Results for author: Haolin Wang

Found 8 papers, 4 papers with code

Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision

1 code implementation • ECCV 2020 • Yuxiang Wei, Ming Liu, Haolin Wang, Ruifeng Zhu, Guosheng Hu, WangMeng Zuo

Despite recent advances in deep learning-based face frontalization methods, photo-realistic and illumination-preserving frontal face synthesis remains challenging due to the large pose and illumination discrepancies in the training data.

Face Generation

Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision

1 code implementation • ICCV 2021 • Zhilu Zhang, Haolin Wang, Ming Liu, Ruohao Wang, Jiawei Zhang, WangMeng Zuo

To diminish the effect of color inconsistency in image alignment, we introduce a global color mapping (GCM) module that generates an initial sRGB image from the input raw image while keeping the spatial locations of the pixels unchanged; the target sRGB image then guides the GCM in converting the colors toward it. A minimal illustrative sketch of this idea is given below.

Optical Flow Estimation
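
The pixel-wise nature of the GCM can be illustrated with a short, hypothetical PyTorch sketch (the module, channel counts, and loss below are assumptions for illustration, not the authors' released code): a stack of 1x1 convolutions applies the same color transform at every spatial position, so pixel locations stay fixed, and an L1 loss against the target sRGB image guides the colors toward it.

```python
import torch
import torch.nn as nn

class GlobalColorMapping(nn.Module):
    """Per-pixel color transform: 1x1 convs only, so the spatial layout is untouched."""
    def __init__(self, in_ch=4, out_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, kernel_size=1),
        )

    def forward(self, raw):
        return self.net(raw)

# Usage: the target sRGB image guides the mapping via a simple reconstruction loss.
gcm = GlobalColorMapping()
raw = torch.rand(1, 4, 128, 128)          # assumed 4-channel packed RAW input
target_srgb = torch.rand(1, 3, 128, 128)
init_srgb = gcm(raw)                       # initial sRGB estimate, pixel-aligned with the raw input
loss = nn.functional.l1_loss(init_srgb, target_srgb)
loss.backward()
```

Restricting the mapping to 1x1 convolutions is what keeps it a pure color transform; handling spatial misalignment with the target is left to the rest of the pipeline.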

Learning Diverse Tone Styles for Image Retouching

1 code implementation • 12 Jul 2022 • Haolin Wang, Jiawei Zhang, Ming Liu, Xiaohe Wu, WangMeng Zuo

In particular, the style encoder predicts the target style representation of an input image, which serves as conditional information for the RetouchNet during retouching, while the TSFlow maps the style representation vector to a Gaussian distribution in the forward pass. A minimal illustrative sketch of these components is given below.

Image Retouching
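
Reading only from the sentence above, a rough, hypothetical PyTorch sketch of the three components might look as follows (all class names, dimensions, and the single coupling step are assumptions; the actual TSFlow is a full normalizing flow): the style encoder produces a style vector, an invertible coupling step maps it toward a Gaussian latent in the forward pass, and the RetouchNet retouches the image conditioned on that vector.

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Predicts a tone-style vector from an image (global-average-pooled CNN)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class AffineCoupling(nn.Module):
    """One invertible coupling step standing in for the TSFlow mapping."""
    def __init__(self, dim=8):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 32), nn.ReLU(inplace=True),
                                 nn.Linear(32, 2 * (dim - self.half)))

    def forward(self, s):
        # Forward pass: style vector -> (approximately) Gaussian latent.
        s1, s2 = s[:, :self.half], s[:, self.half:]
        log_scale, shift = self.net(s1).chunk(2, dim=1)
        return torch.cat([s1, s2 * torch.exp(log_scale) + shift], dim=1)

    def inverse(self, z):
        # Reverse pass: Gaussian sample -> a plausible style vector.
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_scale, shift = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, (z2 - shift) * torch.exp(-log_scale)], dim=1)

class RetouchNet(nn.Module):
    """Retouches an image conditioned on the style vector (broadcast and concatenated)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + style_dim, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, style):
        cond = style[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return self.body(torch.cat([x, cond], dim=1))

# Forward pass: encode a style vector, map it toward a Gaussian, and retouch.
x = torch.rand(2, 3, 64, 64)
style = StyleEncoder()(x)
z = AffineCoupling()(style)            # style representation -> Gaussian latent
retouched = RetouchNet()(x, style)
```

In such a setup, sampling z from a standard Gaussian and running the coupling step in reverse would yield diverse tone styles for the same input image.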

Invertible Network for Unpaired Low-light Image Enhancement

no code implementations • 24 Dec 2021 • Jize Zhang, Haolin Wang, Xiaohe Wu, WangMeng Zuo

Existing unpaired low-light image enhancement approaches typically adopt a two-way GAN framework, in which two CNN generators are deployed separately for enhancement and degradation. A minimal illustrative sketch of this baseline is given below.

Low-Light Image Enhancement
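
The two-way GAN baseline described above can be sketched in a few lines of hypothetical PyTorch (generator architectures, names, and losses are placeholders; the discriminators and adversarial terms are omitted): two separate generators map low-light to normal-light and back, coupled by a cycle-consistency loss.

```python
import torch
import torch.nn as nn

def conv_generator():
    """Tiny stand-in for a CNN generator (real models are far deeper)."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
    )

# Two separate generators: one enhances low-light inputs, one degrades normal-light images.
G_enhance = conv_generator()   # low-light    -> normal-light
G_degrade = conv_generator()   # normal-light -> low-light

low = torch.rand(1, 3, 64, 64)
fake_normal = G_enhance(low)
reconstructed_low = G_degrade(fake_normal)

# Cycle-consistency loss; adversarial losses against two discriminators are omitted here.
cycle_loss = nn.functional.l1_loss(reconstructed_low, low)
```

An invertible network, by contrast, can realize enhancement and degradation with a single model run forward and in reverse, which is the direction this paper pursues.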

Unlocking the Potential of Federated Learning for Deeper Models

no code implementations • 5 Jun 2023 • Haolin Wang, Xuefeng Liu, Jianwei Niu, Shaojie Tang, Jiaxing Shen

Our further investigation shows that the performance decline stems from the continuous accumulation of dissimilarities among client models during the layer-by-layer back-propagation process, which we refer to as "divergence accumulation." A minimal illustrative sketch of this notion is given below.

Federated Learning
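
To make the idea of divergence accumulation concrete, here is a small, hypothetical PyTorch experiment (the model, data, and dissimilarity metric are assumptions, not the paper's protocol): two clients start from the same global model, train on different local data, and the per-layer cosine dissimilarity between their parameters is then reported. Deeper models give such dissimilarities more layers over which to accumulate.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_dissimilarity(model_a, model_b):
    """Per-layer cosine dissimilarity between two clients' parameters (illustrative metric)."""
    scores = {}
    for (name, p_a), (_, p_b) in zip(model_a.named_parameters(), model_b.named_parameters()):
        sim = F.cosine_similarity(p_a.flatten(), p_b.flatten(), dim=0)
        scores[name] = 1.0 - sim.item()
    return scores

# Two clients start from the same global model, then train on different local data.
global_model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2)
)
clients = [copy.deepcopy(global_model) for _ in range(2)]

for client, seed in zip(clients, (0, 1)):
    torch.manual_seed(seed)                          # different local data per client
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    opt = torch.optim.SGD(client.parameters(), lr=0.1)
    for _ in range(20):                              # a few local training steps
        opt.zero_grad()
        F.cross_entropy(client(x), y).backward()     # gradients flow backward layer by layer
        opt.step()

# Inspect how far apart the two clients' layers have drifted.
for name, d in layer_dissimilarity(*clients).items():
    print(f"{name}: dissimilarity = {d:.4f}")
```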
