Search Results for author: Chaofeng Chen

Found 33 papers, 24 papers with code

TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment

1 code implementation • 6 Aug 2023 • Chaofeng Chen, Jiadi Mo, Jingwen Hou, HaoNing Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin

Our approach to IQA involves the design of a heuristic coarse-to-fine network (CFANet) that leverages multi-scale features and progressively propagates multi-level semantic information to low-level representations in a top-down manner.
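As a rough illustration of the top-down idea (a hypothetical numpy sketch, not CFANet's actual architecture), semantic information from the coarsest feature map can be upsampled and fused into progressively finer maps:

```python
import numpy as np

def topdown_propagate(feats):
    """Illustrative coarse-to-fine fusion: `feats` is a list of (H, W, C)
    maps ordered fine to coarse. The coarsest (most semantic) map is
    upsampled and added into each finer map in turn."""
    fused = feats[-1]  # start from the coarsest, most semantic map
    for f in reversed(feats[:-1]):
        # nearest-neighbour upsample the coarser map to the finer resolution
        scale = f.shape[0] // fused.shape[0]
        up = fused.repeat(scale, axis=0).repeat(scale, axis=1)
        fused = f + up  # inject semantics into the lower-level features
    return fused
```

This captures only the propagation direction; the paper's network learns the fusion weights rather than summing directly.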

Image Quality Assessment Local Distortion +2

Blind Face Restoration via Deep Multi-scale Component Dictionaries

1 code implementation • ECCV 2020 • Xiaoming Li, Chaofeng Chen, Shangchen Zhou, Xianhui Lin, WangMeng Zuo, Lei Zhang

Next, with the degraded input, we match and select the most similar component features from their corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer (DFT) block.

Blind Face Restoration Video Super-Resolution

Progressive Semantic-Aware Style Transformation for Blind Face Restoration

1 code implementation • CVPR 2021 • Chaofeng Chen, Xiaoming Li, Lingbo Yang, Xianhui Lin, Lei Zhang, Kwan-Yee K. Wong

Compared with previous networks, the proposed PSFR-GAN makes full use of the semantic (parsing maps) and pixel (LQ images) space information from different scales of input pairs.

Blind Face Restoration Face Parsing +2

FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling

4 code implementations • 6 Jul 2022 • HaoNing Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
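The fragment idea can be sketched as follows (an assumed minimal form, not the paper's implementation): cut each frame into a grid, take one small native-resolution patch from each cell, and splice the patches into a compact mini-frame that preserves local quality-sensitive detail:

```python
import numpy as np

def sample_fragments(frame, grid=4, frag=8):
    """Sample one frag x frag patch from each cell of a grid x grid mesh
    and splice them back together in grid order."""
    H, W = frame.shape[:2]
    ch, cw = H // grid, W // grid
    rng = np.random.default_rng(0)
    rows = []
    for i in range(grid):
        row = []
        for j in range(grid):
            y = i * ch + rng.integers(0, ch - frag + 1)
            x = j * cw + rng.integers(0, cw - frag + 1)
            row.append(frame[y:y + frag, x:x + frag])
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)  # (grid*frag, grid*frag, ...)
```

Because patches are kept at native resolution, local distortions survive the downsizing that ordinary resizing would smooth away.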

Ranked #3 on Video Quality Assessment on LIVE-VQC (using extra training data)

Video Quality Assessment

Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment

4 code implementations • 11 Oct 2022 • HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Jinwei Gu, Weisi Lin

On the other hand, existing practices such as resizing and cropping change the quality of the original videos due to the loss of details and content, and are therefore harmful to quality assessment.

Ranked #2 on Video Quality Assessment on KoNViD-1k (using extra training data)

Video Quality Assessment Visual Question Answering (VQA)

Learning Spatial Attention for Face Super-Resolution

1 code implementation • 2 Dec 2020 • Chaofeng Chen, Dihong Gong, Hao Wang, Zhifeng Li, Kwan-Yee K. Wong

Visualization of the attention maps shows that our spatial attention network can capture the key face structures well even for very low resolution faces (e.g., $16\times16$).
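A minimal sketch of how such a spatial attention map can gate features (an assumed generic form, not the paper's network): a sigmoid over spatial logits reweights every position of the feature map, so the network emphasizes key face structures:

```python
import numpy as np

def apply_spatial_attention(feat, att_logits):
    """feat: (H, W, C) feature map; att_logits: (H, W) unnormalized scores.
    Returns the feature map reweighted by a sigmoid attention map."""
    att = 1.0 / (1.0 + np.exp(-att_logits))  # (H, W) values in (0, 1)
    return feat * att[..., None]             # broadcast over channels
```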

Face Parsing Image Super-Resolution +2

Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors

2 code implementations • 26 Feb 2022 • Chaofeng Chen, Xinyu Shi, Yipeng Qin, Xiaoming Li, Xiaoguang Han, Tao Yang, Shihui Guo

Unlike image-space methods, our FeMaSR restores HR images by matching distorted LR image {\it features} to their distortion-free HR counterparts in our pretrained HR priors, and decoding the matched features to obtain realistic HR images.
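The matching step can be illustrated with a toy nearest-neighbour lookup (a hypothetical sketch, not FeMaSR's code): each distorted LR feature vector is replaced by its closest entry in a pretrained distortion-free HR codebook:

```python
import numpy as np

def match_to_codebook(lr_feats, codebook):
    """lr_feats: (N, C) distorted feature vectors; codebook: (K, C)
    pretrained HR prior entries. Returns the matched entries and indices."""
    # squared Euclidean distance between every feature and every codebook entry
    d = ((lr_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```

Decoding the matched (distortion-free) entries, rather than the distorted features themselves, is what lets the restored image inherit the HR prior's texture quality.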

Blind Super-Resolution Generative Adversarial Network +2

Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision

1 code implementation • 25 Sep 2023 • HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin

To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate potential abilities of MLLMs on three realms: low-level visual perception, low-level visual description, and overall visual quality assessment.

Image Quality Assessment

Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

1 code implementation • 12 Nov 2023 • HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, Geng Xue, Wenxiu Sun, Qiong Yan, Weisi Lin

Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, enabling a single model to respond to a broad range of natural human instructions.

From Face to Natural Image: Learning Real Degradation for Blind Image Super-Resolution

1 code implementation • 3 Oct 2022 • Xiaoming Li, Chaofeng Chen, Xianhui Lin, WangMeng Zuo, Lei Zhang

Notably, LQ face images, which may have the same degradation process as natural images, can be robustly restored with photo-realistic textures by exploiting their strong structural priors.

Image Generation Image Super-Resolution

Semi-Supervised Learning for Face Sketch Synthesis in the Wild

1 code implementation • 12 Dec 2018 • Chaofeng Chen, Wei Liu, Xiao Tan, Kwan-Yee K. Wong

Instead of supervising the network with ground truth sketches, we first perform patch matching in feature space between the input photo and photos in a small reference set of photo-sketch pairs.

Face Sketch Synthesis Patch Matching

Face Sketch Synthesis with Style Transfer using Pyramid Column Feature

1 code implementation • 18 Sep 2020 • Chaofeng Chen, Xiao Tan, Kwan-Yee K. Wong

We utilize a fully convolutional neural network (FCNN) to create the content image, and propose a style transfer approach to introduce textures and shadings based on a newly proposed pyramid column feature.

Face Sketch Synthesis Style Transfer

Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach

1 code implementation • 22 May 2023 • HaoNing Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Though subjective studies have collected overall quality scores for these videos, how the abstract quality scores relate to specific factors is still obscure, hindering VQA methods from more concrete quality evaluations (e.g., the sharpness of a video).

Video Quality Assessment Visual Question Answering (VQA)

Iterative Token Evaluation and Refinement for Real-World Super-Resolution

1 code implementation • 9 Dec 2023 • Chaofeng Chen, Shangchen Zhou, Liang Liao, HaoNing Wu, Wenxiu Sun, Qiong Yan, Weisi Lin

Distortion removal involves simple HQ token prediction with LQ images, while texture generation uses a discrete diffusion model to iteratively refine the distortion removal output with a token refinement network.

Image Super-Resolution Texture Synthesis

Enhancing Diffusion Models with Text-Encoder Reinforcement Learning

1 code implementation • 27 Nov 2023 • Chaofeng Chen, Annan Wang, HaoNing Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin

While fine-tuning the U-Net can partially improve performance, it still suffers from the suboptimal text encoder.

reinforcement-learning

A Unified Framework for Masked and Mask-Free Face Recognition via Feature Rectification

1 code implementation • 15 Feb 2022 • Shaozhe Hao, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

We introduce rectification blocks to rectify features extracted by a state-of-the-art recognition model, in both spatial and channel dimensions, to minimize the distance between a masked face and its mask-free counterpart in the rectified feature space.

Face Recognition

Exploring Opinion-unaware Video Quality Assessment with Semantic Affinity Criterion

2 code implementations • 26 Feb 2023 • HaoNing Wu, Liang Liao, Jingwen Hou, Chaofeng Chen, Erli Zhang, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Recent learning-based video quality assessment (VQA) algorithms are expensive to implement due to the cost of data collection of human quality opinions, and are less robust across various scenarios due to the biases of these opinions.

Video Quality Assessment Visual Question Answering (VQA)

Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment

2 code implementations • 28 Apr 2023 • HaoNing Wu, Liang Liao, Annan Wang, Chaofeng Chen, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

The proliferation of videos collected during in-the-wild natural settings has pushed the development of effective Video Quality Assessment (VQA) methodologies.

Video Quality Assessment Visual Question Answering (VQA)

MIMO Is All You Need: A Strong Multi-In-Multi-Out Baseline for Video Prediction

1 code implementation • 9 Dec 2022 • Shuliang Ning, Mengcheng Lan, Yanran Li, Chaofeng Chen, Qian Chen, Xunlai Chen, Xiaoguang Han, Shuguang Cui

The mainstream of the existing approaches for video prediction builds up their models based on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input to predict the next frame in a recursive manner.

Video Prediction

DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment

1 code implementation • 20 Jun 2022 • HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

Based on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues.

Time Series Analysis Video Quality Assessment +1

Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment

1 code implementation • 8 Jul 2022 • Liang Liao, Kangmin Xu, HaoNing Wu, Chaofeng Chen, Wenxiu Sun, Qiong Yan, Weisi Lin

Experiments show that the perceptual representation in the HVS is an effective way of predicting subjective temporal quality, and thus TPQI can, for the first time, achieve comparable performance to the spatial quality metric and be even more effective in assessing videos with large temporal variations.

Video Quality Assessment Visual Question Answering (VQA)

SAFE: Scale Aware Feature Encoder for Scene Text Recognition

no code implementations • 17 Jan 2019 • Wei Liu, Chaofeng Chen, Kwan-Yee K. Wong

We propose a novel scale aware feature encoder (SAFE) that is designed specifically for encoding characters with different scales.

Scene Text Recognition

Char-Net: A Character-Aware Neural Network for Distorted Scene Text Recognition

no code implementations • AAAI 2018 • Wei Liu, Chaofeng Chen, Kwan-Yee K. Wong

Unlike previous work which employed a global spatial transformer network to rectify the entire distorted text image, we take an approach of detecting and rectifying individual characters.

Scene Text Recognition

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

no code implementations • 23 Jul 2022 • Wenqi Yang, GuanYing Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer.

Inverse Rendering Neural Rendering

S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint

no code implementations • 17 Oct 2022 • Wenqi Yang, GuanYing Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

Different from existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal/depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.

Novel View Synthesis

Semi-supervised Cycle-GAN for face photo-sketch translation in the wild

no code implementations • 18 Jul 2023 • Chaofeng Chen, Wei Liu, Xiao Tan, Kwan-Yee K. Wong

Experiments show that SCG achieves competitive performance on public benchmarks and superior results on photos in the wild.

Translation

Local Distortion Aware Efficient Transformer Adaptation for Image Quality Assessment

no code implementations • 23 Aug 2023 • Kangmin Xu, Liang Liao, Jing Xiao, Chaofeng Chen, HaoNing Wu, Qiong Yan, Weisi Lin

Further, we propose a local distortion extractor to obtain local distortion features from the pretrained CNN and a local distortion injector to inject the local distortion features into ViT.

Image Quality Assessment Inductive Bias +1

Towards Open-ended Visual Quality Comparison

no code implementations • 26 Feb 2024 • HaoNing Wu, Hanwei Zhu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Annan Wang, Wenxiu Sun, Qiong Yan, Xiaohong Liu, Guangtao Zhai, Shiqi Wang, Weisi Lin

Comparative settings (e.g., pairwise choice, listwise ranking) have been adopted by a wide range of subjective studies for image quality assessment (IQA), as they inherently standardize the evaluation criteria across different observers and offer more clear-cut responses.

Image Quality Assessment
