Search Results for author: HaoNing Wu

Found 33 papers, 24 papers with code

AIGIQA-20K: A Large Database for AI-Generated Image Quality Assessment

no code implementations · 4 Apr 2024 · Chunyi Li, Tengchuan Kou, Yixuan Gao, Yuqin Cao, Wei Sun, ZiCheng Zhang, Yingjie Zhou, Zhichao Zhang, Weixia Zhang, HaoNing Wu, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai

With the rapid advancements in AI-Generated Content (AIGC), AI-Generated Images (AIGIs) have been widely applied in entertainment, education, and social media.

Image Quality Assessment

MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model

1 code implementation · 26 Feb 2024 · Chunyi Li, Guo Lu, Donghui Feng, HaoNing Wu, ZiCheng Zhang, Xiaohong Liu, Guangtao Zhai, Weisi Lin, Wenjun Zhang

With the evolution of storage and communication protocols, ultra-low bitrate image compression has become a highly demanding topic.

Image Compression

Towards Open-ended Visual Quality Comparison

no code implementations · 26 Feb 2024 · HaoNing Wu, Hanwei Zhu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Annan Wang, Wenxiu Sun, Qiong Yan, Xiaohong Liu, Guangtao Zhai, Shiqi Wang, Weisi Lin

Comparative settings (e.g. pairwise choice, listwise ranking) have been adopted by a wide range of subjective studies for image quality assessment (IQA), as they inherently standardize the evaluation criteria across different observers and offer more clear-cut responses.
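As a toy illustration of why pairwise choices yield clear-cut, aggregatable responses, a set of binary judgments can be turned into a ranking by win counts. This is a minimal sketch, not code from the paper; the function name and the sample votes are hypothetical.

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Rank items by number of pairwise wins.

    `comparisons` is a list of (winner, loser) tuples, e.g. collected
    from pairwise-choice subjective studies.
    """
    wins = defaultdict(int)
    seen = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        seen.update((winner, loser))
    # Sort items by descending number of pairwise wins.
    return sorted(seen, key=lambda item: wins[item], reverse=True)

# Hypothetical judgments over three images A, B, C.
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_from_pairwise(votes))  # → ['A', 'B', 'C']
```

Real studies typically fit a probabilistic model (e.g. Bradley-Terry) rather than raw win counts, but the aggregation idea is the same.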

Image Quality Assessment

A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs

1 code implementation · 11 Feb 2024 · ZiCheng Zhang, HaoNing Wu, Erli Zhang, Guangtao Zhai, Weisi Lin

To this end, we design benchmark settings to emulate human language responses related to low-level vision: the low-level visual perception (A1) via visual question answering related to low-level attributes (e.g. clarity, lighting); and the low-level visual description (A2), which evaluates MLLMs on low-level text descriptions.

Image Quality Assessment · Question Answering +1

Q-Refine: A Perceptual Quality Refiner for AI-Generated Image

no code implementations · 2 Jan 2024 · Chunyi Li, HaoNing Wu, ZiCheng Zhang, Hongkun Hao, Kaiwei Zhang, Lei Bai, Xiaohong Liu, Xiongkuo Min, Weisi Lin, Guangtao Zhai

With the rapid evolution of Text-to-Image (T2I) models in recent years, their sometimes unsatisfactory generation results have become a challenge.

Image Quality Assessment

Iterative Token Evaluation and Refinement for Real-World Super-Resolution

1 code implementation · 9 Dec 2023 · Chaofeng Chen, Shangchen Zhou, Liang Liao, HaoNing Wu, Wenxiu Sun, Qiong Yan, Weisi Lin

Distortion removal involves simple HQ token prediction with LQ images, while texture generation uses a discrete diffusion model to iteratively refine the distortion removal output with a token refinement network.
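The iterate-and-refine idea can be sketched abstractly. In the sketch below, `predict` and `confidence` are hypothetical stand-ins for the paper's token refinement network and evaluation step; the real method uses a discrete diffusion model, not these callbacks.

```python
def refine_tokens(tokens, predict, confidence, steps=4):
    """Iteratively re-predict the least-confident token.

    `predict(tokens, i)` proposes a new value for position i;
    `confidence(tokens, i)` scores how plausible position i is.
    Both are placeholders for learned components.
    """
    tokens = list(tokens)
    for _ in range(steps):
        # Evaluate: find the position the scorer trusts least.
        i = min(range(len(tokens)), key=lambda j: confidence(tokens, j))
        # Refine: re-predict only that position.
        tokens[i] = predict(tokens, i)
    return tokens

# Toy run: treat 0-valued tokens as low-confidence and re-predict them as 1.
out = refine_tokens([1, 0, 1, 0, 1],
                    predict=lambda t, i: 1,
                    confidence=lambda t, i: t[i],
                    steps=2)
print(out)  # → [1, 1, 1, 1, 1]
```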

Image Super-Resolution · Texture Synthesis

Exploring the Naturalness of AI-Generated Images

1 code implementation · 9 Dec 2023 · Zijian Chen, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhongpeng Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang

In this paper, we take the first step to benchmark and assess the visual naturalness of AI-generated images.

Enhancing Diffusion Models with Text-Encoder Reinforcement Learning

1 code implementation · 27 Nov 2023 · Chaofeng Chen, Annan Wang, HaoNing Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin

While fine-tuning the U-Net can partially improve performance, it still suffers from the suboptimal text encoder.

reinforcement-learning

Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

1 code implementation · 12 Nov 2023 · HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, Geng Xue, Wenxiu Sun, Qiong Yan, Weisi Lin

Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, as they can respond to a broad range of natural human instructions in a single model.

Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision

1 code implementation · 25 Sep 2023 · HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin

To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate potential abilities of MLLMs on three realms: low-level visual perception, low-level visual description, and overall visual quality assessment.

Image Quality Assessment

Local Distortion Aware Efficient Transformer Adaptation for Image Quality Assessment

no code implementations · 23 Aug 2023 · Kangmin Xu, Liang Liao, Jing Xiao, Chaofeng Chen, HaoNing Wu, Qiong Yan, Weisi Lin

Further, we propose a local distortion extractor to obtain local distortion features from the pretrained CNN and a local distortion injector to inject the local distortion features into ViT.
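The injection step can be pictured as adding spatially aligned local-distortion features into the ViT token stream. The sketch below is a hypothetical stand-in: the paper's injector is learned, whereas `scale` here is a fixed hand-picked factor, and the function name is invented for illustration.

```python
def inject_local_features(vit_tokens, local_feats, scale=0.5):
    """Add local-distortion features into ViT token embeddings.

    Each token (a list of floats) receives the scaled local feature
    from the same spatial location. `scale` stands in for the
    learned projection used by the actual injector.
    """
    assert len(vit_tokens) == len(local_feats)
    return [[t + scale * f for t, f in zip(tok, loc)]
            for tok, loc in zip(vit_tokens, local_feats)]

# One token of dimension 2, with a matching local feature.
print(inject_local_features([[1.0, 2.0]], [[2.0, 4.0]]))  # → [[2.0, 4.0]]
```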

Image Quality Assessment · Inductive Bias +1

TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment

1 code implementation · 6 Aug 2023 · Chaofeng Chen, Jiadi Mo, Jingwen Hou, HaoNing Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin

Our approach to IQA involves the design of a heuristic coarse-to-fine network (CFANet) that leverages multi-scale features and progressively propagates multi-level semantic information to low-level representations in a top-down manner.
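Top-down propagation from coarse semantic features to fine low-level ones can be sketched with a simple upsample-and-add pyramid. CFANet's actual fusion is learned (attention-based), so everything below, including the fixed nearest-neighbour upsample, is a hypothetical stand-in.

```python
def top_down_fuse(features):
    """Propagate coarse (semantic) features down to fine (low-level) ones.

    `features` is ordered fine -> coarse, one list of floats per scale,
    each half the length of the previous (as in a feature pyramid).
    """
    fused = features[-1]                            # start from the coarsest level
    for level in reversed(features[:-1]):
        up = [v for v in fused for _ in (0, 1)]     # 2x nearest-neighbour upsample
        fused = [a + b for a, b in zip(level, up)]  # inject semantics into finer level
    return fused

# Three scales: fine (len 4), mid (len 2), coarse (len 1).
print(top_down_fuse([[1, 1, 1, 1], [10, 20], [100]]))  # → [111, 111, 121, 121]
```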

Image Quality Assessment · Local Distortion +2

Advancing Zero-Shot Digital Human Quality Assessment through Text-Prompted Evaluation

1 code implementation · 6 Jul 2023 · ZiCheng Zhang, Wei Sun, Yingjie Zhou, HaoNing Wu, Chunyi Li, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin

To address this gap, we propose SJTU-H3D, a subjective quality assessment database specifically designed for full-body digital humans.

Boost Video Frame Interpolation via Motion Adaptation

1 code implementation · 24 Jun 2023 · HaoNing Wu, Xiaoyun Zhang, Weidi Xie, Ya Zhang, Yanfeng Wang

Video frame interpolation (VFI) is a challenging task that aims to generate intermediate frames between two consecutive frames in a video.

Motion Estimation · Video Frame Interpolation

AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment

1 code implementation · 7 Jun 2023 · Chunyi Li, ZiCheng Zhang, HaoNing Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin

With the rapid advancements of the text-to-image generative model, AI-generated images (AGIs) have been widely applied to entertainment, education, social media, etc.

Image Quality Assessment

Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models

1 code implementation · 1 Jun 2023 · Chang Liu, HaoNing Wu, Yujie Zhong, Xiaoyun Zhang, Yanfeng Wang, Weidi Xie

Generative models have recently exhibited exceptional capabilities in text-to-image generation, but still struggle to generate image sequences coherently.

Story Visualization · Style Transfer +2

Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach

1 code implementation · 22 May 2023 · HaoNing Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Though subjective studies have collected overall quality scores for these videos, how the abstract quality scores relate to specific factors is still obscure, hindering VQA methods from giving more concrete quality evaluations (e.g. the sharpness of a video).

Video Quality Assessment · Visual Question Answering (VQA)

Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment

2 code implementations · 28 Apr 2023 · HaoNing Wu, Liang Liao, Annan Wang, Chaofeng Chen, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

The proliferation of videos collected during in-the-wild natural settings has pushed the development of effective Video Quality Assessment (VQA) methodologies.

Video Quality Assessment · Visual Question Answering (VQA)

Exploring Opinion-unaware Video Quality Assessment with Semantic Affinity Criterion

2 code implementations · 26 Feb 2023 · HaoNing Wu, Liang Liao, Jingwen Hou, Chaofeng Chen, Erli Zhang, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Recent learning-based video quality assessment (VQA) algorithms are expensive to implement due to the cost of data collection of human quality opinions, and are less robust across various scenarios due to the biases of these opinions.

Video Quality Assessment · Visual Question Answering (VQA)

Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment

4 code implementations · 11 Oct 2022 · HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Jinwei Gu, Weisi Lin

On the other hand, existing practices such as resizing and cropping change the quality of the original videos through the loss of details and content, and are therefore harmful to quality assessment.

Ranked #2 on Video Quality Assessment on KoNViD-1k (using extra training data)

Video Quality Assessment · Visual Question Answering (VQA)

Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment

1 code implementation · 8 Jul 2022 · Liang Liao, Kangmin Xu, HaoNing Wu, Chaofeng Chen, Wenxiu Sun, Qiong Yan, Weisi Lin

Experiments show that the perceptual representation in the HVS is an effective way of predicting subjective temporal quality, and thus TPQI can, for the first time, achieve comparable performance to the spatial quality metric and be even more effective in assessing videos with large temporal variations.

Video Quality Assessment · Visual Question Answering (VQA)

FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling

4 code implementations · 6 Jul 2022 · HaoNing Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
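Fragment sampling splices small patches from a uniform spatial grid back into a compact mini-input, keeping quality-sensitive local detail at a fraction of the pixels. The sketch below is a simplified, single-frame illustration on plain 2D lists, not the released FAST-VQA implementation; the function name and parameters are invented here.

```python
import random

def sample_fragments(frame, grid=4, patch=2, seed=0):
    """Splice one patch from each grid cell into a mini-frame.

    `frame` is a 2D list (H x W). The frame is split into grid x grid
    cells; one patch x patch crop is taken from a random position
    inside each cell, and the crops are stitched back in grid order.
    """
    rng = random.Random(seed)
    h, w = len(frame), len(frame[0])
    ch, cw = h // grid, w // grid          # cell size
    out = [[0] * (grid * patch) for _ in range(grid * patch)]
    for gy in range(grid):
        for gx in range(grid):
            # Random top-left corner of the patch inside this cell.
            y = gy * ch + rng.randrange(ch - patch + 1)
            x = gx * cw + rng.randrange(cw - patch + 1)
            for dy in range(patch):
                for dx in range(patch):
                    out[gy * patch + dy][gx * patch + dx] = frame[y + dy][x + dx]
    return out

# An 8x8 frame with 2x2 cells: each cell IS the patch, so the
# stitched output reproduces the frame exactly.
frame = [[r * 8 + c for c in range(8)] for r in range(8)]
print(sample_fragments(frame) == frame)  # → True
```

For larger frames (e.g. 16x16 with the defaults) the output is still 8x8, which is what makes the sampling efficient end to end.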

Ranked #3 on Video Quality Assessment on LIVE-VQC (using extra training data)

Video Quality Assessment

DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment

1 code implementation · 20 Jun 2022 · HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

Building on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues.

Time Series Analysis · Video Quality Assessment +1

LAR-SR: A Local Autoregressive Model for Image Super-Resolution

1 code implementation · CVPR 2022 · Baisong Guo, Xiaoyun Zhang, HaoNing Wu, Yu Wang, Ya Zhang, Yan-Feng Wang

Previous super-resolution (SR) approaches often formulate SR as a regression problem with pixel-wise restoration, which leads to blurry and unrealistic SR outputs.

Image Super-Resolution
