Search Results for author: Jingwen Hou

Found 11 papers, 9 papers with code

Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

1 code implementation • 12 Nov 2023 • HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, Geng Xue, Wenxiu Sun, Qiong Yan, Weisi Lin

Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm to low-level visual perception and understanding tasks: a single model that can respond to a broad range of natural human instructions.

TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment

1 code implementation • 6 Aug 2023 • Chaofeng Chen, Jiadi Mo, Jingwen Hou, HaoNing Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin

Our approach to IQA involves the design of a heuristic coarse-to-fine network (CFANet) that leverages multi-scale features and progressively propagates multi-level semantic information to low-level representations in a top-down manner.

Image Quality Assessment • Local Distortion • +2
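
The top-down, coarse-to-fine design described above can be illustrated with a minimal sketch: semantic features from deep backbone stages are projected, upsampled, and added into progressively lower-level features before a quality score is regressed. This is not the released TOPIQ/CFANet code; the ResNet-50 backbone split, the 256-channel projection width, and the pooling head are illustrative assumptions.

```python
# Minimal sketch of top-down multi-scale fusion for IQA
# (not the official CFANet/TOPIQ implementation; layer choices are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class TopDownIQA(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Three feature stages: low (256 ch), mid (512 ch), high (2048 ch).
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)   # 1/4 res
        self.mid = backbone.layer2                                     # 1/8 res
        self.high = nn.Sequential(backbone.layer3, backbone.layer4)    # 1/32 res
        # 1x1 convs project every stage to a shared width before fusion.
        self.p_low = nn.Conv2d(256, 256, 1)
        self.p_mid = nn.Conv2d(512, 256, 1)
        self.p_high = nn.Conv2d(2048, 256, 1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(256, 1))

    def forward(self, x):
        f_low = self.stem(x)
        f_mid = self.mid(f_low)
        f_high = self.high(f_mid)
        # Top-down pass: semantic features are upsampled and added into
        # progressively lower-level representations.
        top = self.p_high(f_high)
        mid = self.p_mid(f_mid) + F.interpolate(top, size=f_mid.shape[-2:],
                                                mode="bilinear", align_corners=False)
        low = self.p_low(f_low) + F.interpolate(mid, size=f_low.shape[-2:],
                                                mode="bilinear", align_corners=False)
        return self.head(low)   # one scalar quality score per image

scores = TopDownIQA()(torch.randn(2, 3, 224, 224))   # shape: (2, 1)
```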

Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach

1 code implementation • 22 May 2023 • HaoNing Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Though subjective studies have collected overall quality scores for these videos, how these abstract quality scores relate to specific factors remains obscure, hindering VQA methods from giving more concrete quality evaluations (e.g., the sharpness of a video).

Video Quality Assessment • Visual Question Answering (VQA)

Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment

2 code implementations • 28 Apr 2023 • HaoNing Wu, Liang Liao, Annan Wang, Chaofeng Chen, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

The proliferation of videos collected in in-the-wild natural settings has driven the development of effective Video Quality Assessment (VQA) methodologies.

Video Quality Assessment • Visual Question Answering (VQA)

Exploring Opinion-unaware Video Quality Assessment with Semantic Affinity Criterion

2 code implementations • 26 Feb 2023 • HaoNing Wu, Liang Liao, Jingwen Hou, Chaofeng Chen, Erli Zhang, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Recent learning-based video quality assessment (VQA) algorithms are expensive to train because collecting human quality opinions is costly, and are less robust across diverse scenarios because of the biases in those opinions.

Video Quality Assessment • Visual Question Answering (VQA)

Neighbourhood Representative Sampling for Efficient End-to-end Video Quality Assessment

4 code implementations • 11 Oct 2022 • HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Jinwei Gu, Weisi Lin

On the other hand, existing practices such as resizing and cropping change the quality of the original videos through loss of details and content, and are therefore harmful to quality assessment.

Ranked #2 on Video Quality Assessment on KoNViD-1k (using extra training data)

Video Quality Assessment • Visual Question Answering (VQA)

FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling

4 code implementations • 6 Jul 2022 • HaoNing Wu, Chaofeng Chen, Jingwen Hou, Liang Liao, Annan Wang, Wenxiu Sun, Qiong Yan, Weisi Lin

Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.

Ranked #3 on Video Quality Assessment on LIVE-VQC (using extra training data)

Video Quality Assessment
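
To make the fragment idea named above concrete, the following is a minimal sketch of grid-based fragment sampling: raw-resolution mini-patches are taken from a uniform spatial grid and spliced into a small input, preserving local quality details without resizing the whole video. It is a simplified illustration rather than the released FAST-VQA code; the grid size, patch size, and random per-cell offsets are assumed parameters.

```python
# Sketch of grid-based fragment sampling for a video tensor
# (simplified; not the official FAST-VQA implementation).
import torch

def sample_fragments(video, grid=7, patch=32):
    """video: (T, C, H, W) frames at original resolution.
    Returns (T, C, grid*patch, grid*patch) spliced raw patches."""
    t, c, h, w = video.shape
    cell_h, cell_w = h // grid, w // grid
    out = torch.empty(t, c, grid * patch, grid * patch, dtype=video.dtype)
    for gy in range(grid):
        for gx in range(grid):
            # One random patch position per grid cell, shared across frames
            # so temporal variation inside a fragment stays meaningful.
            y0 = gy * cell_h + torch.randint(0, max(cell_h - patch, 1), (1,)).item()
            x0 = gx * cell_w + torch.randint(0, max(cell_w - patch, 1), (1,)).item()
            out[:, :, gy*patch:(gy+1)*patch, gx*patch:(gx+1)*patch] = \
                video[:, :, y0:y0+patch, x0:x0+patch]
    return out

frames = torch.rand(16, 3, 720, 1280)     # e.g. 16 frames of a 720p clip
fragments = sample_fragments(frames)      # -> (16, 3, 224, 224)
```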

DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment

1 code implementation • 20 Jun 2022 • HaoNing Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Weisi Lin

Based on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues.

Time Series Analysis • Video Quality Assessment • +1

Distilling Knowledge from Object Classification to Aesthetics Assessment

no code implementations • 2 Jun 2022 • Jingwen Hou, Henghui Ding, Weisi Lin, Weide Liu, Yuming Fang

To deal with this dilemma, we propose to distill knowledge on semantic patterns for a vast variety of image contents from multiple pre-trained object classification (POC) models to an IAA model.

Classification • Object
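
A minimal sketch of the distillation setup described above, under stated assumptions: frozen pre-trained object classification (POC) teachers provide semantic feature targets that supervise an aesthetics (IAA) student alongside the score regression loss. The teacher/student choices, projection layers, feature-matching loss, and loss weight below are illustrative, not the paper's exact formulation.

```python
# Sketch of distilling semantic knowledge from frozen classification
# teachers into an aesthetics (IAA) student (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

teachers = [models.resnet50(weights="IMAGENET1K_V1"),
            models.mobilenet_v3_large(weights="IMAGENET1K_V1")]
for t in teachers:
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)      # teachers stay frozen

student = models.resnet18(weights=None)
student.fc = nn.Identity()           # 512-d semantic features
score_head = nn.Linear(512, 1)       # aesthetic score regressor
# One projection per teacher so the 512-d student features can be matched
# against each teacher's penultimate feature width (2048 / 960 here).
proj = nn.ModuleList([nn.Linear(512, 2048), nn.Linear(512, 960)])

def teacher_features(model, x):
    # Penultimate (pre-classifier) features of a torchvision classifier.
    feats = nn.Sequential(*list(model.children())[:-1])(x)
    return torch.flatten(feats, 1)

def loss_fn(images, mos, distill_weight=0.5):
    s = student(images)
    score_loss = F.mse_loss(score_head(s).squeeze(1), mos)
    distill_loss = sum(F.mse_loss(p(s), teacher_features(t, images))
                       for p, t in zip(proj, teachers)) / len(teachers)
    return score_loss + distill_weight * distill_loss

images, mos = torch.rand(4, 3, 224, 224), torch.rand(4)   # dummy batch
print(loss_fn(images, mos))
```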
