Search Results for author: Biao Qian

Found 7 papers, 5 papers with code

Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting

1 code implementation 29 Mar 2024 Haipeng Liu, Yang Wang, Biao Qian, Meng Wang, Yong Rui

Denoising diffusion probabilistic models for image inpainting add noise to the image texture during the forward process and recover the masked regions from the unmasked texture via the reverse denoising process. Despite generating meaningful semantics, existing methods suffer from a semantic discrepancy between masked and unmasked regions: the semantically dense unmasked texture is never completely degraded, while the masked regions collapse to pure noise during the forward process, leaving a large gap between the two.
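
For orientation, here is a minimal sketch (not the paper's method) of how a diffusion-based inpainter typically keeps the unmasked texture on the forward-noised trajectory and composites it with the generated masked region at each reverse step; `eps_model`, `alphas`, and `alphas_cumprod` are hypothetical stand-ins for a trained noise predictor and its noise schedule.

```python
import torch

@torch.no_grad()
def inpaint_reverse_step(x_t, x0_known, mask, t, eps_model, alphas, alphas_cumprod):
    """One reverse step; mask == 1 marks unmasked (known) pixels, x_t is the current noisy image."""
    a_t, ac_t = alphas[t], alphas_cumprod[t]

    # Predict the noise and take a standard DDPM reverse step on the whole image.
    eps = eps_model(x_t, t)
    mean = (x_t - (1 - a_t) / torch.sqrt(1 - ac_t) * eps) / torch.sqrt(a_t)
    x_prev_gen = mean + torch.sqrt(1 - a_t) * torch.randn_like(x_t)

    # Re-noise the known texture to the previous timestep so both regions share one
    # noise level, then composite: known region from the forward process,
    # masked region from the reverse (generated) process.
    ac_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(ac_t)
    x_prev_known = torch.sqrt(ac_prev) * x0_known + torch.sqrt(1 - ac_prev) * torch.randn_like(x0_known)
    return mask * x_prev_known + (1 - mask) * x_prev_gen
```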

Denoising • Image Inpainting

Adaptive Data-Free Quantization

1 code implementation CVPR 2023 Biao Qian, Yang Wang, Richang Hong, Meng Wang

Data-free quantization (DFQ) recovers the performance of a quantized network (Q) without the original data by generating fake samples via a generator (G) that learns from the full-precision network (P); the generator, however, is entirely independent of Q, overlooking whether the knowledge carried by the generated samples is adaptable, i.e., informative or not, to the learning process of Q, which results in an overflow of the generalization error.
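
As background, a generic data-free quantization loop (not the paper's adaptive scheme) alternates between training G to synthesize samples that P labels confidently and distilling P's predictions on those samples into Q; all names below (G, Q, P, the optimizers, latent_dim) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dfq_step(G, Q, P, opt_g, opt_q, batch_size=64, latent_dim=100, num_classes=1000):
    z = torch.randn(batch_size, latent_dim)
    y = torch.randint(0, num_classes, (batch_size,))

    # Generator step: produce samples that the frozen full-precision network P
    # classifies as the sampled labels, so the fake data carries class information.
    fake = G(z, y)
    loss_g = F.cross_entropy(P(fake), y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Quantized-network step: distill P's soft predictions into Q on the same batch.
    with torch.no_grad():
        teacher_probs = F.softmax(P(fake), dim=1)
    loss_q = F.kl_div(F.log_softmax(Q(fake.detach()), dim=1),
                      teacher_probs, reduction="batchmean")
    opt_q.zero_grad(); loss_q.backward(); opt_q.step()
```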

Data Free Quantization

Rethinking Data-Free Quantization as a Zero-Sum Game

1 code implementation 19 Feb 2023 Biao Qian, Yang Wang, Richang Hong, Meng Wang

How to generate samples with desirable adaptability that benefit the quantized network?

Data Free Quantization

Fine-grained Cross-modal Fusion based Refinement for Text-to-Image Synthesis

1 code implementation 17 Feb 2023 Haoran Sun, Yang Wang, Haipeng Liu, Biao Qian

The proposed FF-Block integrates an attention block and several convolution layers to effectively fuse the fine-grained word-context features into the corresponding visual features, so that the text information is fully used to refine the initial image with more detail.
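
A rough, hypothetical sketch of such a fusion block (not the exact FF-Block) could let word features attend over the spatial visual features and refine the result with a few convolutions; all layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, vis_dim=128, word_dim=256):
        super().__init__()
        self.q = nn.Conv2d(vis_dim, vis_dim, 1)       # queries from image features
        self.k = nn.Linear(word_dim, vis_dim)          # keys from word features
        self.v = nn.Linear(word_dim, vis_dim)          # values from word features
        self.refine = nn.Sequential(
            nn.Conv2d(2 * vis_dim, vis_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(vis_dim, vis_dim, 3, padding=1),
        )

    def forward(self, visual, words):                  # visual: (B,C,H,W), words: (B,L,D)
        B, C, H, W = visual.shape
        q = self.q(visual).flatten(2).transpose(1, 2)  # (B, HW, C)
        k, v = self.k(words), self.v(words)            # (B, L, C)
        attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, HW, L)
        ctx = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        # Concatenate the word-context map with the visual features and refine.
        return self.refine(torch.cat([visual, ctx], dim=1))
```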

Image Generation

Switchable Online Knowledge Distillation

1 code implementation 12 Sep 2022 Biao Qian, Yang Wang, Hongzhi Yin, Richang Hong, Meng Wang

Instead of focusing on the accuracy gap at the test phase as existing arts do, the core idea of SwitOKD is to adaptively calibrate the gap at the training phase, namely the distillation gap, via a switching strategy between two modes -- expert mode (pause the teacher while keeping the student learning) and learning mode (restart the teacher).
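
The switching idea can be illustrated with a simplified training step (the threshold rule below is a hypothetical stand-in for SwitOKD's actual adaptive criterion): measure the gap between teacher and student predictions, pause the teacher's update when the gap is large, and let the teacher keep training otherwise.

```python
import torch.nn.functional as F

def switchable_kd_step(teacher, student, opt_t, opt_s, x, y, gap_threshold=0.3):
    t_logits, s_logits = teacher(x), student(x)
    t_prob, s_prob = F.softmax(t_logits, dim=1), F.softmax(s_logits, dim=1)

    # Distillation gap: how far the student's predictions lag behind the teacher's.
    gap = (t_prob - s_prob).abs().sum(dim=1).mean().item()

    # The student always learns from the labels and the (online) teacher.
    loss_s = F.cross_entropy(s_logits, y) + F.kl_div(
        F.log_softmax(s_logits, dim=1), t_prob.detach(), reduction="batchmean")
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    if gap < gap_threshold:            # learning mode: teacher keeps training
        loss_t = F.cross_entropy(t_logits, y)
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    # else: expert mode -- pause the teacher, keep only the student learning
```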

Knowledge Distillation

Diversifying Inference Path Selection: Moving-Mobile-Network for Landmark Recognition

no code implementations 1 Dec 2019 Biao Qian, Yang Wang, Zhao Zhang, Richang Hong, Meng Wang, Ling Shao

We find that M$^2$Net essentially promotes diversity in the selection of the inference path (the selected subset of blocks), thereby enhancing recognition accuracy.
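
A hypothetical sketch of per-input inference path selection (not M$^2$Net itself): a small gating head decides which shape-preserving blocks to execute for each input, so different inputs take different subsets of blocks.

```python
import torch
import torch.nn as nn

class GatedBlocks(nn.Module):
    def __init__(self, blocks, feat_dim):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)            # each block must preserve the feature shape
        self.gate = nn.Linear(feat_dim, len(blocks))   # one gate score per block

    def forward(self, x):
        # Gate decisions from globally pooled features (hard 0/1 selection at inference).
        g = torch.sigmoid(self.gate(x.mean(dim=(2, 3))))          # (B, num_blocks)
        for i, block in enumerate(self.blocks):
            keep = (g[:, i] > 0.5).float().view(-1, 1, 1, 1)
            x = keep * block(x) + (1 - keep) * x                   # skip the block if its gate is off
        return x
```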

Landmark Recognition

A Targeted Acceleration and Compression Framework for Low bit Neural Networks

no code implementations 9 Jul 2019 Biao Qian, Yang Wang

In this paper, we propose a novel Targeted Acceleration and Compression (TAC) framework to improve the performance of 1-bit deep neural networks. We consider that the acceleration and compression gains from binarizing fully connected layers are not sufficient to compensate for the accuracy loss they cause. In the proposed framework, the convolutional and fully connected layers are therefore separated and optimized individually.
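
One generic ingredient of such 1-bit networks is sign binarization with a straight-through estimator; the sketch below illustrates only that building block and is not the TAC framework's specific optimization scheme.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: pass gradients only where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()

# Illustrative usage: binarize convolutional weights while leaving fully connected
# layers in higher precision, mirroring the idea of treating the two layer types separately.
# w_bin = BinarizeSTE.apply(conv_weight)
```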

Binarization • Computational Efficiency +2
