Search Results for author: Chaofei Wang

Found 11 papers, 6 papers with code

Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models

1 code implementation • 7 Dec 2023 • Jiayi Guo, Xingqian Xu, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi

Specifically, we introduce Step-wise Variation Regularization to enforce that the ratio between the variation of an arbitrary input latent and the variation of the output image remains constant at any diffusion training step.
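A minimal sketch of what such a step-wise variation regularizer could look like (names are illustrative, not the paper's code; `decode` stands for whatever maps a latent batch to images at the current training step):

```python
# Hypothetical sketch: penalize deviation of the ratio between output
# variation and latent variation from a constant target c.
import torch

def variation_regularizer(decode, z, c=1.0, eps=1e-2):
    """decode: callable mapping latents to images; z: (B, ...) latents."""
    dz = eps * torch.randn_like(z)                 # small latent perturbation
    x, x_pert = decode(z), decode(z + dz)
    out_var = (x_pert - x).flatten(1).norm(dim=1)  # output variation per sample
    in_var = dz.flatten(1).norm(dim=1)             # latent variation per sample
    return ((out_var / in_var - c) ** 2).mean()    # keep the ratio near c

# Usage (assumed): total_loss = diffusion_loss + lam * variation_regularizer(decode, z)
```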

Latency-aware Unified Dynamic Networks for Efficient Image Recognition

1 code implementation • 30 Aug 2023 • Yizeng Han, Zeyu Liu, Zhihang Yuan, Yifan Pu, Chaofei Wang, Shiji Song, Gao Huang

Dynamic computation has emerged as a promising avenue to enhance the inference efficiency of deep networks.

Scheduling

Computation-efficient Deep Learning for Computer Vision: A Survey

no code implementations • 27 Aug 2023 • Yulin Wang, Yizeng Han, Chaofei Wang, Shiji Song, Qi Tian, Gao Huang

Over the past decade, deep learning models have exhibited considerable advancements, reaching or even exceeding human-level performance in a range of visual perception tasks.

Autonomous Vehicles • Edge-computing +1

Zero-shot Generative Model Adaptation via Image-specific Prompt Learning

1 code implementation • CVPR 2023 • Jiayi Guo, Chaofei Wang, You Wu, Eric Zhang, Kai Wang, Xingqian Xu, Shiji Song, Humphrey Shi, Gao Huang

Recently, CLIP-guided image synthesis has shown appealing performance in adapting a pre-trained source-domain generator to an unseen target domain; a representative guidance loss from this line of work is sketched below.

Image Generation
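For context, a minimal sketch of the CLIP directional loss commonly used for this kind of generator adaptation (not necessarily the paper's exact objective, which learns image-specific prompts rather than fixed text; images are assumed to be CLIP-preprocessed 224x224 tensors):

```python
# Directional CLIP guidance: align the image-space edit direction
# (adapted vs. source output) with the text-space domain direction.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def directional_loss(img_src, img_tgt, text_src, text_tgt):
    t = model.encode_text(clip.tokenize([text_src, text_tgt]).to(device))
    text_dir = t[1] - t[0]
    text_dir = text_dir / text_dir.norm()
    img_dir = model.encode_image(img_tgt) - model.encode_image(img_src)
    img_dir = img_dir / img_dir.norm(dim=-1, keepdim=True)
    return (1 - img_dir @ text_dir).mean()        # cosine distance
```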

Efficient Knowledge Distillation from Model Checkpoints

1 code implementation • 12 Oct 2022 • Chaofei Wang, Qisen Yang, Rui Huang, Shiji Song, Gao Huang

Knowledge distillation is an effective approach to learning compact models (students) under the supervision of large, strong models (teachers); the standard objective is sketched below.

Knowledge Distillation
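The standard distillation objective (a generic sketch, not the paper's checkpoint-selection method) softens both models' logits with a temperature and matches the student to the teacher via KL divergence, alongside the usual cross-entropy term:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale soft-target gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard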

Learning to Weight Samples for Dynamic Early-exiting Networks

1 code implementation • 17 Sep 2022 • Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, Shiji Song, Junfen Cao, Wenhui Huang, Chao Deng, Gao Huang

Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers; a weighted multi-exit objective is sketched below.

Meta-Learning
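An illustrative sample-weighted multi-exit loss (the paper learns the weights with a meta-learned weighting network, which is omitted here; `weights` is a placeholder for that output):

```python
import torch
import torch.nn.functional as F

def weighted_exit_loss(exit_logits, labels, weights):
    """exit_logits: list of (B, C) logits, one per exit;
    weights: (num_exits, B) per-sample weights, e.g. larger for easy
    samples at early exits."""
    total = 0.0
    for k, logits in enumerate(exit_logits):
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        total = total + (weights[k] * per_sample).mean()
    return total
```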

Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment

2 code implementations • CVPR 2022 • Jiayu Xiao, Liang Li, Chaofei Wang, Zheng-Jun Zha, Qingming Huang

A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with only a few samples, termed few-shot generative model adaption; see the sketch below.

Generative Adversarial Network
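A rough sketch of this adaptation setup (illustrative only; the paper's relaxed spatial structural alignment is replaced here by a simple consistency proxy against the frozen source generator):

```python
import copy
import torch
import torch.nn.functional as F

def adaptation_losses(G_source, G_target, D, z):
    """G_target starts as copy.deepcopy(G_source); D is trained on the
    few target samples. Returns adversarial and alignment terms."""
    fake = G_target(z)
    adv = F.softplus(-D(fake)).mean()              # non-saturating G loss
    with torch.no_grad():
        anchor = G_source(z)                       # frozen source output
    align = F.l1_loss(fake, anchor)                # crude structural-consistency proxy
    return adv, align

# Usage (assumed): total = adv + lam * align
```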

Learn From the Past: Experience Ensemble Knowledge Distillation

no code implementations • 25 Feb 2022 • Chaofei Wang, Shaowei Zhang, Shiji Song, Gao Huang

We uniformly save a moderate number of intermediate models from the teacher's training process, and then integrate the knowledge of these intermediate models with an ensemble technique, as sketched below.

Knowledge Distillation • Transfer Learning
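A sketch of distilling from an ensemble of saved teacher checkpoints (illustrative; the averaging scheme and hyperparameters are assumptions): average the checkpoints' softened probabilities and use the mixture as the teacher signal.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_teacher_probs(checkpoints, x, T=4.0):
    """checkpoints: teacher snapshots saved uniformly during training."""
    probs = [F.softmax(m(x) / T, dim=1) for m in checkpoints]
    return torch.stack(probs).mean(dim=0)          # average checkpoint knowledge

def experience_kd_loss(student_logits, teacher_probs, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    teacher_probs, reduction="batchmean") * (T * T)
    return alpha * soft + (1 - alpha) * F.cross_entropy(student_logits, labels)
```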

Fine-Grained Few Shot Learning with Foreground Object Transformation

no code implementations • 13 Sep 2021 • Chaofei Wang, Shiji Song, Qisen Yang, Xiang Li, Gao Huang

As a data augmentation method, FOT can be conveniently applied to any existing few-shot learning algorithm, greatly improving its performance on FG-FSL tasks; an illustrative transform is sketched below.

Data Augmentation • Few-Shot Learning +2
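A purely illustrative foreground-transformation augmentation (the abstract does not specify FOT's exact operations; the mask source and compositing here are assumptions): geometrically transform the foreground object and paste it back onto the image.

```python
import torch
import torchvision.transforms.functional as TF

def foreground_augment(image, mask, angle=15.0, scale=1.1):
    """image: (C, H, W) tensor; mask: (1, H, W) binary foreground mask."""
    fg = image * mask
    fg_t = TF.affine(fg, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])
    m_t = TF.affine(mask, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])
    return fg_t + image * (1 - m_t)  # transformed object on the original background
```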

CAM-loss: Towards Learning Spatially Discriminative Feature Representations

no code implementations • ICCV 2021 • Chaofei Wang, Jiayu Xiao, Yizeng Han, Qisen Yang, Shiji Song, Gao Huang

The backbone of a traditional CNN classifier is generally regarded as a feature extractor, followed by a linear layer that performs the classification; a minimal CAM computation for this setup is sketched below.

Few-Shot Learning • Image Classification +2
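The standard class activation map (CAM) for such a feature-extractor-plus-linear-layer classifier, shown here to illustrate the setup the paper builds on (the CAM-loss itself compares such maps, which is omitted):

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """features: (B, K, H, W) backbone maps; fc_weight: (C, K) linear
    weights; class_idx: (B,) target labels."""
    w = fc_weight[class_idx]                       # (B, K) one weight row per sample
    cam = torch.einsum("bk,bkhw->bhw", w, features)
    cam = F.relu(cam)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)  # normalize to [0, 1]
```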
