Search Results for author: Chuanguang Yang

Found 35 papers, 20 papers with code

SepPrune: Structured Pruning for Efficient Deep Speech Separation

1 code implementation • 17 May 2025 • Yuqi Li, Kai Li, Xin Yin, Zhifei Yang, Junhao Dong, Zeyu Dong, Chuanguang Yang, YingLi Tian, Yao Lu

Although deep learning has substantially advanced speech separation in recent years, most existing studies continue to prioritize separation quality while overlooking computational efficiency, an essential factor for low-latency speech processing in real-time applications.

channel selection • Computational Efficiency • +1

PrePrompt: Predictive prompting for class incremental learning

1 code implementation • 13 May 2025 • Libo Huang, Zhulin An, Chuanguang Yang, Boyu Diao, Fei Wang, Yan Zeng, Zhifeng Hao, Yongjun Xu

Class Incremental Learning (CIL) based on pre-trained models offers a promising direction for open-world continual learning.

Classifier calibration • class-incremental learning • +3

Efficient Continual Learning through Frequency Decomposition and Integration

no code implementations • 28 Mar 2025 • Ruiqi Liu, Boyu Diao, Libo Huang, Hangda Liu, Chuanguang Yang, Zhulin An, Yongjun Xu

Inspired by this, we propose the Frequency Decomposition and Integration Network (FDINet), a novel framework that decomposes and integrates information across frequencies.

Continual Learning

Enhancing Image Generation Fidelity via Progressive Prompts

1 code implementation • 13 Jan 2025 • Zhen Xiong, Yuqi Li, Chuanguang Yang, Tiao Tan, Zhihong Zhu, Siyuan Li, Yue Ma

We find that deeper layers are always responsible for high-level content control, while shallow layers handle low-level content control.

Diversity • Image Generation • +3

FedKD-hybrid: Federated Hybrid Knowledge Distillation for Lithography Hotspot Detection

1 code implementation • 7 Jan 2025 • Yuqi Li, Xingyou Lin, Kai Zhang, Chuanguang Yang, Zhongliang Guo, Jianping Gou, Yanli Li

Federated Learning (FL) provides novel solutions for machine learning (ML)-based lithography hotspot detection (LHD) under distributed privacy-preserving settings.

Federated Learning • Knowledge Distillation • +1

ECG-guided individual identification via PPG

no code implementations • 30 Dec 2024 • Riling Wei, Hanjie Chen, Kelu Yao, Chuanguang Yang, Jun Wang, Chao Li

To this end, electrocardiogram (ECG) signals have been introduced as a novel modality to enhance the density of input information.

Knowledge Distillation

MPQ-DM: Mixed Precision Quantization for Extremely Low Bit Diffusion Models

1 code implementation • 16 Dec 2024 • Weilun Feng, Haotong Qin, Chuanguang Yang, Zhulin An, Libo Huang, Boyu Diao, Fei Wang, Renshuai Tao, Yongjun Xu, Michele Magno

However, the existing quantization methods for diffusion models still cause severe degradation in performance, especially under extremely low bit-widths (2-4 bit).

Quantization
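
For orientation only: at such bit-widths every weight is snapped onto a handful of levels, which is where the degradation comes from. Below is a minimal sketch of a generic symmetric uniform quantizer, not the paper's mixed-precision method; the function name and the naive per-tensor scaling are illustrative assumptions.

```python
import torch

def uniform_quantize(w: torch.Tensor, bits: int = 2) -> torch.Tensor:
    # Symmetric uniform quantization onto 2**bits signed levels (toy example).
    qmax = 2 ** (bits - 1) - 1               # 1 for 2-bit, 7 for 4-bit
    scale = w.abs().max() / qmax             # naive per-tensor scale
    q = torch.round(w / scale).clamp(-qmax - 1, qmax)
    return q * scale                         # dequantize for simulated inference
```

At 2 bits this leaves only four levels per tensor, which illustrates why naive uniform schemes degrade sharply and why a mixed-precision method would allocate more bits to sensitive layers.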

SGLP: A Similarity Guided Fast Layer Partition Pruning for Compressing Large Deep Models

1 code implementation • 14 Oct 2024 • Yuqi Li, Yao Lu, Zeyu Dong, Chuanguang Yang, Yihao Chen, Jianping Gou

Based on the similarity matrix derived from CKA, we employ Fisher Optimal Segmentation to partition the network into multiple segments, which provides a basis for removing layers in a segment-wise manner.

Computational Efficiency • image-classification • +1
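
The similarity measure referenced here, centered kernel alignment (CKA), has a simple closed form in its linear variant (Kornblith et al., 2019). The sketch below computes a pairwise layer-similarity matrix from per-layer activations; it is a minimal illustration only, and the Fisher Optimal Segmentation step that SGLP runs on top of this matrix is omitted.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    # X: (n, d1) and Y: (n, d2) are activations of two layers on the same n inputs
    X = X - X.mean(axis=0, keepdims=True)    # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# acts: hypothetical list of per-layer activation matrices on a probe batch
# S = np.array([[linear_cka(a, b) for b in acts] for a in acts])
```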

Relational Diffusion Distillation for Efficient Image Generation

1 code implementation • 10 Oct 2024 • Weilun Feng, Chuanguang Yang, Zhulin An, Libo Huang, Boyu Diao, Fei Wang, Yongjun Xu

Therefore, many training-free sampling methods have been proposed to reduce the number of sampling steps required for diffusion models.

Image Generation • Knowledge Distillation

Prototype-Driven Multi-Feature Generation for Visible-Infrared Person Re-identification

1 code implementation • 9 Sep 2024 • Jiarui Li, Zhen Qiu, Yilin Yang, Yuqi Li, Zeyu Dong, Chuanguang Yang

The primary challenges in visible-infrared person re-identification arise from the differences between visible (vis) and infrared (ir) images, including inter-modal and intra-modal variations.

Diversity • Person Re-Identification

Online Policy Distillation with Decision-Attention

no code implementations • 8 Jun 2024 • Xinqiang Yu, Chuanguang Yang, Chengqing Yu, Libo Huang, Zhulin An, Yongjun Xu

However, the teacher-student framework requires a well-trained teacher model, which is computationally expensive. In light of online knowledge distillation, we study the knowledge transfer between different policies that can learn diverse knowledge from the same environment. In this work, we propose Online Policy Distillation (OPD) with Decision-Attention (DA), an online learning framework in which different policies operate in the same environment, learn different perspectives of the environment, and transfer knowledge to each other to obtain better performance together.

Deep Reinforcement Learning • Knowledge Distillation • +1

Exemplar-Free Class Incremental Learning via Incremental Representation

no code implementations • 24 Mar 2024 • Libo Huang, Zhulin An, Yan Zeng, Chuanguang Yang, Xinqiang Yu, Yongjun Xu

Exemplar-Free Class Incremental Learning (efCIL) aims to continuously incorporate the knowledge from new classes while retaining previously learned information, without storing any old-class exemplars (i.e., samples).

class-incremental learning • Class Incremental Learning • +2

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation

no code implementations • 19 Jun 2023 • Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu

Knowledge Distillation (KD) aims to optimize a lightweight network from the perspective of over-parameterized training.

Knowledge Distillation • Relation
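
As a concrete anchor for the response-based category, the classic logit-matching loss (Hinton et al.) transfers the teacher's softened output distribution to the student; a minimal sketch, with the temperature T as the usual hyperparameter:

```python
import torch.nn.functional as F

def response_kd_loss(student_logits, teacher_logits, T: float = 4.0):
    # KL divergence between temperature-softened distributions; the T*T
    # factor keeps gradient magnitudes comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```

Feature-based and relation-based variants replace the logits with intermediate feature maps and with pairwise relations between samples, respectively.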

Team AcieLee: Technical Report for EPIC-SOUNDS Audio-Based Interaction Recognition Challenge 2023

no code implementations • 15 Jun 2023 • Yuqi Li, Yizhi Luo, Xiaoshuai Hao, Chuanguang Yang, Zhulin An, Dantong Song, Wei Yi

In this report, we describe the technical details of our submission to the EPIC-SOUNDS Audio-Based Interaction Recognition Challenge 2023, by Team "AcieLee" (username: Yuqi_Li).

eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation

no code implementations • 20 Apr 2023 • Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu

Most successful CIL methods incrementally train a feature extractor with the aid of stored exemplars, or estimate the feature distribution with the stored prototypes.

class-incremental learning • Class Incremental Learning • +1

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition

1 code implementation • 11 Aug 2022 • Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang

MixSKD mutually distills feature maps and probability distributions between the random pair of original images and their mixup images in a meaningful way.

Data Augmentation • image-classification • +6
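
A minimal sketch of the probability-distribution half of that idea: distill between the prediction on a mixup image and the matching mixture of predictions on the originals. The feature-map branch and MixSKD's exact architecture are omitted, and `alpha` and `T` are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mixup_distill_loss(model, x1, x2, alpha: float = 0.2, T: float = 4.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1 - lam) * x2                     # mixup image
    with torch.no_grad():                                  # mixed soft targets
        p_target = lam * F.softmax(model(x1) / T, dim=1) \
                   + (1 - lam) * F.softmax(model(x2) / T, dim=1)
    log_p_mix = F.log_softmax(model(x_mix) / T, dim=1)
    return F.kl_div(log_p_mix, p_target, reduction="batchmean") * T * T
```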

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

2 code implementations • 23 Jul 2022 • Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang

This enables each network to learn extra contrastive knowledge from others, leading to better feature representations, thus improving the performance of visual recognition tasks.

Contrastive Learning • image-classification • +4

Localizing Semantic Patches for Accelerating Image Classification

1 code implementation • 7 Jun 2022 • Chuanguang Yang, Zhulin An, Yongjun Xu

This ensures the exact mapping from a high-level spatial location to the specific input image patch.

Classification • General Classification • +2

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

1 code implementation • CVPR 2022 • Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang

Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples.

Knowledge Distillation • Segmentation • +1

Prior Gradient Mask Guided Pruning-Aware Fine-Tuning

1 code implementation • AAAI 2022 • Linhang Cai, Zhulin An, Chuanguang Yang, Yangchun Yan, Yongjun Xu

In detail, the proposed PGMPF selectively suppresses the gradient of those “unimportant” parameters via a prior gradient mask generated by the pruning criterion during fine-tuning.

image-classification • Image Classification
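
In PyTorch terms, suppressing gradients through a fixed 0/1 mask can be wired up with parameter hooks; the sketch below is a generic illustration (the mask construction from the pruning criterion is assumed given, and none of these names come from the authors' released code).

```python
import torch

def register_gradient_masks(model: torch.nn.Module, masks: dict) -> None:
    # masks: parameter name -> 0/1 tensor marking "important" entries.
    # During fine-tuning, gradients of masked-out entries are zeroed,
    # so "unimportant" weights stop receiving updates.
    for name, param in model.named_parameters():
        if name in masks:
            mask = masks[name].to(param.device)
            param.register_hook(lambda grad, m=mask: grad * m)
```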

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution

1 code implementation • 7 Sep 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

Each auxiliary branch is guided to learn the self-supervision augmented task and to distill this distribution from teacher to student.

image-classification • Image Classification • +4

Hierarchical Self-supervised Augmented Knowledge Distillation

1 code implementation • 29 Jul 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We therefore adopt an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and self-supervised auxiliary task.

Knowledge Distillation • Representation Learning

Mutual Contrastive Learning for Visual Representation Learning

1 code implementation • 26 Apr 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning.

Contrastive Learning • Few-Shot Learning • +6

Learning Heatmap-Style Jigsaw Puzzles Provides Good Pretraining for 2D Human Pose Estimation

no code implementations • 13 Dec 2020 • Kun Zhang, Rui Wu, Ping Yao, Kai Deng, Ding Li, Renbiao Liu, Chuanguang Yang, Ge Chen, Min Du, Tianyao Zheng

We note that the 2D pose estimation task is highly dependent on the contextual relationship between image patches; thus, we introduce a self-supervised method for pretraining 2D pose estimation networks.

2D Human Pose Estimation • 2D Pose Estimation • +1

Softer Pruning, Incremental Regularization

no code implementations • 19 Oct 2020 • Linhang Cai, Zhulin An, Chuanguang Yang, Yongjun Xu

Network pruning is widely used to compress Deep Neural Networks (DNNs).

Network Pruning

Multi-view Contrastive Learning for Online Knowledge Distillation

1 code implementation • 7 Jun 2020 • Chuanguang Yang, Zhulin An, Yongjun Xu

Previous Online Knowledge Distillation (OKD) often carries out mutually exchanging probability distributions, but neglects the useful representational knowledge.

Classification • Contrastive Learning • +5

Localizing Interpretable Multi-scale informative Patches Derived from Media Classification Task

no code implementations • 31 Jan 2020 • Chuanguang Yang, Zhulin An, Xiaolong Hu, Hui Zhu, Yongjun Xu

Deep convolutional neural networks (CNNs) typically rely on wider receptive fields (RF) and more complex non-linearities to achieve state-of-the-art performance, at the cost of making it increasingly difficult to interpret how relevant patches contribute to the final prediction.

General Classification • Image Classification

DRNet: Dissect and Reconstruct the Convolutional Neural Network via Interpretable Manners

no code implementations • 20 Nov 2019 • Xiaolong Hu, Zhulin An, Chuanguang Yang, Hui Zhu, Kaiqiang Xu, Yongjun Xu

For VGG16 pre-trained on ImageNet, our method gains an average of 14.29% accuracy improvement on two-class sub-tasks.

Rethinking the Number of Channels for the Convolutional Neural Network

no code implementations • 4 Sep 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Xiaolong Hu, Kaiqiang Xu, Yongjun Xu

In this paper, we propose a method for efficient automatic architecture search that specializes in the widths of networks rather than the connections of the neural architecture.

Neural Architecture Search

Gated Convolutional Networks with Hybrid Connectivity for Image Classification

1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu

We propose a simple yet effective method to reduce the redundancy of DenseNet by substantially decreasing the number of stacked modules, replacing the original bottleneck with our SMG module, which is augmented by a local residual connection.

Adversarial Defense • Classification • +3

Multi-Objective Pruning for CNNs Using Genetic Algorithm

no code implementations • 2 Jun 2019 • Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu

In this work, we propose a heuristic genetic algorithm (GA) for pruning convolutional neural networks (CNNs) according to the multi-objective trade-off among error, computation and sparsity.
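
To make the trade-off concrete, a toy genetic algorithm over binary channel masks can scalarize the objectives into a single fitness; everything below (the weighting, the operators, and `error_fn`) is an illustrative assumption rather than the paper's actual encoding.

```python
import random

def ga_prune(error_fn, n_channels: int, pop: int = 20, gens: int = 50,
             w_cost: float = 0.5, p_mut: float = 0.05):
    # Chromosome = 0/1 channel mask; lower fitness is better.
    def fitness(m):
        kept = sum(m) / n_channels            # proxy for computation/sparsity
        return error_fn(m) + w_cost * kept    # scalarized multi-objective
    popu = [[random.randint(0, 1) for _ in range(n_channels)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness)
        elite = popu[: pop // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_channels)          # one-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]           # bit-flip mutation
            children.append(child)
        popu = elite + children
    return min(popu, key=fitness)
```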

EENA: Efficient Evolution of Neural Architecture

1 code implementation • 10 May 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Kaiqiang Xu, Erhu Zhao, Yongjun Xu

The latest algorithms for automatic neural architecture search perform remarkably well but are largely directionless in the search space and computationally expensive in training every intermediate architecture.

General Classification • Neural Architecture Search
