Search Results for author: Zhulin An

Found 27 papers, 14 papers with code

Exemplar-Free Class Incremental Learning via Incremental Representation

no code implementations • 24 Mar 2024 • Libo Huang, Zhulin An, Yan Zeng, Chuanguang Yang, Xinqiang Yu, Yongjun Xu

Exemplar-Free Class Incremental Learning (efCIL) aims to continuously incorporate the knowledge from new classes while retaining previously learned information, without storing any old-class exemplars (i.e., samples).

Class Incremental Learning Incremental Learning
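
For orientation, the efCIL setting itself can be outlined as a short protocol sketch in Python. This illustrates the problem setting only, not the paper's method; `update` and `evaluate` are hypothetical placeholders.

```python
def efcil_protocol(model, tasks, update, evaluate):
    """Outline of the efCIL setting: tasks arrive as disjoint class sets;
    task data is discarded after each update (no exemplar buffer), yet the
    model is evaluated jointly over all classes seen so far."""
    seen_classes = []
    for data, classes in tasks:
        update(model, data)            # no old-class exemplars available
        seen_classes += classes        # task data is dropped after this step
        evaluate(model, seen_classes)  # joint evaluation over seen classes
```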

E2Net: Resource-Efficient Continual Learning with Elastic Expansion Network

1 code implementation • 28 Sep 2023 • Ruiqi Liu, Boyu Diao, Libo Huang, Zhulin An, Yongjun Xu

In E2Net, we propose Representative Network Distillation, which identifies the representative core subnet by assessing its parameter count and its output similarity with the working network, and distills analogous subnets within the working network, mitigating reliance on rehearsal buffers and facilitating knowledge transfer across previous tasks.

Continual Learning Transfer Learning
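
The snippet compresses several steps; below is a rough sketch of the subnet-scoring idea only, under the assumption that candidates are ranked by a trade-off between parameter count and output similarity to the working network. `score_subnet` and the weight `lam` are hypothetical, not from the paper.

```python
import torch
import torch.nn.functional as F

def score_subnet(subnet, working_net, batch, lam=0.5):
    """Hypothetical scoring of a candidate core subnet: reward high output
    similarity to the working network, penalize relative parameter count."""
    with torch.no_grad():
        sim = F.cosine_similarity(subnet(batch).flatten(1),
                                  working_net(batch).flatten(1), dim=1).mean()
    ratio = (sum(p.numel() for p in subnet.parameters())
             / sum(p.numel() for p in working_net.parameters()))
    return sim - lam * ratio  # pick the highest-scoring candidate subnet
```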

CLIP-KD: An Empirical Study of Distilling CLIP Models

1 code implementation • 24 Jul 2023 • Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu

CLIP has become a promising language-supervised visual pre-training framework and achieves excellent performance over a wide range of tasks.

Contrastive Learning Cross-Modal Retrieval +2

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation

no code implementations • 19 Jun 2023 • Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu

Knowledge Distillation (KD) aims to optimize a lightweight network from the perspective of over-parameterized training.

Knowledge Distillation Relation
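
The response-based category in this taxonomy corresponds to the classic logit-matching objective of Hinton et al.; a minimal sketch follows, where the temperature `T` and weight `alpha` are illustrative hyperparameters, not values from the survey.

```python
import torch.nn.functional as F

def response_based_kd(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic logit-matching KD: soften both distributions with temperature
    T, match them with KL divergence, and blend with the cross-entropy on
    ground-truth labels."""
    log_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    distill = F.kl_div(log_s, p_t, reduction="batchmean") * T * T  # T^2 rescales gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```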

Team AcieLee: Technical Report for EPIC-SOUNDS Audio-Based Interaction Recognition Challenge 2023

no code implementations • 15 Jun 2023 • Yuqi Li, Yizhi Luo, Xiaoshuai Hao, Chuanguang Yang, Zhulin An, Dantong Song, Wei Yi

In this report, we describe the technical details of our submission to the EPIC-SOUNDS Audio-Based Interaction Recognition Challenge 2023, by Team "AcieLee" (username: Yuqi_Li).

Modeling Dual Period-Varying Preferences for Takeaway Recommendation

1 code implementation • 7 Jun 2023 • Yuting Zhang, Yiqing Wu, Ran Le, Yongchun Zhu, Fuzhen Zhuang, Ruidong Han, Xiang Li, Wei Lin, Zhulin An, Yongjun Xu

Different from traditional recommendation, takeaway recommendation faces two main challenges: (1) Dual Interaction-Aware Preference Modeling.

Recommendation Systems

eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation

no code implementations • 20 Apr 2023 • Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu

Most successful CIL methods incrementally train a feature extractor with the aid of stored exemplars, or estimate the feature distribution with the stored prototypes.

Class Incremental Learning Incremental Learning

Lung Nodule Segmentation and Uncertain Region Prediction with an Uncertainty-Aware Attention Mechanism

no code implementations • 15 Mar 2023 • Han Yang, Qiuli Wang, Yue Zhang, Zhulin An, Chen Liu, Xiaohong Zhang, S. Kevin Zhou

Radiologists possess diverse training and clinical experiences, leading to variations in the segmentation annotations of lung nodules and resulting in segmentation uncertainty. Conventional methods typically select a single annotation as the learning target or attempt to learn a latent space comprising multiple annotations.

Lung Nodule Segmentation Segmentation

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition

1 code implementation • 11 Aug 2022 • Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang

MixSKD mutually distills feature maps and probability distributions between random pairs of original images and their mixup images.

Data Augmentation Image Classification +5
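
A minimal sketch of the probability-level part of this idea, assuming a Beta-sampled mixup coefficient and a stop-gradient on the interpolated target; the mutual (two-way) formulation and the feature-map distillation are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def mixskd_probability_loss(model, x, alpha=1.0):
    """One direction of the probability-level consistency: the prediction on
    a mixup image should match the mixup of predictions on the two sources."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    log_p_mix = F.log_softmax(model(x_mix), dim=1)
    with torch.no_grad():  # interpolated target treated as fixed here
        p_target = (lam * F.softmax(model(x), dim=1)
                    + (1 - lam) * F.softmax(model(x[idx]), dim=1))
    return F.kl_div(log_p_mix, p_target, reduction="batchmean")
```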

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

2 code implementations • 23 Jul 2022 • Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang

This enables each network to learn extra contrastive knowledge from the others, leading to better feature representations and thus improved performance on visual recognition tasks.

Contrastive Learning Image Classification +3

Localizing Semantic Patches for Accelerating Image Classification

1 code implementation • 7 Jun 2022 • Chuanguang Yang, Zhulin An, Yongjun Xu

This ensures the exact mapping from a high-level spatial location to the specific input image patch.

Classification General Classification +1

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

1 code implementation • CVPR 2022 • Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang

Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples.

Knowledge Distillation Segmentation +1
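
For intuition, the "structured information generated from individual data samples" that the snippet contrasts against can be sketched as within-image pairwise-similarity distillation. CIRKD's contribution is to extend such relations across images and a memory bank, which this sketch does not implement.

```python
import torch
import torch.nn.functional as F

def intra_image_relation_loss(f_s, f_t):
    """Match the pixel-to-pixel similarity structure of student and teacher
    features. f_s, f_t: (B, C, H, W) maps with matching spatial size; in
    practice channel dimensions differ and need a projection (omitted)."""
    def sim(f):
        z = F.normalize(f.flatten(2).transpose(1, 2), dim=2)  # (B, HW, C)
        return z @ z.transpose(1, 2)                           # (B, HW, HW)
    return F.mse_loss(sim(f_s), sim(f_t))
```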

Prior Gradient Mask Guided Pruning-Aware Fine-Tuning

1 code implementation • AAAI 2022 • Linhang Cai, Zhulin An, Chuanguang Yang, Yangchun Yan, Yongjun Xu

Specifically, the proposed PGMPF selectively suppresses the gradients of "unimportant" parameters via a prior gradient mask generated by the pruning criterion during fine-tuning.

Image Classification
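
The masking step itself is simple to sketch. Assuming 0/1 masks precomputed by a pruning criterion (e.g., L1 filter norms), the gradients of pruned weights are zeroed between the backward pass and the optimizer step; this illustrates the general mechanism, not the paper's exact procedure.

```python
def apply_prior_gradient_mask(model, masks):
    """Zero the gradients of parameters a pruning criterion marked
    "unimportant" (mask value 0). `masks` maps parameter name -> 0/1
    tensor, assumed precomputed. Call after loss.backward() and before
    optimizer.step(), so fine-tuning concentrates on the kept weights."""
    for name, param in model.named_parameters():
        if name in masks and param.grad is not None:
            param.grad.mul_(masks[name])  # gradient survives only where mask == 1
```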

Lifelong Generative Learning via Knowledge Reconstruction

no code implementations • 17 Jan 2022 • Libo Huang, Zhulin An, Xiang Zhi, Yongjun Xu

Generative models often incur the catastrophic forgetting problem when they are used to sequentially learn multiple tasks, i.e., lifelong generative learning.

Generative Adversarial Network

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution

1 code implementation • 7 Sep 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

Each auxiliary branch is guided to learn the self-supervision augmented task and to distill this distribution from teacher to student.

Image Classification Knowledge Distillation +3

Hierarchical Self-supervised Augmented Knowledge Distillation

1 code implementation • 29 Jul 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We therefore adopt an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and self-supervised auxiliary task.

Knowledge Distillation Representation Learning
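
A common instantiation of such a joint task uses the four image rotations as the self-supervised labels, giving a joint label space of size `num_classes * 4`. The sketch below assumes that instantiation; the paper's exact augmented task may differ.

```python
import torch

def augmented_batch(x, y):
    """Build the joint class-rotation label space: with K classes and the
    four rotations 0/90/180/270, each image gets a label in [0, 4K).
    The classifier head then needs K * 4 outputs."""
    xs, ys = [], []
    for r in range(4):
        xs.append(torch.rot90(x, r, dims=(2, 3)))  # rotate the whole batch
        ys.append(y * 4 + r)                       # joint class-rotation label
    return torch.cat(xs), torch.cat(ys)
```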

Mutual Contrastive Learning for Visual Representation Learning

1 code implementation • 26 Apr 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning.

Contrastive Learning Few-Shot Learning +5
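
A minimal sketch of one cross-network contrastive term, where embeddings of the same image produced by two peer networks form positive pairs and other images in the batch serve as negatives. MCL aggregates several such interactive terms; `tau` is an illustrative temperature.

```python
import torch
import torch.nn.functional as F

def cross_network_infonce(z1, z2, tau=0.1):
    """Symmetric InfoNCE between the embedding batches of two peer
    networks: matching rows are positives, all other rows negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # (B, B) cross-network similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```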

GHFP: Gradually Hard Filter Pruning

no code implementations • 6 Nov 2020 • Linhang Cai, Zhulin An, Yongjun Xu

Filter pruning is widely used to reduce the computational cost of deep learning, enabling the deployment of Deep Neural Networks (DNNs) on resource-limited devices.

Softer Pruning, Incremental Regularization

no code implementations • 19 Oct 2020 • Linhang Cai, Zhulin An, Chuanguang Yang, Yongjun Xu

Network pruning is widely used to compress Deep Neural Networks (DNNs).

Network Pruning

Multi-view Contrastive Learning for Online Knowledge Distillation

1 code implementation • 7 Jun 2020 • Chuanguang Yang, Zhulin An, Yongjun Xu

Previous Online Knowledge Distillation (OKD) often mutually exchanges probability distributions but neglects useful representational knowledge.

Classification Contrastive Learning +4
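
The probability-exchange baseline the snippet refers to is the deep-mutual-learning style objective sketched below; the paper's contribution adds contrastive representational terms on top of it. The temperature `T` is illustrative.

```python
import torch.nn.functional as F

def mutual_kl(logits_a, logits_b, T=3.0):
    """Each peer mimics the other's softened class distribution; T**2
    rescales the gradient magnitude as in standard distillation."""
    la = F.log_softmax(logits_a / T, dim=1)
    lb = F.log_softmax(logits_b / T, dim=1)
    pa = F.softmax(logits_a / T, dim=1)
    pb = F.softmax(logits_b / T, dim=1)
    return (F.kl_div(la, pb.detach(), reduction="batchmean")
            + F.kl_div(lb, pa.detach(), reduction="batchmean")) * T * T
```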

Localizing Interpretable Multi-scale informative Patches Derived from Media Classification Task

no code implementations • 31 Jan 2020 • Chuanguang Yang, Zhulin An, Xiaolong Hu, Hui Zhu, Yongjun Xu

Deep convolutional neural networks (CNNs) rely on ever-wider receptive fields (RF) and more complex non-linearity to achieve state-of-the-art performance, while becoming increasingly difficult to interpret in terms of how relevant patches contribute to the final prediction.

General Classification Image Classification

Towards More Efficient and Effective Inference: The Joint Decision of Multi-Participants

no code implementations • 19 Jan 2020 • Hui Zhu, Zhulin An, Kaiqiang Xu, Xiaolong Hu, Yongjun Xu

Existing approaches that improve the performance of convolutional neural networks by optimizing local architectures or deepening the networks tend to increase model size significantly.

DRNet: Dissect and Reconstruct the Convolutional Neural Network via Interpretable Manners

no code implementations • 20 Nov 2019 • Xiaolong Hu, Zhulin An, Chuanguang Yang, Hui Zhu, Kaiqiang Xu, Yongjun Xu

For VGG16 pre-trained on ImageNet, our method gains an average accuracy improvement of 14.29% on two-class sub-tasks.

Rethinking the Number of Channels for the Convolutional Neural Network

no code implementations • 4 Sep 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Xiaolong Hu, Kaiqiang Xu, Yongjun Xu

In this paper, we propose a method for efficient automatic architecture search that is specialized to the widths of networks rather than the connections of the neural architecture.

Neural Architecture Search

Gated Convolutional Networks with Hybrid Connectivity for Image Classification

1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu

We propose a simple yet effective method to reduce the redundancy of DenseNet, substantially decreasing the number of stacked modules by replacing the original bottleneck with our SMG module, which is augmented by a local residual connection.

Adversarial Defense Classification +2

Multi-Objective Pruning for CNNs Using Genetic Algorithm

no code implementations • 2 Jun 2019 • Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu

In this work, we propose a heuristic genetic algorithm (GA) for pruning convolutional neural networks (CNNs) according to the multi-objective trade-off among error, computation and sparsity.
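
A toy sketch of how such a multi-objective fitness might be scalarized for a binary keep/prune filter mask; the weights and mutation rate are hypothetical, and `eval_error`/`flops` stand in for the paper's actual measurements.

```python
import random

def fitness(mask, eval_error, flops, weights=(1.0, 0.5, 0.5)):
    """Hypothetical scalarized fitness: lower error and computation are
    rewarded, as is higher sparsity (fraction of pruned filters)."""
    w_err, w_comp, w_sp = weights
    sparsity = 1.0 - sum(mask) / len(mask)
    return -(w_err * eval_error(mask) + w_comp * flops(mask)) + w_sp * sparsity

def mutate(mask, p=0.05):
    """Flip each keep/prune bit with small probability p."""
    return [b ^ int(random.random() < p) for b in mask]
```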

EENA: Efficient Evolution of Neural Architecture

1 code implementation • 10 May 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Kaiqiang Xu, Erhu Zhao, Yongjun Xu

The latest algorithms for automatic neural architecture search perform remarkably well but are largely directionless in exploring the search space and computationally expensive in training every intermediate architecture.

General Classification Neural Architecture Search
