Search Results for author: Boyu Diao

Found 5 papers, 1 paper with code

E2Net: Resource-Efficient Continual Learning with Elastic Expansion Network

1 code implementation • 28 Sep 2023 • Ruiqi Liu, Boyu Diao, Libo Huang, Zhulin An, Yongjun Xu

In E2Net, we propose Representative Network Distillation to identify the representative core subnet, assessing parameter quantity and output similarity with the working network, and to distill analogous subnets within the working network, which mitigates reliance on rehearsal buffers and facilitates knowledge transfer across previous tasks.

Continual Learning • Transfer Learning
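As a rough illustration of the idea in the abstract above (not code from the E2Net paper), the sketch below scores candidate core subnets by parameter count and output similarity to the working network, then distills the selected subnet's outputs into the working network. The `alpha` trade-off weight, the cosine-similarity measure, and the distillation temperature are all assumptions.

```python
import torch
import torch.nn.functional as F

def score_subnet(subnet, working_net, x, alpha=0.5):
    """Score a candidate core subnet: higher output similarity to the working
    network and fewer parameters are both rewarded.
    (alpha is an assumed trade-off weight, not taken from the paper.)"""
    n_params = sum(p.numel() for p in subnet.parameters())
    total = sum(p.numel() for p in working_net.parameters())
    with torch.no_grad():
        sim = F.cosine_similarity(subnet(x).flatten(1),
                                  working_net(x).flatten(1)).mean()
    return alpha * sim - (1 - alpha) * (n_params / total)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard KL-based logit distillation, here used to transfer knowledge
    from the selected representative subnet (teacher) to the working network."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
```

In use, one would score every candidate subnet on a small batch, keep the highest-scoring one as the teacher, and add `distillation_loss` to the training objective of the working network.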

eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation

no code implementations • 20 Apr 2023 • Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu

Most successful CIL methods incrementally train a feature extractor with the aid of stored exemplars, or estimate the feature distribution with the stored prototypes.

Class Incremental Learning • Incremental Learning
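For background on the prototype-based approach mentioned in the abstract above, here is a minimal, generic sketch of storing class-mean prototypes and classifying by nearest prototype. This is a common class-incremental-learning baseline, not eTag's embedding distillation or task-oriented generation.

```python
import torch

class PrototypeStore:
    """Keeps one mean feature vector per seen class (a common CIL baseline)."""
    def __init__(self):
        self.prototypes = {}  # class id -> mean feature vector

    def update(self, features, labels):
        # features: (N, D) tensor, labels: (N,) tensor of class ids
        for c in labels.unique():
            self.prototypes[int(c)] = features[labels == c].mean(dim=0)

    def classify(self, features):
        # Nearest-prototype prediction over all classes seen so far.
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])  # (C, D)
        dists = torch.cdist(features, protos)                        # (N, C)
        return torch.tensor(classes)[dists.argmin(dim=1)]
```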

Towards Understanding the Generalization of Deepfake Detectors from a Game-Theoretical View

no code implementations • ICCV 2023 • Kelu Yao, Jin Wang, Boyu Diao, Chao Li

Deepfake detectors encode multi-order interactions among visual concepts, in which the low-order interactions usually have substantially negative contributions to deepfake detection.

DeepFake Detection • Face Swapping
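To make "multi-order interactions" concrete, below is a hedged Monte-Carlo sketch of estimating the order-m interaction between two image patches via masked inference, following the standard game-theoretical definition. The zero-masking scheme, the patch layout, and the assumption that `model` returns a single scalar score are illustrative choices, not the paper's protocol.

```python
import random
import torch

def interaction_order_m(model, x, patches, i, j, m, n_samples=32):
    """Monte-Carlo estimate of the order-m interaction between patches i and j:
        E_{|S|=m} [ v(S+{i,j}) - v(S+{i}) - v(S+{j}) + v(S) ],
    where v(S) is the score of the image with only the patches in S visible.
    Assumes `model` maps a (1, C, H, W) tensor to a single real-valued score
    and that zeroing pixels is an acceptable way to hide a patch."""
    def v(keep):
        masked = torch.zeros_like(x)                 # x: (C, H, W)
        for p in keep:
            r0, r1, c0, c1 = patches[p]              # patch = (row0, row1, col0, col1)
            masked[:, r0:r1, c0:c1] = x[:, r0:r1, c0:c1]
        with torch.no_grad():
            return model(masked.unsqueeze(0)).item()

    others = [p for p in range(len(patches)) if p not in (i, j)]
    est = 0.0
    for _ in range(n_samples):
        S = random.sample(others, m)                 # random context of size m
        est += v(S + [i, j]) - v(S + [i]) - v(S + [j]) + v(S)
    return est / n_samples
```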

PFGDF: Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration

no code implementations • 23 Jun 2020 • Jianrong Xu, Boyu Diao, Bifeng Cui, Kang Yang, Chao Li, Yongjun Xu

Deep learning has achieved impressive results in many areas, but deployment on edge intelligent devices is still very slow.

Model Compression
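The abstract excerpt above gives only motivation, so, purely as a loose interpretation of the title "Pruning Filter via Gaussian Distribution Feature" (not the paper's actual criterion), the sketch below fits a Gaussian to a per-filter statistic and marks filters inside a k-sigma band as redundant. Both the L1-norm statistic and the k-sigma band are assumptions.

```python
import torch

def prune_mask_gaussian(conv_weight, k=1.0):
    """conv_weight: (out_channels, in_channels, kH, kW).
    Fit a Gaussian to each filter's L1 norm and mark as prunable the filters
    whose norm lies within mean +/- k*std (treated as redundant).
    The L1-norm statistic and the k-sigma band are illustrative assumptions."""
    norms = conv_weight.abs().flatten(1).sum(dim=1)   # one scalar per output filter
    mu, sigma = norms.mean(), norms.std()
    keep = (norms < mu - k * sigma) | (norms > mu + k * sigma)
    return keep  # boolean mask over output channels; False = prune
```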

Multi-Objective Pruning for CNNs Using Genetic Algorithm

no code implementations • 2 Jun 2019 • Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu

In this work, we propose a heuristic genetic algorithm (GA) for pruning convolutional neural networks (CNNs) according to the multi-objective trade-off among error, computation and sparsity.
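As a hedged sketch of the kind of procedure the abstract describes, the toy genetic algorithm below evolves binary channel-keep masks under a user-supplied fitness that should combine error, computation, and sparsity. The population size, operators, and scalarized fitness are assumptions rather than the paper's exact multi-objective setup.

```python
import random

def evolve_masks(n_channels, fitness, pop_size=20, generations=30,
                 crossover_p=0.7, mutate_p=0.02):
    """Toy GA over binary channel-keep masks (1 = keep, 0 = prune).
    `fitness(mask)` should return a scalar to minimize, e.g. a weighted sum
    of validation error, FLOPs, and (1 - sparsity); its exact form is assumed."""
    pop = [[random.randint(0, 1) for _ in range(n_channels)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)              # lower fitness is better
        parents = scored[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_channels) if random.random() < crossover_p else 0
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [bit ^ (random.random() < mutate_p) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

A caller would wrap pruned-network evaluation inside `fitness`; a faithful multi-objective variant would instead maintain a Pareto front over error, computation, and sparsity rather than a single scalarized objective.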
