Search Results for author: Binhang Qi

Found 3 papers, 3 papers with code

Modularizing while Training: A New Paradigm for Modularizing DNN Models

1 code implementation · 15 Jun 2023 · Binhang Qi, Hailong Sun, Hongyu Zhang, Ruobing Zhao, Xiang Gao

In this paper, we propose a novel approach that incorporates modularization into the model training process, i.e., modularizing-while-training (MwT).
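
The abstract only names the idea; as a rough illustration of what "modularizing while training" could look like (not the paper's actual MwT formulation), the sketch below adds a hypothetical modularity regularizer to a standard cross-entropy loss so that each class learns to rely on a sparse, largely non-overlapping subset of channels. Names such as `class_masks` and `modularity_penalty` are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConvNet(nn.Module):
    """Toy CNN whose pooled channels are gated by learnable per-class masks."""
    def __init__(self, num_classes=10, channels=16):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.head = nn.Linear(channels, num_classes)
        # Hypothetical per-class channel relevance masks, learned alongside the weights.
        self.class_masks = nn.Parameter(torch.zeros(num_classes, channels))

    def forward(self, x, y=None):
        feat = F.relu(self.conv(x)).mean(dim=(2, 3))          # (B, channels) after global pooling
        if y is not None:                                      # during training, gate features by
            feat = feat * torch.sigmoid(self.class_masks[y])   # the target class's mask
        return self.head(feat)

def modularity_penalty(masks):
    """Illustrative regularizer: sparse masks per class, low overlap across classes."""
    m = torch.sigmoid(masks)
    sparsity = m.mean()
    overlap = (m @ m.t() * (1 - torch.eye(m.size(0)))).mean()
    return sparsity + overlap

model = MaskedConvNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x, y), y) + 0.1 * modularity_penalty(model.class_masks)
loss.backward()
opt.step()
```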

Reusing Deep Neural Network Models through Model Re-engineering

1 code implementation · 1 Apr 2023 · Binhang Qi, Hailong Sun, Xiang Gao, Hongyu Zhang, Zhaotian Li, Xudong Liu

Prior approaches to DNN model reuse have two main limitations: 1) reusing the entire model when only a small part of its functionality (a subset of labels) is required incurs substantial overhead (e.g., computational and time costs during inference), and 2) the reused model's defects and weaknesses are inherited, putting the new system under threat of security attacks.
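
As a rough sketch of the reuse scenario described above (not the paper's re-engineering algorithm, which also removes weights irrelevant to the target problem inside the network), the snippet below keeps only the output units for the labels a new system actually needs. The toy backbone, label indices, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained DNN with a 1000-way classification head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
full_head = nn.Linear(128, 1000)

# Hypothetical scenario: the new system only needs 3 of the 1000 labels.
needed = torch.tensor([7, 42, 300])

# Reused model: same backbone, but a head sliced down to the needed labels.
slim_head = nn.Linear(128, len(needed))
with torch.no_grad():
    slim_head.weight.copy_(full_head.weight[needed])
    slim_head.bias.copy_(full_head.bias[needed])

reused = nn.Sequential(backbone, slim_head)
logits = reused(torch.randn(4, 3, 32, 32))   # shape (4, 3) instead of (4, 1000)
```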

Patching Weak Convolutional Neural Network Models through Modularization and Composition

1 code implementation · 11 Sep 2022 · Binhang Qi, Hailong Sun, Xiang Gao, Hongyu Zhang

To patch a weak CNN model that performs unsatisfactorily on a target class (TC), we compose the weak CNN model with the corresponding module obtained from a strong CNN model.
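
A minimal sketch of this composition idea, assuming the strong model's module has already been extracted as a binary TC-vs-rest classifier: trust the module's verdict on the target class and fall back to the weak model for everything else. The `tc_module`, `patched_predict`, and threshold below are illustrative assumptions, not the paper's interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

TC = 3  # hypothetical index of the target class the weak model handles poorly

weak_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # 10-way weak classifier
tc_module = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))    # stands in for a TC module
                                                                      # taken from a strong model

@torch.no_grad()
def patched_predict(x, threshold=0.5):
    """Compose the weak model with the TC module: use the module's decision on the
    target class and the weak model's prediction among the remaining classes."""
    is_tc = F.softmax(tc_module(x), dim=1)[:, 1]         # module's confidence that the input is TC
    weak_logits = weak_model(x)
    weak_logits[:, TC] = float("-inf")                    # discard the weak model's unreliable TC score
    fallback = weak_logits.argmax(dim=1)
    return torch.where(is_tc > threshold, torch.full_like(fallback, TC), fallback)

preds = patched_predict(torch.randn(4, 3, 32, 32))        # tensor of predicted class indices
```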
