Search Results for author: Kakeru Mitsuno

Found 3 papers, 1 paper with code

Channel Planting for Deep Neural Networks using Knowledge Distillation

no code implementations • 4 Nov 2020 • Kakeru Mitsuno, Yuichiro Nomura, Takio Kurita

Our planting searches for an optimal network architecture with a smaller number of parameters by incrementally adding channels to the layers of an initial network while keeping the earlier-trained parameters fixed, thereby improving performance (see the sketch below).

Knowledge Distillation • Network Pruning • +1
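The abstract above describes growing a network by planting new channels while freezing the already-trained ones. Below is a minimal PyTorch sketch of that freezing-while-widening step; the helper name `plant_channels` and the gradient-mask freezing approach are illustrative assumptions, not the authors' implementation (the paper additionally guides the new channels with knowledge distillation from a teacher network).

```python
import torch
import torch.nn as nn

def plant_channels(conv: nn.Conv2d, extra: int) -> nn.Conv2d:
    """Return a wider copy of `conv` with `extra` new output channels.

    Old filters are copied over and frozen via a gradient mask, so only the
    newly planted channels receive updates during subsequent training.
    (Downstream layers must also be widened to accept the extra channels;
    that step is omitted here for brevity.)
    """
    wider = nn.Conv2d(conv.in_channels, conv.out_channels + extra,
                      conv.kernel_size, conv.stride, conv.padding,
                      bias=conv.bias is not None)
    with torch.no_grad():
        wider.weight[:conv.out_channels] = conv.weight
        if conv.bias is not None:
            wider.bias[:conv.out_channels] = conv.bias

    # Zero the gradients of the earlier-trained filters on every backward pass.
    mask = torch.zeros_like(wider.weight)
    mask[conv.out_channels:] = 1.0
    wider.weight.register_hook(lambda g: g * mask)
    if wider.bias is not None:
        bias_mask = torch.zeros_like(wider.bias)
        bias_mask[conv.out_channels:] = 1.0
        wider.bias.register_hook(lambda g: g * bias_mask)
    return wider

# Usage: replace a trained layer with a wider one, then continue training.
# conv = plant_channels(conv, extra=8)
```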

Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks

no code implementations • 4 Nov 2020 • Kakeru Mitsuno, Takio Kurita

It is shown that the proposed method can remove more than 50% of the parameters of ResNet for CIFAR-10 with only a 0.3% decrease in test accuracy.
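Once a group-sparse penalty (see the sketch under the next paper) has driven many filter norms toward zero during training, pruning reduces to thresholding those norms. A minimal PyTorch sketch of that step follows; the function name `prune_small_filters` and the threshold value are illustrative assumptions, not the authors' procedure.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_small_filters(model: nn.Module, threshold: float = 1e-3) -> None:
    """Zero out whole Conv2d filters whose L2 norm fell below `threshold`."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            norms = m.weight.detach().flatten(1).norm(dim=1)  # per-filter L2 norms
            keep = (norms >= threshold).float()               # 1 = keep, 0 = prune
            mask = keep[:, None, None, None].expand_as(m.weight)
            prune.custom_from_mask(m, name="weight", mask=mask)
```

Masked filters can afterwards be physically removed to realize the parameter reduction reported above.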

Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks

1 code implementation • 9 Apr 2020 • Kakeru Mitsuno, Junichi Miyao, Takio Kurita

As a result, we can prune weights more appropriately according to the structure of the network and the number of channels while maintaining high performance.
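The core idea is a group-lasso penalty applied at more than one level of the weight hierarchy, so that structured groups (e.g., whole filters or channels) are driven to zero together. Below is a minimal sketch of such a two-level penalty in PyTorch; the exact grouping and weighting used in the paper may differ (the authors' actual code is linked from the paper page), and `lam` is an illustrative regularization weight.

```python
import torch
import torch.nn as nn

def hierarchical_group_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Group-lasso terms at two levels of the convolution weight hierarchy."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight  # shape: (out_channels, in_channels, kH, kW)
            penalty = penalty + w.flatten(1).norm(dim=1).sum()                  # group per output filter
            penalty = penalty + w.transpose(0, 1).flatten(1).norm(dim=1).sum()  # group per input channel
    return lam * penalty

# Usage during training:
# loss = criterion(output, target) + hierarchical_group_penalty(model)
```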
