Search Results for author: Daiki Chijiwa

Found 7 papers, 2 papers with code

Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks

no code implementations • 15 Mar 2024 • Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa

AdaRand minimizes the gap between feature vectors and random reference vectors sampled from class-conditional Gaussian distributions.
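A minimal sketch of that regularizer, assuming a PyTorch setup; the `class_means` tensor, the identity covariance scaled by `sigma`, the squared-error gap, and the weight `lam` in the usage line are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adarand_penalty(features, labels, class_means, sigma=1.0):
    # Draw one random reference vector per example from a class-conditional
    # Gaussian N(mu_y, sigma^2 I), then penalize the gap between each
    # feature vector and its reference. Identity covariance and the
    # squared-error gap are assumptions for this sketch.
    mu = class_means[labels]                   # (batch, feat_dim) per-class means
    refs = mu + sigma * torch.randn_like(mu)   # sampled random reference vectors
    return F.mse_loss(features, refs)

# Usage sketch: total_loss = task_loss + lam * adarand_penalty(feats, y, class_means)
```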

Transferring Learning Trajectories of Neural Networks

no code implementations • 23 May 2023 • Daiki Chijiwa

Training deep neural networks (DNNs) is computationally expensive, which is especially problematic when performing many duplicated or similar training runs, for example in model ensembling or when fine-tuning pre-trained models.

Knowledge Distillation

Transfer Learning with Pre-trained Conditional Generative Models

no code implementations • 27 Apr 2022 • Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima

To transfer source knowledge without these assumptions, we propose a transfer learning method that uses deep generative models and consists of two stages: pseudo pre-training (PP) and pseudo semi-supervised learning (P-SSL), sketched below.

Knowledge Distillation • Transfer Learning
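A rough sketch of the two stages described above, assuming a PyTorch setup and a hypothetical `generator.sample(labels)` interface for the pre-trained conditional generative model; the pseudo-labeling objective in stage 2 is a stand-in for whatever semi-supervised method the paper actually uses.

```python
import torch
import torch.nn.functional as F

def pseudo_pretraining(model, generator, num_classes, steps=1000, batch=64):
    # Stage 1 (PP): pre-train the target model on pseudo samples drawn
    # from a pre-trained conditional generative model. `generator.sample`
    # is an assumed interface, not the paper's actual API.
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for _ in range(steps):
        y = torch.randint(0, num_classes, (batch,))
        x = generator.sample(y)                 # conditional pseudo samples
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def pseudo_semi_supervised(model, labeled_loader, generator, num_classes, lam=1.0):
    # Stage 2 (P-SSL): treat generated samples as the unlabeled pool of a
    # semi-supervised objective (sketched here as simple pseudo-labeling).
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for x, y in labeled_loader:
        u = generator.sample(torch.randint(0, num_classes, (x.size(0),)))
        with torch.no_grad():
            pseudo_y = model(u).argmax(dim=1)   # pseudo labels from the model itself
        loss = F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(u), pseudo_y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```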

Pruning Randomly Initialized Neural Networks with Iterative Randomization

1 code implementation • NeurIPS 2021 • Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue

Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis.
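A minimal sketch of the idea in the title, assuming an edge-popup-style setup in PyTorch where frozen random weights are selected by learned scores; the keep ratio, the score mechanism, and the Gaussian re-randomization distribution are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def topk_mask(scores, keep_ratio=0.5):
    # Keep the top-scoring fraction of connections (the scores are learned
    # separately; their training loop is omitted in this sketch).
    k = int(keep_ratio * scores.numel())
    threshold = scores.flatten().kthvalue(scores.numel() - k + 1).values
    return (scores >= threshold).float()

def prune_random_weights(weight, scores, keep_ratio=0.5):
    # Prune a frozen, randomly initialized weight tensor by masking it.
    mask = topk_mask(scores, keep_ratio)
    return weight * mask, mask

def iterative_randomization_step(weight, mask, std=0.05):
    # Redraw the currently pruned weights from the (assumed Gaussian)
    # initialization distribution, so the next pruning round can choose
    # from a fresh pool of random values.
    fresh = torch.randn_like(weight) * std
    return torch.where(mask.bool(), weight, fresh)
```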
