Search Results for author: Yasutoshi Ida

Found 12 papers, 2 papers with code

Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers

no code implementations • 14 Mar 2023 • Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara

Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions that have been constructed from data samples on two different domains.
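As a point of reference for what regularized discrete OT computes, the sketch below runs plain entropy-regularized OT with Sinkhorn iterations on a toy pair of histograms. It only illustrates the general setting; the group-sparse regularizers and the fast solver proposed in this paper are not shown, and all names and values are made up for the example.

```python
# Minimal sketch of entropy-regularized discrete OT via Sinkhorn iterations.
# Illustrative only: the paper above uses group-sparse regularizers and a
# faster solver, neither of which appears here.
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """a, b: source/target histograms; C: cost matrix; reg: regularization strength."""
    K = np.exp(-C / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # scale columns to match the target marginal
        u = a / (K @ v)               # scale rows to match the source marginal
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, np.sum(P * C)           # plan and transport cost

# Toy example: two small empirical distributions on the real line.
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.2, 1.2, 5)
C = (x[:, None] - y[None, :]) ** 2
a = np.full(5, 0.2)
b = np.full(5, 0.2)
P, cost = sinkhorn(a, b, C)
print(cost)
```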

Unsupervised Domain Adaptation

Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks

no code implementations • 4 Oct 2022 • Kentaro Ohno, Sekitoshi Kanai, Yasutoshi Ida

We prove that the gradient vanishing of the gate function can be mitigated by accelerating the convergence of the saturating function, i.e., making the output of the function converge to 0 or 1 faster.
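To make the idea of a faster-saturating gate concrete, the sketch below compares a standard sigmoid gate with a hypothetical gate sigmoid(x + x**3) whose output approaches 0 or 1 more quickly as |x| grows. The composed form is an assumption chosen purely for illustration, not necessarily the gate proposed in the paper.

```python
# Illustrative comparison (not the paper's exact gate): how quickly a standard
# sigmoid gate and a hypothetical faster-saturating gate approach 0/1.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fast_gate(x):
    # Hypothetical gate that saturates faster than sigmoid for large |x|.
    return sigmoid(x + x ** 3)

print("x      sigmoid   fast_gate")
for x in np.linspace(-4, 4, 9):
    print(f"{x:5.1f}  {sigmoid(x):8.5f}  {fast_gate(x):9.5f}")
```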

Computational Efficiency, Time Series, +1

Pruning Randomly Initialized Neural Networks with Iterative Randomization

1 code implementation • NeurIPS 2021 • Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro Inoue

Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis.

Smoothness Analysis of Adversarial Training

no code implementations • 2 Mar 2021 • Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida

We reveal that the constraint of adversarial attacks is one cause of the non-smoothness, and that the smoothness depends on the type of constraint.

Adversarial Robustness

Constraining Logits by Bounded Function for Adversarial Robustness

no code implementations • 6 Oct 2020 • Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida

We theoretically and empirically reveal that small logits obtained by adding a common bounded activation function, e.g., the hyperbolic tangent, do not improve adversarial robustness, since the input vectors of the function (pre-logit vectors) can have large norms.
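A tiny numeric illustration of that observation, under toy values chosen here (not taken from the paper): squashing the pre-logit vector with tanh bounds the logits, but the resulting loss is almost unchanged as the pre-logit norm grows, so nothing in the objective discourages large pre-logit norms.

```python
# Toy illustration: tanh bounds the logit values, but does not penalize the norm
# of the pre-logit vector, so large-norm pre-logits give essentially the same
# (saturated) logits and the same loss. Values are illustrative only.
import numpy as np

def cross_entropy(logits, label):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

pre_logits = np.array([2.0, -1.0, -0.5])
for scale in (1, 10, 100):            # growing pre-logit norm
    z = scale * pre_logits
    bounded = np.tanh(z)               # "small" logits confined to [-1, 1]
    print(scale, np.linalg.norm(z), np.round(bounded, 3),
          round(cross_entropy(bounded, 0), 4))
```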

Adversarial Robustness

Fast Sparse Group Lasso

no code implementations • NeurIPS 2019 • Yasutoshi Ida, Yasuhiro Fujiwara, Hisashi Kashima

Block Coordinate Descent is a standard approach for obtaining the parameters of Sparse Group Lasso; it iteratively updates the parameters of each parameter group.
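For intuition, here is a plain blockwise proximal update for a standard Sparse Group Lasso objective, iterating over the parameter groups one at a time. This baseline sketch assumes the usual formulation and deliberately omits the tricks that make the method in this paper fast (e.g., skipping groups and parameters that are guaranteed to stay zero).

```python
# Simplified blockwise (group-wise) proximal coordinate descent for
# 0.5*||y - X b||^2 + l1*||b||_1 + l2*sum_g ||b_g||_2.
# A plain baseline for intuition, NOT the accelerated method of the paper.
import numpy as np

def sgl_prox(u, t, l1, l2):
    # prox of t*(l1*||.||_1 + l2*||.||_2): soft-threshold, then group shrink.
    u = np.sign(u) * np.maximum(np.abs(u) - t * l1, 0.0)
    norm = np.linalg.norm(u)
    return np.zeros_like(u) if norm == 0 else max(0.0, 1.0 - t * l2 / norm) * u

def sparse_group_lasso(X, y, groups, l1=0.1, l2=0.1, n_iter=200):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for g in groups:                        # update one parameter group at a time
            Xg = X[:, g]
            step = 1.0 / (np.linalg.norm(Xg, 2) ** 2 + 1e-12)
            grad = -Xg.T @ (y - X @ beta)       # gradient of the smooth part w.r.t. beta_g
            beta[g] = sgl_prox(beta[g] - step * grad, step, l1, l2)
    return beta

# Toy problem: only the first group carries signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
true_beta = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_beta + 0.01 * rng.standard_normal(50)
groups = [np.arange(0, 3), np.arange(3, 6)]
print(np.round(sparse_group_lasso(X, y, groups), 3))
```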

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

no code implementations • 19 Sep 2019 • Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi

Furthermore, we reveal that CNNs trained with Absum are more robust than those trained with standard regularization methods against transferred attacks, owing to the reduced common sensitivity, and against high-frequency noise.

Adversarial Attack, Adversarial Robustness

Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining

no code implementations • 10 Jun 2019 • Yasutoshi Ida, Yasuhiro Fujiwara

Our key idea is to introduce a priority term that identifies the importance of a layer; we can select unimportant layers according to the priority and erase them after training.
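A conceptual sketch of that selection step is shown below: rank layers by a priority score, erase the lowest-ranked ones, and retrain what remains. The priority values, names, and helper function are hypothetical placeholders for illustration, not the paper's actual priority term.

```python
# Conceptual sketch of priority-based layer selection: score each residual block
# with a "priority", drop the lowest-priority blocks, then retrain the smaller
# network. Priorities and names below are hypothetical placeholders.
def prune_layers(layer_priorities, n_erase):
    """layer_priorities: dict mapping layer name -> importance score."""
    ranked = sorted(layer_priorities, key=layer_priorities.get)
    to_erase = ranked[:n_erase]                 # least important layers
    kept = [name for name in layer_priorities if name not in to_erase]
    return kept, to_erase

# Hypothetical per-block priorities for a small ResNet-like model.
priorities = {"block1": 0.92, "block2": 0.15, "block3": 0.48, "block4": 0.07}
kept, erased = prune_layers(priorities, n_erase=2)
print("keep:", kept)        # ['block1', 'block3']
print("erase:", erased)     # ['block4', 'block2']
# After erasing, the remaining network would be retrained to recover accuracy.
```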

Model Compression
