no code implementations • 21 Jul 2022 • Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Kentaro Ohno, Yasutoshi Ida
This paper proposes a new loss function for adversarial training.
no code implementations • 2 Mar 2021 • Sekitoshi Kanai, Masanori Yamada, Hiroshi Takahashi, Yuki Yamanaka, Yasutoshi Ida
We reveal that the constraint of adversarial attacks is one cause of the non-smoothness, and that the smoothness depends on the type of constraint.
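As an illustration of what "constraint" means here, the sketch below is a standard projected-gradient attack in PyTorch (a toy example of ours, not code from the paper; `model` and `loss_fn` are placeholder names): the perturbation is clipped back onto an L-infinity ball after every step, and that projection is the constraint.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=0.03, alpha=0.007, steps=10):
    """Projected gradient descent under an L-infinity constraint.

    The clamp after every ascent step projects the perturbation back onto
    the ball ||delta||_inf <= eps; this projection is the attack
    constraint discussed above.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the loss
            delta.clamp_(-eps, eps)             # projection onto the constraint set
        delta.grad.zero_()
    return (x + delta).detach()
```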
no code implementations • 5 Feb 2021 • Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai
We theoretically and experimentally confirm that, in the linear logistic regression model, the weight loss landscape becomes sharper as the magnitude of the adversarial noise used in adversarial training increases.
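For intuition, a toy NumPy sketch (our own illustration, not the paper's experiment): for a linear logistic model with an L2-bounded perturbation of size eps, the worst-case loss has the closed form log(1 + exp(-y w·x + eps ||w||_2)), and one can probe how the loss changes along a random weight direction as eps grows.

```python
import numpy as np

def adv_logistic_loss(w, X, y, eps):
    """Worst-case logistic loss of a linear model under ||delta||_2 <= eps.

    For labels y in {-1, +1}, the worst perturbation shifts the margin by
    eps * ||w||_2, giving log(1 + exp(-y * w.x + eps * ||w||_2)).
    """
    margins = y * (X @ w) - eps * np.linalg.norm(w)
    return np.mean(np.log1p(np.exp(-margins)))

# Probe the weight loss landscape along a random direction d for several eps.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X @ rng.normal(size=10))
w = rng.normal(size=10)
d = rng.normal(size=10)
d /= np.linalg.norm(d)
for eps in (0.0, 0.5, 1.0):
    gap = adv_logistic_loss(w + 0.1 * d, X, y, eps) - adv_logistic_loss(w, X, y, eps)
    print(f"eps={eps:.1f}  loss change along d: {gap:+.4f}")
```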
no code implementations • 6 Oct 2020 • Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida
We theoretically and empirically reveal that making logits small by applying a common activation function, e.g., the hyperbolic tangent, does not improve adversarial robustness, since the input vectors of the function (pre-logit vectors) can have large norms.
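A toy numerical illustration of this point (our own example, not code from the paper; the weight and inputs are chosen purely for illustration): squashing the logit with tanh keeps it in (-1, 1), but if the pre-logit has a large magnitude, a small input perturbation can still flip the prediction.

```python
import numpy as np

# Illustrative 1-D example: a large weight gives a large pre-logit magnitude.
w = 50.0                        # large weight (illustrative value)
x = 0.02                        # clean input, pre-logit w * x = 1.0
for dx in (0.0, 0.05):          # small input perturbation
    pre_logit = w * (x - dx)    # the perturbation is amplified by w
    logit = np.tanh(pre_logit)  # bounded in (-1, 1), yet its sign flips
    print(f"dx={dx:.2f}  pre_logit={pre_logit:+.1f}  logit={logit:+.3f}")
```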
no code implementations • 8 Jul 2020 • Matthias Raddant, Hiroshi Takahashi
We investigate the network of Japanese corporate boards and its influence on the appointment of female board members.
no code implementations • 26 Mar 2019 • Yuki Yamanaka, Tomoharu Iwata, Hiroshi Takahashi, Masanori Yamada, Sekitoshi Kanai
Since our approach learns to reconstruct normal data points accurately while failing to reconstruct known and unknown anomalies, it can accurately discriminate both known and unknown anomalies from normal data points.
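The discrimination step itself is the usual reconstruction-error test; a minimal sketch follows (the `autoencoder.predict` interface and the percentile threshold are placeholders, not the paper's code).

```python
import numpy as np

def anomaly_scores(autoencoder, X):
    """Mean squared reconstruction error per point; a high score suggests an anomaly."""
    recon = autoencoder.predict(X)          # placeholder reconstruction interface
    return np.mean((X - recon) ** 2, axis=1)

# Choose a threshold from normal validation data, then flag test points:
# threshold = np.percentile(anomaly_scores(ae, X_val_normal), 95)
# is_anomaly = anomaly_scores(ae, X_test) > threshold
```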
1 code implementation • 14 Sep 2018 • Hiroshi Takahashi, Tomoharu Iwata, Yuki Yamanaka, Masanori Yamada, Satoshi Yagi
However, the KL divergence with the aggregated posterior cannot be calculated in closed form, which prevents us from using this optimal prior.
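To see the difficulty: with a standard normal prior the KL term of a Gaussian encoder is available in closed form, whereas the aggregated posterior q(z) = (1/N) Σ_n q(z|x_n) is a mixture, so log q(z) and the KL term must be approximated. The sketch below contrasts the two, using a plain Monte Carlo estimator for the aggregated-posterior case (our own illustrative estimator, not the method proposed in the paper).

```python
import numpy as np
from scipy.special import logsumexp

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per data point."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def kl_to_aggregated_posterior_mc(mu, logvar, n_samples=64, seed=0):
    """Monte Carlo estimate of KL( q(z|x_i) || q(z) ), q(z) = (1/N) sum_n q(z|x_n).

    log q(z) is the log of a mixture over all N encoder distributions, so
    there is no closed form; we sample z ~ q(z|x_i) and average.
    """
    rng = np.random.default_rng(seed)
    N, D = mu.shape
    std = np.exp(0.5 * logvar)
    kls = np.zeros(N)
    for i in range(N):
        z = mu[i] + std[i] * rng.standard_normal((n_samples, D))   # z ~ q(z|x_i)
        log_qzx = -0.5 * np.sum(((z - mu[i]) / std[i])**2 + logvar[i] + np.log(2*np.pi), axis=1)
        diff = (z[:, None, :] - mu[None, :, :]) / std[None, :, :]
        log_comp = -0.5 * np.sum(diff**2 + logvar[None, :, :] + np.log(2*np.pi), axis=2)
        log_qz = logsumexp(log_comp, axis=1) - np.log(N)           # log of the mixture density
        kls[i] = np.mean(log_qzx - log_qz)
    return kls
```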