1 code implementation • 29 Jul 2022 • Itsuki Ueda, Yoshihiro Fukuhara, Hirokatsu Kataoka, Hiroaki Aizawa, Hidehiko Shishido, Itaru Kitahara
However, it is difficult to achieve high localization performance with methods based only on density fields, such as Neural Radiance Fields (NeRF), since they do not provide a density gradient in most empty regions.
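The gradient problem the abstract refers to can be illustrated with a toy example (my own sketch, not the paper's method): a density field is near zero everywhere in empty space, so its finite-difference gradient vanishes there and gives gradient-based localization no signal to follow.

```python
import numpy as np

# Toy 1-D "density field": zero in empty space, a sharp bump at the surface x = 5.
def density(x):
    return np.exp(-((x - 5.0) ** 2) / 0.1)

def grad_density(x, eps=1e-3):
    # Central finite-difference gradient of the density field.
    return (density(x + eps) - density(x - eps)) / (2 * eps)

# Near the surface the gradient is informative...
print(abs(grad_density(4.8)))  # large: useful signal for optimization
# ...but deep in empty space the density is flat at zero, so the gradient
# vanishes and a pose optimizer receives no guidance.
print(abs(grad_density(1.0)))  # effectively zero
```

Distance-field-style representations avoid this by remaining informative (non-constant) even far from surfaces.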
1 code implementation • 31 Jul 2021 • Yoshiki Kubotani, Yoshihiro Fukuhara, Shigeo Morishima
However, optimization using reinforcement learning requires a large number of interactions, and thus it cannot be applied directly to actual students.
1 code implementation • 26 Apr 2023 • Akihiro Fujii, Hideki Tsunashima, Yoshihiro Fukuhara, Koji Shimizu, Satoshi Watanabe
In this study, we investigated how the accuracy of the surrogate simulator affects the resulting solutions and found that more accurate surrogate simulators yield better solutions.
1 code implementation • 19 May 2019 • Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
In this paper, we address the open question: "What do adversarially robust models look at?"
no code implementations • 22 Sep 2018 • Ryota Natsume, Kazuki Inoue, Yoshihiro Fukuhara, Shintaro Yamamoto, Shigeo Morishima, Hirokatsu Kataoka
Face recognition is one of the most active research topics in computer vision (CV), and deep neural networks (DNNs) are now closing the gap between human-level and machine performance in face verification.
no code implementations • 16 Nov 2018 • Shintaro Yamamoto, Yoshihiro Fukuhara, Ryota Suzuki, Shigeo Morishima, Hirokatsu Kataoka
Due to the recent boom in artificial intelligence (AI) research, including computer vision (CV), it has become impossible for researchers in these fields to keep up with the exponentially increasing number of manuscripts.
no code implementations • 24 Oct 2020 • Masahiro Kato, Zhenghang Cui, Yoshihiro Fukuhara
In this paper, to obtain a classifier that is more reliable against adversarial attacks, we propose Adversarial Training with a Rejection Option (ATRO).
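A classifier with a rejection option can be sketched minimally as follows (my own illustration of the generic rejection mechanism via confidence thresholding, not the ATRO training objective itself): predictions whose confidence falls below a threshold are rejected rather than classified.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.7):
    """Return predicted class per row, or -1 ("reject") when the
    maximum class probability is below the confidence threshold."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return np.where(conf >= threshold, labels, -1)

probs = np.array([[0.95, 0.05],   # confident  -> classified as 0
                  [0.55, 0.45]])  # ambiguous  -> rejected (-1)
print(predict_with_rejection(probs))
```

Adversarial examples tend to land in such low-confidence regions, which is why combining adversarial training with a rejection option can improve reliability.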
no code implementations • 25 Sep 2019 • Masahiro Kato, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
Our main idea is to apply the framework of learning with rejection, together with adversarial examples, to assist decision making for such suspicious samples.
no code implementations • 6 Mar 2023 • Gido Kato, Yoshihiro Fukuhara, Mariko Isogawa, Hideki Tsunashima, Hirokatsu Kataoka, Shigeo Morishima
To protect privacy and prevent the malicious use of deepfakes, current studies propose methods that interfere with the generation process, such as detection and destruction approaches.