no code implementations • 24 Jul 2015 • Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki, Jun Sakuma
One is an outlier-count query, which reports the number of outliers that appear in a given subspace.
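As a hedged sketch (not the authors' mechanism): a distance-based outlier count over a subspace, released through the Laplace mechanism. The `sensitivity` parameter is an assumption the caller must supply, since adding or removing one record can flip the outlier status of several other points, so the count's sensitivity is generally larger than 1 — a key difficulty any private outlier analysis has to address.

```python
import math
import random

def count_outliers(points, subspace, radius, min_neighbors):
    """A point is a distance-based outlier in the given subspace (a list of
    dimension indices) if fewer than `min_neighbors` other points lie within
    `radius` of it under the subspace's Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((a[d] - b[d]) ** 2 for d in subspace))

    outliers = 0
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points)
                        if j != i and dist(p, q) <= radius)
        if neighbors < min_neighbors:
            outliers += 1
    return outliers

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_outlier_count(points, subspace, radius, min_neighbors,
                          epsilon, sensitivity):
    """Release the outlier count via the Laplace mechanism; `sensitivity`
    must upper-bound how much one record can change the count."""
    true_count = count_outliers(points, subspace, radius, min_neighbors)
    return true_count + laplace_noise(sensitivity / epsilon)
```

For example, with `points = [(0, 0), (0.1, 0), (5, 5)]`, subspace `[0, 1]`, radius 1, and `min_neighbors = 1`, only `(5, 5)` is an outlier.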
no code implementations • 20 Nov 2018 • Hajime Ono, Tsubasa Takahashi, Kazuya Kakizaki
Lipschitz margin training (LMT) is a scalable certified defense, but it achieves only limited robustness due to over-regularization.
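The certificate behind Lipschitz-margin approaches can be sketched as follows (an illustrative reading, not this paper's method): if the logit map has Lipschitz constant L, the top-1 prediction cannot change under any L2 perturbation smaller than the logit margin divided by √2·L. Over-regularizing the network to shrink L also tends to shrink the margin, which is why the certified radius can stay small.

```python
import math

def certified_radius(logits, lipschitz_const):
    """Lower bound on the L2 perturbation radius within which the top-1
    prediction provably cannot change, for a network whose logit map has
    the given (global) Lipschitz constant."""
    top, runner_up = sorted(logits, reverse=True)[:2]
    margin = top - runner_up
    return margin / (math.sqrt(2) * lipschitz_const)
```

Note the trade-off this formula makes explicit: halving the Lipschitz constant doubles the certified radius only if the margin survives the extra regularization.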
no code implementations • ICML 2017 • Kazuya Kakizaki, Kazuto Fukuchi, Jun Sakuma
This paper develops differentially private mechanisms for $\chi^2$ test of independence.
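One simple baseline — a hedged sketch, not necessarily the mechanism this paper develops — perturbs each cell of the contingency table with Laplace noise before computing the χ² statistic. The noise scale below assumes L1 sensitivity 2 (one individual moving between cells changes two cells by 1 each), and the noisy statistic no longer follows the usual χ² null distribution, which is exactly the kind of issue a principled private test must correct for.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def chi2_statistic(table):
    """Pearson's chi-squared statistic for an r x c contingency table
    (a list of rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

def private_chi2(table, epsilon):
    """Epsilon-DP release via cell-wise Laplace noise (scale 2/epsilon under
    the sensitivity assumption above). Noisy cells are clamped to stay
    positive so the expected counts remain well-defined."""
    noisy = [[max(cell + laplace_noise(2.0 / epsilon), 1e-6) for cell in row]
             for row in table]
    return chi2_statistic(noisy)
```

On a perfectly independent table such as `[[10, 10], [10, 10]]` the non-private statistic is 0; the private version fluctuates around it with spread controlled by epsilon.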
no code implementations • 9 May 2019 • Kazuya Kakizaki, Kosuke Yoshida
Thanks to recent advances in deep neural networks (DNNs), face recognition systems have become highly accurate in classifying a large number of face images.
no code implementations • 29 Sep 2021 • Inderjeet Singh, Satoru Momiyama, Kazuya Kakizaki, Toshinori Araki
This paper introduces a novel adversarial example generation method against face recognition systems (FRSs).
no code implementations • 2 Oct 2021 • Takuma Amada, Seng Pei Liew, Kazuya Kakizaki, Toshinori Araki
We assess the vulnerabilities of deep face recognition systems for images that falsify/spoof multiple identities simultaneously.
no code implementations • 23 Mar 2022 • Inderjeet Singh, Toshinori Araki, Kazuya Kakizaki
Notably, our smoothness loss results in a 1.17 and 1.97 times better mean attack success rate (ASR) in physical white-box and black-box attacks, respectively.
no code implementations • 29 Nov 2022 • Inderjeet Singh, Kazuya Kakizaki, Toshinori Araki
Deep Metric Learning (DML), which concentrates on learning visual similarities, is a prominent field in machine learning with extensive practical applications.
no code implementations • 11 Apr 2023 • Inderjeet Singh, Kazuya Kakizaki, Toshinori Araki
In this work, we investigate the potential threat of adversarial examples to the security of face recognition systems.