no code implementations • 6 Apr 2022 • Atsuhiro Miyagi, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
(I) As the influence of the interaction term between $x$ and $y$ (e.g., $x^\mathrm{T} B y$) on a Lipschitz-smooth and strongly convex-concave function $f$ increases, the approaches converge to an optimal solution at a slower rate.
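This slowdown can be seen in a minimal sketch (not the paper's algorithm): simultaneous gradient descent-ascent on a toy saddle function $f(x, y) = \tfrac{a}{2}x^2 - \tfrac{b}{2}y^2 + cxy$, where the coefficient `c` plays the role of the interaction term $B$. The function, step size, and iteration count below are illustrative assumptions.

```python
def gda(a, b, c, eta=0.05, steps=500):
    # Simultaneous gradient descent-ascent on the toy saddle function
    # f(x, y) = a/2 * x^2 - b/2 * y^2 + c * x * y,
    # strongly convex in x, strongly concave in y; saddle point at (0, 0).
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = a * x + c * y        # df/dx
        gy = -b * y + c * x       # df/dy
        x, y = x - eta * gx, y + eta * gy
    return (x * x + y * y) ** 0.5  # distance to the saddle point

# A larger interaction coefficient c slows convergence to the saddle.
weak = gda(a=1.0, b=1.0, c=0.5)    # weak interaction
strong = gda(a=1.0, b=1.0, c=5.0)  # strong interaction
```

With `a = b`, the update is a scaled rotation whose per-step contraction factor is $\sqrt{(1-\eta)^2 + \eta^2 c^2}$, so the distance to the saddle shrinks more slowly as $|c|$ grows.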
1 code implementation • 22 Mar 2022 • Yuwei Sun, Hideya Ochiai, Jun Sakuma
To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.
Ranked #1 on Model Poisoning on Fashion-MNIST
no code implementations • 9 Sep 2021 • Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
The challenge is that we have to discover, in an unsupervised manner, a set of concepts, i.e., A, B, and C, that is useful for explaining the classifier.
no code implementations • 20 Aug 2021 • Taiga Ono, Takeshi Sugawara, Jun Sakuma, Tatsuya Mori
To the best of our knowledge, our work is the first to evaluate the effectiveness of adversarial examples for ECGs in a physical setup.
1 code implementation • 13 Apr 2021 • Takumi Tanabe, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
When ML techniques are applied to game domains with non-tile-based level representation, such as Angry Birds, where objects in a level are specified by real-valued parameters, ML often fails to generate playable levels.
no code implementations • 2 Mar 2021 • Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto
The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$.
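A hedged sketch of the setting: a textbook (1+1)-ES with 1/5-success-rule step-size adaptation on a convex quadratic with diagonal Hessian (not the exact algorithm analyzed in the paper; `h_diag`, the step-size factors, and the iteration budget are illustrative choices).

```python
import math
import random

def one_plus_one_es(h_diag, iters=1000, seed=0):
    # Textbook (1+1)-ES with 1/5-success-rule step-size adaptation,
    # minimizing the convex quadratic f(x) = 1/2 * sum_i h_i * x_i^2
    # (diagonal Hessian H; the optimum is x* = 0).
    rng = random.Random(seed)
    f = lambda x: 0.5 * sum(h * v * v for h, v in zip(h_diag, x))
    m = [1.0] * len(h_diag)   # search point m_t
    sigma = 0.3               # mutation step size
    fm = f(m)
    for _ in range(iters):
        cand = [v + sigma * rng.gauss(0.0, 1.0) for v in m]
        fc = f(cand)
        if fc <= fm:                     # success: accept, enlarge step
            m, fm = cand, fc
            sigma *= math.exp(0.3)
        else:                            # failure: shrink step
            sigma *= math.exp(-0.075)
    # distance from the search point to the optimum x* = 0
    return math.sqrt(sum(v * v for v in m))

dist = one_plus_one_es([1.0, 4.0])
```

The step-size factors are balanced so that the step size is stationary at a 20% success rate; the distance to the optimum then decreases geometrically, consistent with the linear convergence the paper quantifies.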
1 code implementation • 11 Dec 2020 • Rei Sato, Jun Sakuma, Youhei Akimoto
In this paper, we propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS, which further reduces the time complexity of NAS by reducing the number of search iterations.
no code implementations • 22 Aug 2020 • Thien Q. Tran, Jun Sakuma
We also carefully design a feature selection method to select proper search terms to predict each component.
1 code implementation • 20 Nov 2019 • Hiromu Yakura, Youhei Akimoto, Jun Sakuma
We first show the feasibility of this approach in an attack against an image classifier by employing generative adversarial networks that produce image patches that have the appearance of a natural object to fool the target model.
no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma
Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
1 code implementation • 28 Nov 2018 • Tatsuki Koga, Naoki Nonaka, Jun Sakuma, Jun Seita
Deep learning has significant potential for medical imaging.
no code implementations • 1 Nov 2018 • Jiayang Liu, Weiming Zhang, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma
In this study, we propose a new methodology to control how a user's data is recognized and used by AI by exploiting the properties of adversarial examples.
1 code implementation • 28 Oct 2018 • Hiromu Yakura, Jun Sakuma
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world.
no code implementations • 1 Mar 2018 • Hiroyuki Hanada, Toshiyuki Takada, Jun Sakuma, Ichiro Takeuchi
A drawback of this naive approach is that the uncertainty in the missing entries is not properly incorporated in the prediction.
no code implementations • ICLR 2018 • Kosuke Kusano, Jun Sakuma
In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the faces of those individuals.
no code implementations • 20 Oct 2017 • Kazuto Fukuchi, Quang Khai Tran, Jun Sakuma
Existing differentially private ERM methods implicitly assume that data contributors submit their private data to a database, expecting the database to invoke a differentially private mechanism when publishing the learned model.
no code implementations • ICML 2017 • Kazuya Kakizaki, Kazuto Fukuchi, Jun Sakuma
This paper develops differentially private mechanisms for $\chi^2$ test of independence.
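As an illustrative baseline only (not necessarily the mechanism developed in the paper): perturb each cell of the contingency table with Laplace noise and compute the $\chi^2$ statistic on the noisy table. The sensitivity value, `epsilon`, and helper names here are assumptions.

```python
import math
import random

def laplace(scale, rng):
    # Draw one Laplace(0, scale) sample via the inverse CDF.
    u = rng.random() - 0.5
    return scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_chi2(table, epsilon, seed=0):
    # Output-perturbation baseline: add Laplace noise to every cell of a
    # contingency table (L1 sensitivity 2 under record replacement), then
    # compute the chi^2 statistic of independence on the noisy table.
    rng = random.Random(seed)
    noisy = [[max(c + laplace(2.0 / epsilon, rng), 1e-9) for c in row]
             for row in table]
    total = sum(sum(row) for row in noisy)
    row_sums = [sum(row) for row in noisy]
    col_sums = [sum(row[j] for row in noisy) for j in range(len(noisy[0]))]
    chi2 = 0.0
    for i, row in enumerate(noisy):
        for j, obs in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            chi2 += (obs - expected) ** 2 / expected
    return chi2

stat = dp_chi2([[30, 10], [10, 30]], epsilon=1.0)
```

Note that naively thresholding this noisy statistic against the usual $\chi^2$ critical values distorts the test's significance level, which is precisely the kind of issue a dedicated private test must address.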
no code implementations • 6 Jun 2017 • Jun Sakuma, Tatsuya Osame
In this way, the predictive performance of recommendations based on anonymized ratings can be improved in some settings.
no code implementations • 1 Jun 2016 • Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi
We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier.
no code implementations • 15 Feb 2016 • Toshiyuki Takada, Hiroyuki Hanada, Yoshiji Yamada, Jun Sakuma, Ichiro Takeuchi
The key property of the SAG method is that, given an arbitrary approximate solution, it can provide a non-probabilistic, assumption-free bound on the approximation quality under a cryptographically secure computation framework.
no code implementations • 6 Nov 2015 • Kazuto Fukuchi, Jun Sakuma
Currently, machine learning plays an important role in the lives and individual activities of numerous people.
no code implementations • 24 Jul 2015 • Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki, Jun Sakuma
One is the query to count outliers, which reports the number of outliers that appear in a given subspace.
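A minimal sketch of releasing such a count under differential privacy via the generic Laplace mechanism (the paper's mechanisms and sensitivity analysis are not reproduced here; `dp_count` and its parameters are hypothetical):

```python
import math
import random

def dp_count(count, sensitivity, epsilon, seed=0):
    # Laplace mechanism for a counting query: release the true count plus
    # Laplace(sensitivity / epsilon) noise. For outlier counts the global
    # sensitivity may exceed 1, since inserting one record can also flip
    # the outlier status of nearby records.
    rng = random.Random(seed)
    u = rng.random() - 0.5
    noise = (sensitivity / epsilon) * math.copysign(
        math.log(1.0 - 2.0 * abs(u)), u)
    return count + noise

noisy_count = dp_count(count=12, sensitivity=2.0, epsilon=1.0)
```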
no code implementations • 25 Jun 2015 • Kazuto Fukuchi, Jun Sakuma
In this paper, we propose a general framework for fairness-aware learning that uses f-divergences and that covers most of the dependency measures employed in the existing methods.
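One concrete instance of such an f-divergence dependence measure is empirical mutual information: the KL divergence between the joint distribution of predictions and the sensitive attribute and the product of its marginals. This sketch is illustrative and not the paper's estimator.

```python
import math
from collections import Counter

def mutual_information(preds, sensitive):
    # Empirical mutual information I(Yhat; S): the KL f-divergence between
    # the joint distribution of (prediction, sensitive attribute) and the
    # product of its marginals; it is 0 iff predictions are independent of S.
    n = len(preds)
    joint = Counter(zip(preds, sensitive))
    p_y = Counter(preds)
    p_s = Counter(sensitive)
    mi = 0.0
    for (y, s), c in joint.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((p_y[y] / n) * (p_s[s] / n)))
    return mi

# Fully dependent prediction/attribute vs. independent ones.
dep = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])  # = log 2
ind = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])  # = 0
```

Other choices of the f-divergence generator (e.g., total variation or $\chi^2$) yield other standard dependence measures within the same framework.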