Search Results for author: Jun Sakuma

Found 23 papers, 6 papers with code

Black-Box Min-Max Continuous Optimization Using CMA-ES with Worst-case Ranking Approximation

no code implementations · 6 Apr 2022 · Atsuhiro Miyagi, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

(I) As the influence of the interaction term between $x$ and $y$ (e.g., $x^\mathrm{T} B y$) on the Lipschitz smooth and strongly convex-concave function $f$ increases, the approaches converge to an optimal solution at a slower rate.
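This slowdown from the interaction term can be seen in a toy sketch (not the paper's CMA-ES-based method): simultaneous gradient descent-ascent on the strongly convex-concave function $f(x, y) = \frac{1}{2}x^2 - \frac{1}{2}y^2 + bxy$, where the scalar coefficient `b` is a made-up stand-in for the interaction matrix $B$.

```python
import math

def gda_residual(b, lr=0.1, steps=200):
    # simultaneous gradient descent-ascent on
    # f(x, y) = 0.5*x**2 - 0.5*y**2 + b*x*y, whose saddle point is (0, 0)
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = x + b * y                     # df/dx
        gy = -y + b * x                    # df/dy
        x, y = x - lr * gx, y + lr * gy    # descend in x, ascend in y
    return math.hypot(x, y)                # remaining distance to the saddle
```

With a fixed step budget, the remaining distance grows with `b` (and the iteration eventually diverges for large `b`), matching the qualitative claim above.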

Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis

1 code implementation · 22 Mar 2022 · Yuwei Sun, Hideya Ochiai, Jun Sakuma

To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.

Backdoor Attack · Federated Learning +3
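The core selection step, finding the class that is easiest to reach in feature space, can be sketched as picking the class whose feature centroid is nearest to the source class. This is a simplified stand-in for the paper's attacking-distance computation, and the centroids below are invented for illustration.

```python
def nearest_target_class(centroids, source):
    """Pick the non-source class whose feature-space centroid is closest
    (in squared Euclidean distance) to the source class's centroid."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min((c for c in centroids if c != source),
               key=lambda c: d2(centroids[c], centroids[source]))
```

For example, with centroids {"cat": (0, 0), "dog": (1, 0), "car": (5, 5)}, the optimized target for a "cat" source is "dog".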

Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation

no code implementations · 9 Sep 2021 · Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma

The challenge is that we have to discover, in an unsupervised manner, a set of concepts, i.e., A, B, and C, that are useful for explaining the classifier.

Application of Adversarial Examples to Physical ECG Signals

no code implementations · 20 Aug 2021 · Taiga Ono, Takeshi Sugawara, Jun Sakuma, Tatsuya Mori

To the best of our knowledge, our work is the first to evaluate the effectiveness of adversarial examples for ECGs in a physical setup.

Adversarial Attack · ECG Classification

Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution

1 code implementation · 13 Apr 2021 · Takumi Tanabe, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

When ML techniques are applied to game domains with non-tile-based level representation, such as Angry Birds, where objects in a level are specified by real-valued parameters, ML often fails to generate playable levels.

Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions

no code implementations · 2 Mar 2021 · Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$.
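A minimal (1+1)-ES with a success-based (1/5-rule) step-size update, run on a convex quadratic $f(x) = \sum_i h_i x_i^2$, illustrates the analyzed setting; the adaptation constants and problem instances below are illustrative, not the paper's.

```python
import math, random

def one_plus_one_es(h, x0, iters=3000, seed=1):
    # (1+1)-ES on f(x) = sum_i h[i] * x[i]**2 with success-based step-size
    # adaptation: expand sigma on success, shrink on failure, with factors
    # tuned so the stationary success rate is 1/5.
    rng = random.Random(seed)
    x, sigma = list(x0), 1.0
    f = lambda v: sum(hi * vi * vi for hi, vi in zip(h, v))
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                        # success: accept and expand
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)
        else:                               # failure: shrink
            sigma *= math.exp(-1.0 / 12.0)
    return math.sqrt(sum(xi * xi for xi in x))  # distance to x* = 0
```

Consistent with the stated rate, the distance to the optimum shrinks geometrically, and worsening the conditioning (shrinking $L / \mathrm{Tr}(H)$) slows the decrease.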

AdvantageNAS: Efficient Neural Architecture Search with Credit Assignment

1 code implementation · 11 Dec 2020 · Rei Sato, Jun Sakuma, Youhei Akimoto

In this paper, we propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS, which further reduces the time complexity of NAS by reducing the number of search iterations.

Neural Architecture Search

Seasonal-adjustment Based Feature Selection Method for Large-scale Search Engine Logs

no code implementations · 22 Aug 2020 · Thien Q. Tran, Jun Sakuma

We also carefully design a feature selection method to select proper search terms to predict each component.

Feature Selection · Time Series
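A minimal version of the seasonal-adjustment step is per-phase mean removal: estimate the seasonal component as the mean of each phase (position modulo the period) and subtract it. The paper's actual decomposition and selection criteria are more involved; this is only a sketch of the idea.

```python
def seasonal_adjust(series, period):
    # estimate the seasonal component as the mean at each phase
    # (index modulo the period), subtract it, and re-center on the
    # overall mean of the series
    sums = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        sums[i % period] += v
        counts[i % period] += 1
    phase_mean = [s / c for s, c in zip(sums, counts)]
    overall = sum(series) / len(series)
    return [v - phase_mean[i % period] + overall
            for i, v in enumerate(series)]
```

A purely seasonal signal (e.g., a repeating 0, 1, 2, 3 pattern) flattens to its overall mean after adjustment, leaving only trend and noise to feed into feature selection.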

Generate (non-software) Bugs to Fool Classifiers

1 code implementation · 20 Nov 2019 · Hiromu Yakura, Youhei Akimoto, Jun Sakuma

We first show the feasibility of this approach in an attack against an image classifier by employing generative adversarial networks that produce image patches that have the appearance of a natural object to fool the target model.

Locally Differentially Private Minimum Finding

no code implementations · 27 May 2019 · Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma

Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
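One primitive such a mechanism can build on is a locally private threshold query: each user reports, via randomized response, whether their value lies below a threshold, and the server debiases the aggregate; sweeping the threshold then localizes the minimum. A sketch of that primitive (illustrative, not the paper's adaptive mechanism):

```python
import math, random

def private_frac_below(values, threshold, eps, seed=0):
    # each user reports the bit [value <= threshold] truthfully with
    # probability e^eps / (e^eps + 1) and flipped otherwise (randomized
    # response), which is eps-locally differentially private; the server
    # averages the noisy bits and debiases the estimate
    rng = random.Random(seed)
    p = math.exp(eps) / (math.exp(eps) + 1.0)  # prob of truthful report
    noisy = [(1 if v <= threshold else 0) if rng.random() < p
             else (0 if v <= threshold else 1)
             for v in values]
    raw = sum(noisy) / len(noisy)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)  # unbiased estimate
```

On easy instances (many users near the minimum), a coarse threshold sweep already pins it down, which is the intuition behind adapting the error to the easiness of estimation.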

Unauthorized AI cannot Recognize Me: Reversible Adversarial Example

no code implementations · 1 Nov 2018 · Jiayang Liu, Weiming Zhang, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma

In this study, we propose a new methodology to control how a user's data is recognized and used by AI by exploiting the properties of adversarial examples.

Adversarial Attack · General Classification +2

Robust Audio Adversarial Example for a Physical Attack

1 code implementation · 28 Oct 2018 · Hiromu Yakura, Jun Sakuma

We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world.

Speech Recognition

Interval-based Prediction Uncertainty Bound Computation in Learning with Missing Values

no code implementations · 1 Mar 2018 · Hiroyuki Hanada, Toshiyuki Takada, Jun Sakuma, Ichiro Takeuchi

A drawback of this naive approach is that the uncertainty in the missing entries is not properly incorporated in the prediction.

Imputation
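For a linear model the idea reduces to interval arithmetic: if a missing feature is only known to lie in an interval, the prediction over that box of inputs is itself an interval. A toy sketch of that bound (the paper handles more general learning settings):

```python
def linear_pred_interval(w, lo, hi, b=0.0):
    """Tight lower/upper bounds of w . x + b over the box lo <= x <= hi:
    each weight picks the interval endpoint that minimizes (resp.
    maximizes) its term."""
    lower = b + sum(wi * (l if wi >= 0 else h)
                    for wi, l, h in zip(w, lo, hi))
    upper = b + sum(wi * (h if wi >= 0 else l)
                    for wi, l, h in zip(w, lo, hi))
    return lower, upper
```

Fully observed features contribute a degenerate interval (lo equals hi), so the prediction interval collapses to a point exactly when nothing is missing; single-value imputation would instead silently discard this uncertainty.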

Classifier-to-Generator Attack: Estimation of Training Data Distribution from Classifier

no code implementations · ICLR 2018 · Kosuke Kusano, Jun Sakuma

In face recognition, we show that when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of those individuals' faces.

Classification · Face Recognition +2

Differentially Private Empirical Risk Minimization with Input Perturbation

no code implementations · 20 Oct 2017 · Kazuto Fukuchi, Quang Khai Tran, Jun Sakuma

Existing differentially private ERM implicitly assumes that data contributors submit their private data to a database expecting that the database will invoke a differentially private mechanism to publish the learned model.
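The input-perturbation alternative, where each contributor noises their own data before it ever reaches the database, can be sketched with a trivial ERM task: estimating a mean under squared loss. The Laplace noise scale below is illustrative and not the paper's calibration.

```python
import math, random

def laplace_noise(rng, scale):
    # sample Laplace(0, scale) by inverse-CDF
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(data, scale=1.0, seed=0):
    # each contributor perturbs their own value before submission;
    # the ERM step (here: the sample mean, the squared-loss minimizer)
    # only ever sees the noisy inputs
    rng = random.Random(seed)
    noisy = [x + laplace_noise(rng, scale) for x in data]
    return sum(noisy) / len(noisy)
```

Because the noise is zero-mean and independent, its effect averages out as the number of contributors grows, which is why a model learned from perturbed inputs can still be accurate.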

Differentially Private Chi-squared Test by Unit Circle Mechanism

no code implementations · ICML 2017 · Kazuya Kakizaki, Kazuto Fukuchi, Jun Sakuma

This paper develops differentially private mechanisms for $\chi^2$ test of independence.
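The statistic being protected is the standard Pearson $\chi^2$ test of independence on a contingency table; the paper's contribution is releasing it (or the test decision) privately via the unit circle mechanism, which is not reproduced here. The non-private statistic:

```python
def chi2_independence(table):
    # Pearson chi-squared statistic for independence on a contingency
    # table: sum over cells of (observed - expected)^2 / expected,
    # where expected counts come from the product of the marginals
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat
```

A table whose rows are proportional (independent attributes) yields a statistic of 0, while strong dependence drives it up; a private mechanism must report this value, or the accept/reject decision, without leaking individual records.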

Recommendation with k-anonymized Ratings

no code implementations · 6 Jun 2017 · Jun Sakuma, Tatsuya Osame

In this way, the predictive performance of recommendations based on anonymized ratings can be improved in some settings.

Collaborative Filtering · Recommendation Systems

Efficiently Bounding Optimal Solutions after Small Data Modification in Large-Scale Empirical Risk Minimization

no code implementations · 1 Jun 2016 · Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi

We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier.

General Classification · Small Data Image Classification

Secure Approximation Guarantee for Cryptographically Private Empirical Risk Minimization

no code implementations · 15 Feb 2016 · Toshiyuki Takada, Hiroyuki Hanada, Yoshiji Yamada, Jun Sakuma, Ichiro Takeuchi

The key property of the SAG method is that, given an arbitrary approximate solution, it can provide a non-probabilistic, assumption-free bound on the approximation quality under a cryptographically secure computation framework.

Neutralized Empirical Risk Minimization with Generalization Neutrality Bound

no code implementations · 6 Nov 2015 · Kazuto Fukuchi, Jun Sakuma

Currently, machine learning plays an important role in the lives and individual activities of numerous people.

Decision Making · General Classification

Differentially Private Analysis of Outliers

no code implementations · 24 Jul 2015 · Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki, Jun Sakuma

One is the query to count outliers, which reports the number of outliers that appear in a given subspace.

Outlier Detection

Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences

no code implementations · 25 Jun 2015 · Kazuto Fukuchi, Jun Sakuma

In this paper, we propose a general framework for fairness-aware learning that uses f-divergences and that covers most of the dependency measures employed in the existing methods.

Fairness
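The unifying object is the f-divergence $D_f(P\|Q) = \sum_x q(x)\, f\!\left(p(x)/q(x)\right)$; different convex generators $f$ (with $f(1)=0$) recover different dependency measures. A minimal sketch over discrete distributions:

```python
import math

def f_divergence(p, q, f):
    # D_f(P || Q) = sum_x q(x) * f(p(x) / q(x))
    # for a convex generator f with f(1) = 0
    return sum(qx * f(px / qx) for px, qx in zip(p, q) if qx > 0)

kl_gen = lambda t: t * math.log(t) if t > 0 else 0.0  # KL divergence
tv_gen = lambda t: 0.5 * abs(t - 1.0)                 # total variation
```

Identical distributions give divergence 0 under any generator; swapping the generator switches the dependency measure without changing the surrounding framework, which is what lets one analysis cover many fairness criteria at once.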
