Search Results for author: Kazuto Fukuchi

Found 12 papers, 2 papers with code

Black-Box Min-Max Continuous Optimization Using CMA-ES with Worst-case Ranking Approximation

no code implementations • 6 Apr 2022 • Atsuhiro Miyagi, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

(I) As the influence of the interaction term between $x$ and $y$ (e.g., $x^\mathrm{T} B y$) on the Lipschitz smooth and strongly convex-concave function $f$ increases, the approaches converge to an optimal solution at a slower rate.
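
The slowdown caused by the interaction term can be illustrated without CMA-ES at all. The sketch below is not the paper's worst-case ranking approximation method; it is plain simultaneous gradient descent-ascent on a toy strongly convex-concave $f(x, y) = \frac{a}{2}x^2 - \frac{b}{2}y^2 + cxy$, with a hypothetical step size `eta`, showing that a larger interaction coefficient `c` slows convergence to the saddle point:

```python
def gda(a=1.0, b=1.0, c=0.0, eta=0.05, steps=2000):
    """Simultaneous gradient descent-ascent on
    f(x, y) = a/2 * x^2 - b/2 * y^2 + c * x * y,
    a strongly convex-concave toy function whose saddle point is (0, 0)."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = a * x + c * y       # df/dx
        gy = -b * y + c * x      # df/dy
        x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
    return abs(x) + abs(y)       # residual distance to the saddle point

# A larger interaction coefficient c leaves a larger residual,
# i.e., slower convergence to (0, 0).
print(gda(c=0.0), gda(c=0.9))
```

With `a = b = 1`, each iteration scales the distance to the saddle point by exactly $\sqrt{(1-\eta)^2 + (\eta c)^2}$, which grows with `c`, matching the qualitative claim in the abstract.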

Unsupervised Causal Binary Concepts Discovery with VAE for Black-box Model Explanation

no code implementations • 9 Sep 2021 • Thien Q. Tran, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma

The challenge is that we have to discover, in an unsupervised manner, a set of concepts, i.e., A, B, and C, that is useful for explaining the classifier.

Level Generation for Angry Birds with Sequential VAE and Latent Variable Evolution

1 code implementation • 13 Apr 2021 • Takumi Tanabe, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

When ML techniques are applied to game domains with non-tile-based level representation, such as Angry Birds, where objects in a level are specified by real-valued parameters, ML often fails to generate playable levels.

Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions

no code implementations • 2 Mar 2021 • Daiki Morinaga, Kazuto Fukuchi, Jun Sakuma, Youhei Akimoto

The convergence rate, that is, the decrease rate of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp( - L / \mathrm{Tr}(H) ))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$.
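
A minimal sketch of a (1+1)-Evolution Strategy with the classic 1/5-success rule on a convex quadratic $f(x) = \frac{1}{2}x^\mathrm{T} H x$ (optimum $x^* = 0$). This is a generic illustration under assumed update constants, not the exact algorithm variant analyzed in the paper:

```python
import numpy as np

def one_plus_one_es(H, m, sigma=1.0, steps=3000, seed=0):
    """(1+1)-ES with success-based (1/5-rule) step-size adaptation,
    minimizing f(x) = x^T H x / 2. Sketch with assumed constants."""
    rng = np.random.default_rng(seed)
    f = lambda x: 0.5 * x @ H @ x
    for _ in range(steps):
        cand = m + sigma * rng.standard_normal(m.shape)
        if f(cand) <= f(m):
            m, sigma = cand, sigma * np.exp(0.25)  # success: enlarge step
        else:
            sigma *= np.exp(-0.25 / 4)  # failure: shrink step; at a 1/5
                                        # success rate sigma stays constant
    return m

H = np.diag([1.0, 10.0])  # smallest eigenvalue L = 1, Tr(H) = 11
m = one_plus_one_es(H, np.array([3.0, 3.0]))
print(np.linalg.norm(m))  # distance to x* shrinks geometrically
```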

Locally Differentially Private Minimum Finding

no code implementations • 27 May 2019 • Kazuto Fukuchi, Chia-Mu Yu, Arashi Haishima, Jun Sakuma

Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum.
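
For contrast, a naive worst-case baseline for locally private minimum finding has each user add Laplace noise to their own value before sending it, with the server reporting the smallest noisy value. The sketch below is that baseline, not the paper's adaptive mechanism; its downward bias grows roughly like $\ln(n)/\varepsilon$ with the number of users $n$, which is the kind of worst-case error an adaptive mechanism tries to avoid when estimation is easy:

```python
import numpy as np

def ldp_min(values, eps, seed=0):
    """Naive locally differentially private minimum finding:
    every user perturbs their value with Laplace(1/eps) noise locally,
    and the server takes the minimum of the noisy reports.
    Worst-case baseline only, not the paper's adaptive mechanism."""
    rng = np.random.default_rng(seed)
    noisy = values + rng.laplace(scale=1.0 / eps, size=len(values))
    return noisy.min()

vals = np.random.default_rng(1).uniform(0.0, 1.0, size=1000)
# The noisy minimum badly underestimates the true minimum.
print(vals.min(), ldp_min(vals, eps=2.0))
```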

Faking Fairness via Stealthily Biased Sampling

1 code implementation • 24 Jan 2019 • Kazuto Fukuchi, Satoshi Hara, Takanori Maehara

The focus of this study is to raise awareness of the risk of malicious decision-makers who fake fairness by abusing auditing tools, thereby deceiving social communities.

Unauthorized AI cannot Recognize Me: Reversible Adversarial Example

no code implementations • 1 Nov 2018 • Jiayang Liu, Weiming Zhang, Kazuto Fukuchi, Youhei Akimoto, Jun Sakuma

In this study, we propose a new methodology for controlling how users' data is recognized and used by AI, exploiting the properties of adversarial examples.

Adversarial Attack · General Classification · +2

Differentially Private Empirical Risk Minimization with Input Perturbation

no code implementations • 20 Oct 2017 • Kazuto Fukuchi, Quang Khai Tran, Jun Sakuma

Existing differentially private ERM methods implicitly assume that data contributors submit their private data to a database, expecting the database to invoke a differentially private mechanism when publishing the learned model.

Differentially Private Chi-squared Test by Unit Circle Mechanism

no code implementations • ICML 2017 • Kazuya Kakizaki, Kazuto Fukuchi, Jun Sakuma

This paper develops differentially private mechanisms for $\chi^2$ test of independence.
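
A standard baseline for comparison (not the paper's unit circle mechanism): release the contingency table via the generic Laplace mechanism and compute the usual $\chi^2$ independence statistic from the noisy counts. Adding or removing one record changes one cell by 1, so Laplace noise with scale $1/\varepsilon$ per cell suffices:

```python
import numpy as np

def dp_chi2_independence(table, eps, seed=0):
    """Chi-squared test statistic for independence computed from a
    contingency table released with the Laplace mechanism.
    Baseline sketch only; the paper's unit circle mechanism differs."""
    rng = np.random.default_rng(seed)
    noisy = table + rng.laplace(scale=1.0 / eps, size=table.shape)
    noisy = np.clip(noisy, 1e-9, None)      # keep counts positive
    row, col, n = noisy.sum(1), noisy.sum(0), noisy.sum()
    expected = np.outer(row, col) / n       # expected counts under independence
    return ((noisy - expected) ** 2 / expected).sum()

table = np.array([[20.0, 30.0], [25.0, 25.0]])
print(dp_chi2_independence(table, eps=1.0))
```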

Neutralized Empirical Risk Minimization with Generalization Neutrality Bound

no code implementations • 6 Nov 2015 • Kazuto Fukuchi, Jun Sakuma

Machine learning now plays an important role in the lives and everyday activities of many people.

Decision Making · General Classification

Differentially Private Analysis of Outliers

no code implementations • 24 Jul 2015 • Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki, Jun Sakuma

One is the query to count outliers, which reports the number of outliers that appear in a given subspace.

Outlier Detection
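
A counting query like this can be privatized with the generic Laplace mechanism, since adding or removing one record changes the count by at most 1 (sensitivity 1). The sketch below uses a hypothetical distance-from-origin outlier definition for illustration; the paper's mechanism and outlier definition may differ:

```python
import numpy as np

def dp_count_outliers(data, radius, eps, seed=0):
    """Noisy count of points farther than `radius` from the origin,
    released via the Laplace mechanism. Generic counting-query sketch,
    not the paper's mechanism."""
    rng = np.random.default_rng(seed)
    true_count = int((np.linalg.norm(data, axis=1) > radius).sum())
    # Counting query has sensitivity 1, so Laplace(1/eps) noise suffices.
    return true_count + rng.laplace(scale=1.0 / eps)

data = np.vstack([np.random.default_rng(1).normal(size=(100, 2)),
                  [[8.0, 8.0]]])            # one planted outlier
print(dp_count_outliers(data, radius=4.0, eps=1.0))
```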

Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences

no code implementations • 25 Jun 2015 • Kazuto Fukuchi, Jun Sakuma

In this paper, we propose a general framework for fairness-aware learning that uses f-divergences and that covers most of the dependency measures employed in the existing methods.
