no code implementations • 10 Oct 2023 • Hirofumi Suzuki, Hiroaki Iwashita, Takuya Takagi, Yuta Fujishige, Satoshi Hara
In this study, we consider scenarios in which developers must be careful about how a model correction changes the prediction results, such as when the model is part of a complex system or software.
1 code implementation • 5 Oct 2022 • Naoyuki Terashita, Satoshi Hara
As a result, the hyper-gradient estimator derived from our optimality condition enjoys two desirable properties: (i) it requires only Push-Sum communication of vectors, and (ii) it can operate over time-varying directed networks.
1 code implementation • 30 May 2022 • Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh
SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution.
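The additivity property behind SHAP can be illustrated with a minimal NumPy sketch: for a linear model with independent features, the exact Shapley value of feature i is w_i * (x_i - E[x_i]), and the attributions sum to the difference between the prediction at the input and the mean prediction over the background. This is a standard textbook fact, not the method of the paper above; all values and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X_bg = rng.normal(size=(1000, 3))       # background distribution
w = np.array([2.0, -1.0, 0.5])
f = lambda X: X @ w                     # linear model to explain
x = np.array([1.0, 1.0, 1.0])           # specific input to explain

# Exact SHAP values for a linear model with independent features
phi = w * (x - X_bg.mean(axis=0))

# Additivity: attributions sum to f(x) minus the background mean prediction
assert np.isclose(phi.sum(), f(x) - f(X_bg).mean())
```

For non-linear black-box models the attributions must instead be estimated, e.g. by sampling feature coalitions, but the same additivity constraint anchors the decomposition.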
1 code implementation • NeurIPS 2021 • Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., the data points being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.
2 code implementations • ICLR 2021 • Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui
In this study, we investigated relevance metrics that can provide reasonable explanations to users.
no code implementations • 10 Feb 2020 • Danqing Pan, Tong Wang, Satoshi Hara
We present an interpretable companion model for any pre-trained black-box classifiers.
1 code implementation • NeurIPS 2019 • Satoshi Hara, Atsushi Nitanda, Takanori Maehara
Data cleansing is a typical approach used to improve the accuracy of machine learning models, which, however, requires extensive domain knowledge to identify the influential instances that affect the models.
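One classical way to flag influential instances without domain knowledge, shown here as a hedged sketch rather than the paper's own estimator, is the exact leave-one-out influence for linear regression computed from the hat matrix: the LOO residual e_i / (1 - h_ii) measures how much each training point sways the fit. The data and the injected corruption below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
beta_true = np.array([1.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=50)
y[0] += 5.0                                  # inject one corrupted label

H = X @ np.linalg.solve(X.T @ X, X.T)        # hat (projection) matrix
resid = y - H @ y                            # in-sample residuals
# Exact leave-one-out residual: e_i / (1 - h_ii); squaring ranks influence
influence = (resid / (1 - np.diag(H))) ** 2

assert influence.argmax() == 0               # the corrupted point stands out
```

For large non-linear models this exact computation is unavailable, which is where scalable influence estimators become necessary.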
no code implementations • 5 Jun 2019 • Kentaro Kanamori, Satoshi Hara, Masakazu Ishihata, Hiroki Arimura
In this paper, we propose a K-best model enumeration algorithm for Support Vector Machines (SVM) that, given a dataset S and an integer K > 0, enumerates the K best models on S with distinct support vectors, in descending order of the objective function values of the dual SVM problem.
1 code implementation • 28 Jan 2019 • Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden from the auditor and generally complex -- produces its outcomes.
2 code implementations • 24 Jan 2019 • Kazuto Fukuchi, Satoshi Hara, Takanori Maehara
The focus of this study is to raise awareness of the risk of malicious decision-makers who fake fairness by abusing auditing tools and thereby deceiving social communities.
1 code implementation • 14 Oct 2018 • Satoshi Hara, Takanori Maehara
To this end, we formulate the problem as finding a small number of solutions such that the convex hull of these solutions approximates the set of nearly optimal solutions.
no code implementations • 27 Sep 2018 • Satoshi Hara, Koichi Ikeno, Tasuku Soma, Takanori Maehara
In this study, we formalize the feature attribution problem as a feature selection problem.
1 code implementation • 12 Jul 2018 • Kouichi Ikeno, Satoshi Hara
Feature attribution methods, or saliency maps, are one of the most popular approaches for explaining the decisions of complex machine learning models such as deep neural networks.
1 code implementation • 19 Jun 2018 • Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara
With adversarial examples, one seeks the smallest data perturbation that changes the model's output.
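For a linear classifier this smallest perturbation has a closed form, sketched below as a minimal illustration (not the paper's method): the minimal L2 perturbation moves the input to its orthogonal projection onto the decision boundary, delta = -f(x) * w / ||w||^2, with a tiny overshoot to flip the sign. Weights and inputs are made up for the example.

```python
import numpy as np

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([2.0, 0.5])
f = lambda x: x @ w + b                 # sign of f gives the predicted class

# Closest point on the decision boundary: minimal L2 perturbation
delta = -f(x) * w / (w @ w)
x_adv = x + delta * 1.001               # tiny overshoot to actually flip the sign

assert np.sign(f(x_adv)) != np.sign(f(x))
```

For non-linear models no closed form exists, and the perturbation is instead found by gradient-based search, which is what makes the analogy to gradient-based attribution methods natural.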
no code implementations • 23 Dec 2017 • Hirofumi Ohta, Satoshi Hara
We then estimate the conditional mode by finding the maximum of the estimated conditional density.
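The "maximize an estimated conditional density" step can be sketched with a plain kernel density estimator: weight the training targets by a Gaussian kernel in x, build a weighted KDE over y, and take the grid point of highest density. This is a generic illustration under assumed bandwidths and synthetic data, not the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = np.sin(x) + rng.normal(scale=0.1, size=500)

def cond_mode(x0, h=0.3):
    # Kernel weights in x select training points near the query x0
    wts = np.exp(-0.5 * ((x - x0) / h) ** 2)
    grid = np.linspace(y.min(), y.max(), 400)
    # Weighted KDE of p(y | x = x0), maximized over the grid
    dens = (wts[None, :]
            * np.exp(-0.5 * ((grid[:, None] - y[None, :]) / h) ** 2)).sum(axis=1)
    return grid[dens.argmax()]

mode = cond_mode(1.0)   # should sit near sin(1.0) for this synthetic data
```

Grid search is crude but makes the "find the maximum of the estimated density" step explicit; practical estimators replace it with more careful optimization.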
no code implementations • 31 Jul 2017 • Satoshi Hara, Takayuki Katsuki, Hiroki Yanagisawa, Masaaki Imaizumi, Takafumi Ono, Ryo Okamoto, Shigeki Takeuchi
We show that the proposed method is computationally efficient and does not require any extra computation for model selection.
1 code implementation • 18 Nov 2016 • Satoshi Hara, Takanori Maehara
We propose a method for finding alternate features missing in the Lasso optimal solution.
1 code implementation • 29 Jun 2016 • Satoshi Hara, Kohei Hayashi
In this study, we present a method to make a complex tree ensemble interpretable by simplifying the model.
no code implementations • 17 Jun 2016 • Satoshi Hara, Kohei Hayashi
Tree ensembles, such as random forest and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited.
no code implementations • 20 Jan 2014 • Satoshi Hara, Takafumi Ono, Ryo Okamoto, Takashi Washio, Shigeki Takeuchi
We demonstrate that the proposed method can more accurately detect small erroneous deviations in reconstructed density matrices, which contain intrinsic fluctuations due to the limited number of samples, than a naive method of checking the trace distance from the average of the given density matrices.