Search Results for author: Satoshi Hara

Found 20 papers, 12 papers with code

Rule Mining for Correcting Classification Models

no code implementations · 10 Oct 2023 · Hirofumi Suzuki, Hiroaki Iwashita, Takuya Takagi, Yuta Fujishige, Satoshi Hara

In this study, we consider scenarios in which developers must be cautious about how model correction changes the prediction results, such as when the model is part of a complex system or software.

Classification

Decentralized Hyper-Gradient Computation over Time-Varying Directed Networks

1 code implementation · 5 Oct 2022 · Naoyuki Terashita, Satoshi Hara

As a result, the hyper-gradient estimator derived from our optimality condition enjoys two desirable properties: (i) it only requires Push-Sum communication of vectors, and (ii) it can operate over time-varying directed networks.

Bilevel Optimization · Federated Learning
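Push-Sum, the communication primitive named in the abstract, can be sketched in a few lines. This is only an illustrative consensus-averaging example, not the paper's hyper-gradient estimator; the graph and initial values are assumptions for demonstration.

```python
import numpy as np

# Push-Sum consensus averaging over a directed graph: each node keeps a value
# x_i and a weight w_i, pushes equal shares of both along its out-edges (and
# to itself), and the ratio x_i / w_i converges to the global average.
def push_sum(values, out_neighbours, num_iters=200):
    n = len(values)
    x = np.asarray(values, dtype=float)
    w = np.ones(n)
    for _ in range(num_iters):
        new_x, new_w = np.zeros(n), np.zeros(n)
        for i in range(n):
            targets = list(out_neighbours[i]) + [i]   # send to out-edges and self
            share = 1.0 / len(targets)
            for j in targets:
                new_x[j] += share * x[i]
                new_w[j] += share * w[i]
        x, w = new_x, new_w
    return x / w

# Strongly connected directed ring 0 -> 1 -> 2 -> 0
print(push_sum([3.0, 6.0, 9.0], {0: [1], 1: [2], 2: [0]}))  # ~[6.0, 6.0, 6.0]
```

Note that the mixing matrix here is column-stochastic but not row-stochastic, which is exactly why plain averaging fails on directed graphs and the extra weight sequence w is needed.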

Fool SHAP with Stealthily Biased Sampling

1 code implementation · 30 May 2022 · Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh

SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution.

Fairness
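The baseline-difference framing above can be made concrete with a brute-force Shapley computation. This is a sketch of the quantity SHAP estimates, not the shap library or the paper's attack; the toy model and two-row background set are assumptions.

```python
import itertools
import math
import numpy as np

def interventional_value(f, x, background, S):
    # E[f(z)] with features in S pinned to x and the rest drawn from the background
    z = background.copy()
    z[:, list(S)] = np.asarray(x)[list(S)]
    return f(z).mean()

def shapley_values(f, x, background):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                          / math.factorial(n))
                phi[i] += weight * (interventional_value(f, x, background, S + (i,))
                                    - interventional_value(f, x, background, S))
    return phi

f = lambda Z: Z[:, 0] * Z[:, 1] + Z[:, 2]                  # toy model
background = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])  # background samples
x = [1.0, 3.0, 5.0]
phi = shapley_values(f, x, background)
# Efficiency: attributions sum to f(x) - E[f(background)]
print(phi.sum(), f(np.asarray([x]))[0] - f(background).mean())
```

The attack described in the paper exploits precisely the background term: biasing which rows enter `background` shifts the attributions while the model itself is untouched.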

Characterizing the risk of fairwashing

1 code implementation · NeurIPS 2021 · Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara

In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., the data points being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.

Fairness

Evaluation of Similarity-based Explanations

2 code implementations · ICLR 2021 · Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui

In this study, we investigated relevance metrics that can provide reasonable explanations to users.

Interpretable Companions for Black-Box Models

no code implementations · 10 Feb 2020 · Danqing Pan, Tong Wang, Satoshi Hara

We present an interpretable companion model for any pre-trained black-box classifiers.

Data Cleansing for Models Trained with SGD

1 code implementation · NeurIPS 2019 · Satoshi Hara, Atsushi Nitanda, Takanori Maehara

Data cleansing is a typical approach used to improve the accuracy of machine learning models; however, it requires extensive domain knowledge to identify the influential instances that affect the models.

BIG-bench Machine Learning
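The influential-instance notion above can be illustrated with the naive leave-one-out baseline, i.e. the expensive procedure that the paper's SGD-based estimator is designed to avoid. The ridge model, data, and the corrupted label below are assumptions for illustration.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loo_influences(X, y, X_val, y_val):
    # Retrain without each training point; record the change in validation loss.
    val_loss = lambda w: np.mean((X_val @ w - y_val) ** 2)
    base = val_loss(fit_ridge(X, y))
    keep = np.arange(len(X))
    return np.array([val_loss(fit_ridge(X[keep != i], y[keep != i])) - base
                     for i in range(len(X))])   # negative => removal helps

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)
y[0] += 10.0                                    # corrupt one label
X_val = rng.normal(size=(20, 3))
y_val = X_val @ w_true
scores = loo_influences(X, y, X_val, y_val)
print(scores.argmin())                          # most harmful training instance
```

Leave-one-out retraining costs one full fit per training point, which is what motivates estimating influence from information already available during SGD.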

Enumeration of Distinct Support Vectors for Interactive Decision Making

no code implementations · 5 Jun 2019 · Kentaro Kanamori, Satoshi Hara, Masakazu Ishihata, Hiroki Arimura

In this paper, we propose a K-best model enumeration algorithm for Support Vector Machines (SVMs) that, given a dataset S and an integer K > 0, enumerates the K best models on S with distinct support vectors, in descending order of the objective values of the dual SVM problem.

BIG-bench Machine Learning · Decision Making +1

Fairwashing: the risk of rationalization

1 code implementation · 28 Jan 2019 · Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp

Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden to the auditor and generally complex -- produces its outcomes.

BIG-bench Machine Learning · Fairness

Faking Fairness via Stealthily Biased Sampling

2 code implementations · 24 Jan 2019 · Kazuto Fukuchi, Satoshi Hara, Takanori Maehara

The focus of this study is to raise an awareness of the risk of malicious decision-makers who fake fairness by abusing the auditing tools and thereby deceiving the social communities.

Fairness
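The attack surface described above can be sketched with a toy subsampling scheme. This is illustrative only, not the paper's stealthily optimized sampler; the group sizes, acceptance rates, and benchmark size are assumptions.

```python
import numpy as np

# A decision rule that favours group 0 can be made to look fair on a
# benchmark where group 0 is subsampled to match group 1's acceptance rate.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)                   # protected attribute
accept = (rng.random(1000) < np.where(group == 0, 0.7, 0.3)).astype(int)

def acceptance_rate(g, idx):
    return accept[idx[group[idx] == g]].mean()

full = np.arange(1000)
true_gap = acceptance_rate(0, full) - acceptance_rate(1, full)

# Benchmark handed to the auditor: all of group 1, plus 100 group-0 rows
# chosen so that group 0's acceptance rate matches group 1's.
r1 = acceptance_rate(1, full)
g0_acc = np.flatnonzero((group == 0) & (accept == 1))
g0_rej = np.flatnonzero((group == 0) & (accept == 0))
k = int(round(100 * r1))                           # group-0 accepts to keep
audit = np.concatenate([g0_acc[:k], g0_rej[:100 - k], np.flatnonzero(group == 1)])
fake_gap = acceptance_rate(0, audit) - acceptance_rate(1, audit)
print(round(true_gap, 2), round(fake_gap, 2))      # large gap vs. ~0.0
```

The point of the paper is that such doctored benchmarks can be made hard to distinguish from honest samples, which is what makes the threat model realistic.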

Convex Hull Approximation of Nearly Optimal Lasso Solutions

1 code implementation · 14 Oct 2018 · Satoshi Hara, Takanori Maehara

To this end, we formulate the problem as finding a small number of solutions such that the convex hull of these solutions approximates the set of nearly optimal solutions.

feature selection

Feature Attribution As Feature Selection

no code implementations · 27 Sep 2018 · Satoshi Hara, Koichi Ikeno, Tasuku Soma, Takanori Maehara

In this study, we formalize the feature attribution problem as a feature selection problem.

feature selection

Maximizing Invariant Data Perturbation with Stochastic Optimization

1 code implementation · 12 Jul 2018 · Kouichi Ikeno, Satoshi Hara

Feature attribution methods, or saliency maps, are one of the most popular approaches for explaining the decisions of complex machine learning models such as deep neural networks.

General Classification · Image Classification +1

Maximally Invariant Data Perturbation as Explanation

1 code implementation · 19 Jun 2018 · Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara

In adversarial examples, one seeks the smallest data perturbation that changes the model's output.

Image Classification
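For a linear classifier sign(w·x + b), the smallest L2 perturbation that changes the output has a closed form: project x onto the decision boundary along w, with a tiny overshoot. A minimal sketch of the adversarial notion referenced above; the weights w, b and input x are illustrative assumptions.

```python
import numpy as np

def minimal_flip(w, b, x, overshoot=1e-6):
    # Distance to the boundary is |w.x + b| / ||w||; move along w to cross it.
    margin = w @ x + b
    return -(margin / (w @ w)) * (1 + overshoot) * w

w, b = np.array([2.0, -1.0]), 0.5
x = np.array([1.0, 0.5])
delta = minimal_flip(w, b, x)
print(np.sign(w @ x + b), np.sign(w @ (x + delta) + b))  # 1.0 -1.0
```

The paper inverts this objective: instead of the smallest output-changing perturbation, it seeks the largest perturbation that leaves the output invariant.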

On Estimation of Conditional Modes Using Multiple Quantile Regressions

no code implementations · 23 Dec 2017 · Hirofumi Ohta, Satoshi Hara

We then estimate the conditional mode by finding the maximum of the estimated conditional density.
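The two-step recipe above (estimate many quantiles, then take the argmax of the implied density) can be sketched numerically. Here the fitted quantile regressions are replaced by the exact logistic(3, 1.5) quantile function, an assumption for illustration; its mode is 3.

```python
import numpy as np

# From a quantile function q(tau), the density is approximately dtau/dq,
# so the mode is near the quantile interval where q grows most slowly.
taus = np.linspace(0.05, 0.95, 181)
q = 3.0 + 1.5 * np.log(taus / (1.0 - taus))   # stand-in for fitted quantiles
density = np.diff(taus) / np.diff(q)          # f(q) ~ dtau / dq
mode_estimate = 0.5 * (q[:-1] + q[1:])[density.argmax()]
print(round(mode_estimate, 2))
```

In the conditional setting, `q` would come from quantile regressions evaluated at a fixed covariate value, one fit per level in `taus`.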

Finding Alternate Features in Lasso

1 code implementation · 18 Nov 2016 · Satoshi Hara, Takanori Maehara

We propose a method for finding alternate features missing in the Lasso optimal solution.
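A naive stand-in for the idea above (the paper enumerates alternates in a principled way): given the features the Lasso kept, flag dropped features that are nearly interchangeable with a kept one, i.e. highly correlated. The data and the assumed selected set {0, 2} are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)   # feature 1 duplicates feature 0
selected = [0, 2]                                  # assume the Lasso kept these
corr = np.abs(np.corrcoef(X, rowvar=False))
alternates = {i: [j for j in range(X.shape[1])
                  if j not in selected and corr[i, j] > 0.95]
              for i in selected}
print(alternates)  # {0: [1], 2: []}
```

This captures why alternates exist at all: the Lasso arbitrarily picks one member of a correlated group and zeroes out the rest, hiding equally useful features.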

Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach

1 code implementation · 29 Jun 2016 · Satoshi Hara, Kohei Hayashi

In this study, we present a method to make a complex tree ensemble interpretable by simplifying the model.

Model Selection

Making Tree Ensembles Interpretable

no code implementations · 17 Jun 2016 · Satoshi Hara, Kohei Hayashi

Tree ensembles, such as random forest and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited.

Anomaly detection in reconstructed quantum states using a machine-learning technique

no code implementations · 20 Jan 2014 · Satoshi Hara, Takafumi Ono, Ryo Okamoto, Takashi Washio, Shigeki Takeuchi

We demonstrate that the proposed method can more accurately detect small erroneous deviations in reconstructed density matrices, which contain intrinsic fluctuations due to the limited number of samples, than a naive method of checking the trace distance from the average of the given density matrices.

Anomaly Detection · BIG-bench Machine Learning
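The naive baseline mentioned in the abstract reduces to the trace distance T(rho, sigma) = 0.5 · sum of |eigenvalues| of (rho − sigma) for Hermitian density matrices. The two single-qubit states below (a pure state and the maximally mixed state) are illustrative assumptions.

```python
import numpy as np

def trace_distance(rho, sigma):
    # For Hermitian matrices the trace norm is the sum of |eigenvalues|.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
sigma = 0.5 * np.eye(2)                    # maximally mixed state I / 2
print(trace_distance(rho, sigma))          # 0.5
```

Checking each reconstructed matrix's trace distance from the average of the given matrices is the baseline against which the paper's learned detector is compared.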
