Search Results for author: Toshiki Shibahara

Found 2 papers, 0 papers with code

Do Backdoors Assist Membership Inference Attacks?

no code implementations · 22 Mar 2023 · Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai

When an adversary provides poison samples to a machine learning model, privacy leakage such as membership inference attacks, which infer whether a sample was included in the model's training data, becomes effective because poisoning moves the target sample toward being an outlier (a baseline membership inference attack is sketched after the task tags below).

Inference Attack · Membership Inference Attack
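For context only, a minimal sketch of the loss-threshold membership inference baseline that backdoor-assisted settings like the one above build on: samples with low loss under the target model are predicted to be training members. The synthetic dataset, logistic-regression target, and median-threshold heuristic are all illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a loss-threshold membership inference attack (a standard
# baseline, not this paper's method). Assumes per-sample losses are observable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model trained only on the "member" half.
model = LogisticRegression(max_iter=1000).fit(X_in, y_in)

def per_sample_loss(model, X, y):
    # Cross-entropy of each sample's true label under the target model.
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_in = per_sample_loss(model, X_in, y_in)    # members
loss_out = per_sample_loss(model, X_out, y_out)  # non-members

# Attack: predict "member" when the loss falls below a threshold
# (the median of all observed losses, a common heuristic assumption).
tau = np.median(np.concatenate([loss_in, loss_out]))
balanced_acc = (np.mean(loss_in < tau) + np.mean(loss_out >= tau)) / 2
print(f"membership inference balanced accuracy: {balanced_acc:.3f}")
```

Outlier samples sit far from the learned decision boundary for non-members, so their in/out loss gap widens, which is why pushing a sample toward being an outlier, as the abstract describes, can strengthen this kind of attack.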

MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI

no code implementations · 19 Jul 2021 · Takayuki Miura, Satoshi Hasegawa, Toshiki Shibahara

In this method, an adversary uses the explanations to train the generative model, reducing the number of queries needed to steal the model (the idea is sketched after the task tags below).

Explainable Artificial Intelligence · Model Extraction
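A minimal sketch of the idea described above, in the style of data-free model extraction: a generator proposes synthetic queries, a clone is trained to match the victim's outputs, and the victim's gradient explanation is reused to update the generator, saving the extra queries a black-box gradient estimate would otherwise cost. The architectures, the L1 disagreement loss, and the training loop are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of explanation-aided data-free model extraction
# (illustrative assumptions throughout, not the MEGEX implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))  # black-box target
clone = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))   # attacker's copy
generator = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))

opt_c = torch.optim.Adam(clone.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    x = generator(torch.randn(64, 4))  # synthetic queries: no real data needed

    # Query the victim: it returns predictions plus a gradient explanation.
    # (For brevity the gradient of the full disagreement is taken here; in the
    # attack only the victim's share comes from the explanation, since the
    # clone's share is computable locally by the attacker.)
    x_q = x.detach().requires_grad_(True)
    v_out = victim(x_q)
    disagreement = F.l1_loss(clone(x_q), v_out)
    explanation, = torch.autograd.grad(disagreement, x_q)

    # Clone step: imitate the victim's outputs on the synthetic queries.
    opt_c.zero_grad()
    F.l1_loss(clone(x.detach()), v_out.detach()).backward()
    opt_c.step()

    # Generator step: ascend the disagreement by backpropagating the returned
    # explanation through the generator, with no query-hungry gradient estimation.
    opt_g.zero_grad()
    (-(x * explanation).sum()).backward()
    opt_g.step()
```

The generator surrogate works because the gradient of `(x * explanation).sum()` with respect to the generator's parameters equals the explanation chained through the generator's Jacobian, which is exactly the update a white-box attacker would compute.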
