no code implementations • 22 Mar 2023 • Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai
When an adversary injects poison samples into a machine learning model's training data, privacy leakage such as membership inference attacks, which infer whether a given sample was included in the model's training set, becomes more effective because the poisoning pushes the target sample toward being an outlier.
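The intuition can be illustrated with a minimal loss-threshold membership inference sketch; the model, sample, and threshold below are hypothetical placeholders, not the paper's actual setup:

```python
# Minimal loss-threshold membership inference sketch (hypothetical model,
# sample, and threshold; not the paper's method). Models fit training
# samples more closely than unseen ones, so a low loss suggests membership;
# poisoning that turns the target into an outlier widens the
# member/non-member loss gap and makes this test more accurate.
import torch
import torch.nn as nn

def membership_score(model, x, y):
    """Return a score where higher values suggest x was a training member."""
    model.eval()
    with torch.no_grad():
        loss = nn.functional.cross_entropy(model(x), y)
    return -loss.item()  # low loss => high score => likely member

model = nn.Sequential(nn.Linear(20, 10))          # placeholder victim model
x, y = torch.randn(1, 20), torch.tensor([3])      # placeholder target sample
is_member = membership_score(model, x, y) > -1.0  # threshold tuned on shadow data
print(is_member)
```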
no code implementations • 19 Jul 2021 • Takayuki Miura, Satoshi Hasegawa, Toshiki Shibahara
In this method, an adversary uses the explanations to train a generative model, reducing the number of queries needed to steal the target model.
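A heavily simplified sketch of the idea follows, assuming the victim API returns both soft labels and input gradients (the "explanation") per query; the architectures, the query interface, and the generator objective (first-order ascent on the victim's confidence via the returned gradient) are illustrative assumptions, not the paper's exact algorithm:

```python
# Sketch of explanation-aided, data-free model extraction. The returned
# input gradient stands in for backpropagation through the black-box
# victim, so the generator can be updated without extra queries.
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Linear(20, 10))                # black-box target (placeholder)
student = nn.Sequential(nn.Linear(20, 10))               # clone being trained
generator = nn.Sequential(nn.Linear(8, 20), nn.Tanh())   # synthesizes queries

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def query_victim(x):
    """One API query: soft labels plus the input gradient (explanation)."""
    x = x.detach().requires_grad_(True)
    probs = F.softmax(victim(x), dim=1)
    grad, = torch.autograd.grad(probs.max(dim=1).values.sum(), x)
    return probs.detach(), grad.detach()

for step in range(100):
    z = torch.randn(32, 8)
    x = generator(z)
    probs, grad = query_victim(x)                        # a single victim query
    # Student update: match the victim's soft labels on the synthetic query.
    loss_s = F.kl_div(F.log_softmax(student(x.detach()), dim=1), probs,
                      reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    # Generator update: reuse the explanation gradient instead of issuing
    # more queries, steering queries toward inputs the victim is sensitive to.
    opt_g.zero_grad()
    x.backward(-grad)
    opt_g.step()
```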