Search Results for author: Naoto Yanai

Found 10 papers, 5 papers with code

JABBERWOCK: A Tool for WebAssembly Dataset Generation and Its Application to Malicious Website Detection

1 code implementation · 9 Jun 2023 · Chika Komiya, Naoto Yanai, Kyosuke Yamashita, Shingo Okamura

We also conduct experimental evaluations of JABBERWOCK in terms of processing time for dataset generation, comparison of the generated samples with actual WebAssembly samples gathered from the Internet, and an application to malicious website detection.

Privacy-Preserving Taxi-Demand Prediction Using Federated Learning

no code implementations · 14 May 2023 · Yumeki Goto, Tomoya Matsumoto, Hamada Rizk, Naoto Yanai, Hirozumi Yamaguchi

Taxi-demand prediction is an important application of machine learning that enables taxi-providing facilities to optimize their operations and city planners to improve transportation infrastructure and services.

Federated Learning · Privacy Preserving

Do Backdoors Assist Membership Inference Attacks?

no code implementations · 22 Mar 2023 · Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai

When an adversary provides poison samples to a machine learning model, privacy leakage, such as membership inference attacks that infer whether a sample was included in the training of the model, becomes effective because the poisoning turns the sample into an outlier.

Inference Attack · Membership Inference Attack
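
As a rough illustration of the kind of attack discussed above, the sketch below shows a simple loss-threshold membership inference test in Python; the losses, the threshold, and the link to poisoning are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# All numbers below are hypothetical; in practice the threshold would be
# calibrated, e.g., with shadow models.
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (True) when a sample's loss is below the threshold.

    Poisoning that turns a target sample into an outlier tends to widen the
    member/non-member loss gap, which is what makes the attack more
    effective in the setting the paper analyzes.
    """
    return np.asarray(losses) < threshold

member_losses = np.array([0.02, 0.10, 0.05])      # samples seen during training
nonmember_losses = np.array([0.90, 1.20, 0.75])   # samples never seen

threshold = 0.5
print(loss_threshold_mia(member_losses, threshold))     # -> [ True  True  True]
print(loss_threshold_mia(nonmember_losses, threshold))  # -> [False False False]
```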

Membership Inference Attacks against Diffusion Models

1 code implementation · 7 Feb 2023 · Tomoya Matsumoto, Takayuki Miura, Naoto Yanai

We primarily discuss the diffusion model from two standpoints: comparison with a generative adversarial network (GAN) as a conventional model, and hyperparameters unique to the diffusion model, i.e., time steps, sampling steps, and sampling variances.

Generative Adversarial Network · Inference Attack · +1
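
The sketch below illustrates, under simplifying assumptions, how a loss-based membership score could be computed for a DDPM-style diffusion model at a chosen time step; the noise schedule, the stand-in denoiser, and the threshold are placeholders, not the paper's setup.

```python
# Minimal sketch of a loss-based membership signal for a DDPM-style model:
# score a sample by how well the noise predictor reconstructs the added
# noise at chosen time steps (lower loss -> more "member-like").
import numpy as np

rng = np.random.default_rng(0)

def ddpm_loss(x0, denoiser, alpha_bar, t, n_draws=8):
    """Average ||eps - eps_hat||^2 for sample x0 at time step t."""
    losses = []
    for _ in range(n_draws):
        eps = rng.standard_normal(x0.shape)
        x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
        losses.append(np.mean((eps - denoiser(x_t, t)) ** 2))
    return float(np.mean(losses))

# Toy linear noise schedule over T time steps (time steps, sampling steps,
# and sampling variances are the hyperparameters the paper highlights).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

dummy_denoiser = lambda x_t, t: np.zeros_like(x_t)  # placeholder network
x0 = rng.standard_normal(16)
score = ddpm_loss(x0, dummy_denoiser, alpha_bar, t=100)
is_member_guess = score < 1.0  # threshold would be calibrated in practice
```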

First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data

no code implementations · 30 Sep 2021 · Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, Ikuya Morikawa, Jun Yajima

Whereas model extraction is more challenging on tabular data due to normalization, TEMPEST no longer needs the initial samples that previous attacks require; instead, it makes use of publicly available statistics to generate query samples.

Medical Diagnosis · Model extraction
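
The sketch below illustrates the general idea of building query samples for a tabular victim model from public per-feature statistics; the feature names, the statistics, and the victim API are hypothetical placeholders, not the TEMPEST implementation.

```python
# Minimal sketch: sample synthetic tabular rows from publicly available
# per-feature statistics (assumed here to be mean/std), then label them
# with the victim model to obtain training data for a substitute model.
import numpy as np

rng = np.random.default_rng(0)

public_stats = {                       # e.g., published summary statistics
    "age":     {"mean": 45.0, "std": 12.0},
    "glucose": {"mean": 110.0, "std": 25.0},
    "bmi":     {"mean": 27.0, "std": 5.0},
}

def generate_queries(stats, n):
    """Draw each feature independently from N(mean, std)."""
    cols = list(stats)
    data = np.column_stack([
        rng.normal(stats[c]["mean"], stats[c]["std"], size=n) for c in cols
    ])
    return cols, data

def victim_predict(x):                 # stand-in for the remote prediction API
    return (x[:, 1] > 120).astype(int) # pretend labeling rule

cols, queries = generate_queries(public_stats, n=1000)
labels = victim_predict(queries)       # (queries, labels) would train the substitute
```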

Eth2Vec: Learning Contract-Wide Code Representations for Vulnerability Detection on Ethereum Smart Contracts

1 code implementation · 7 Jan 2021 · Nami Ashizawa, Naoto Yanai, Jason Paul Cruz, Shingo Okamura

Therefore, Eth2Vec can detect vulnerabilities in smart contracts by comparing the code similarity between target EVM bytecodes and the EVM bytecodes it has already learned.

BIG-bench Machine Learning · Vulnerability Detection
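
A minimal sketch of the similarity step, assuming contract-wide embedding vectors are already available: a target contract is flagged when its embedding is close to that of a known-vulnerable contract. The embeddings, threshold, and labels are illustrative, not Eth2Vec's actual model.

```python
# Minimal sketch of flagging a contract by cosine similarity between
# contract-wide embeddings (placeholders for vectors learned from EVM
# bytecode).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

known_vulnerable = {
    "reentrancy_sample": np.array([0.9, 0.1, 0.3]),
    "overflow_sample":   np.array([0.1, 0.8, 0.2]),
}

def flag_vulnerabilities(target_embedding, threshold=0.9):
    """Return the known-vulnerable contracts the target is most similar to."""
    return [name for name, emb in known_vulnerable.items()
            if cosine(target_embedding, emb) >= threshold]

target = np.array([0.88, 0.12, 0.28])  # embedding of the target contract
print(flag_vulnerabilities(target))    # -> ['reentrancy_sample']
```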

Self-Organizing Map assisted Deep Autoencoding Gaussian Mixture Model for Intrusion Detection

1 code implementation · 28 Aug 2020 · Yang Chen, Nami Ashizawa, Seanglidet Yean, Chai Kiat Yeo, Naoto Yanai

In this paper, we propose a self-organizing map-assisted deep autoencoding Gaussian mixture model (SOM-DAGMM) supplemented with well-preserved input space topology for more accurate network intrusion detection.

Network Intrusion Detection
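
A minimal sketch of the SOM-assisted input, assuming a toy self-organizing map and placeholder autoencoder features: the coordinates of a sample's best-matching unit, which preserve input-space topology, are concatenated with the features a DAGMM-style estimation network would otherwise use alone. This is not the paper's implementation.

```python
# Minimal sketch: best-matching-unit (BMU) coordinates from a toy SOM are
# appended to placeholder autoencoder features before the estimation net.
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 8, 8, 20                    # toy SOM over 20-D flow features
som_weights = rng.standard_normal((grid_h, grid_w, dim))

def bmu_coordinates(x):
    """Return normalized (row, col) of the best-matching unit for sample x."""
    dists = np.linalg.norm(som_weights - x, axis=-1)   # (grid_h, grid_w)
    r, c = np.unravel_index(np.argmin(dists), dists.shape)
    return np.array([r / (grid_h - 1), c / (grid_w - 1)])

x = rng.standard_normal(dim)              # one normalized network record
ae_features = rng.standard_normal(3)      # stand-in for [latent code, recon errors]
estimation_net_input = np.concatenate([bmu_coordinates(x), ae_features])
print(estimation_net_input.shape)         # -> (5,)
```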

Hunting for Re-Entrancy Attacks in Ethereum Smart Contracts via Static Analysis

1 code implementation · 2 Jul 2020 · Yuichiro Chinen, Naoto Yanai, Jason Paul Cruz, Shingo Okamura

Ethereum smart contracts are programs that are deployed and executed in a consensus-based blockchain managed by a peer-to-peer network.

Cryptography and Security

Model Extraction Attacks against Recurrent Neural Networks

no code implementations · 1 Feb 2020 · Tatsuya Takemura, Naoto Yanai, Toru Fujiwara

First, in the case of a classification problem such as image recognition, extraction of an RNN model from an LSTM model without final outputs is presented by utilizing outputs obtained halfway through the sequence.

Model extraction · Time Series Analysis · +1
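
A minimal sketch of that query strategy, assuming the victim exposes an output after every time step: the attacker records outputs partway through the sequence and collects (prefix, output) pairs to fit a substitute RNN. The victim recurrence and prefix positions below are placeholders, not the paper's LSTM or protocol.

```python
# Minimal sketch: gather intermediate outputs from a stand-in victim and
# turn them into (prefix, output) training pairs for a substitute model.
import numpy as np

rng = np.random.default_rng(0)

def victim_outputs(sequence):
    """Pretend API that returns the model's output after every time step."""
    state = 0.0
    outs = []
    for x in sequence:
        state = 0.5 * state + 0.1 * x        # placeholder recurrence
        outs.append(np.tanh(state))
    return outs

queries = [rng.standard_normal(10) for _ in range(100)]
pairs = []                                   # substitute-model training data
for seq in queries:
    outs = victim_outputs(seq)
    for t in (3, 5, 7):                      # outputs partway through the sequence
        pairs.append((seq[: t + 1].copy(), outs[t]))

print(len(pairs))                            # -> 300 (prefix, output) pairs
```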

MOBIUS: Model-Oblivious Binarized Neural Networks

no code implementations · 29 Nov 2018 · Hiromasa Kitai, Jason Paul Cruz, Naoto Yanai, Naohisa Nishida, Tatsumi Oba, Yuji Unagami, Tadanori Teruya, Nuttapong Attrapadung, Takahiro Matsuda, Goichiro Hanaoka

A privacy-preserving framework in which a computational resource provider receives encrypted data from a client and returns prediction results without decrypting the data, i.e., an oblivious neural network or encrypted prediction, has been studied for machine learning prediction services.

BIG-bench Machine Learning · Privacy Preserving
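
A minimal sketch of the binarized building block such a framework evaluates, in plaintext only; the secret-sharing-based oblivious evaluation that MOBIUS performs is omitted, and the weights and inputs are random placeholders.

```python
# Minimal sketch of a binarized fully connected layer (plaintext only):
# binarize inputs and weights to {-1, +1}, multiply, apply a sign activation.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map values to {-1, +1} with a sign function (0 mapped to +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def bnn_layer(x, weights):
    """Binarized layer: matrix product of binarized values, then sign."""
    return binarize(binarize(x) @ binarize(weights))

x = rng.standard_normal(8)        # client input (would be encrypted/shared)
w = rng.standard_normal((8, 4))   # provider's model weights
print(bnn_layer(x, w))            # -> vector in {-1, +1}^4
```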
