no code implementations • 12 Dec 2023 • Hiroya Kato, Kento Hasegawa, Seira Hidano, Kazuhide Fukushima
We focus on the fact that the state-of-the-art poisoning attack on GCL tends to mainly add adversarial edges to create poisoned graphs, which means that pruning edges is important to sanitize the graphs.
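The sanitization idea — adversarial edges tend to connect dissimilar nodes, so pruning them cleans the graph — can be sketched as follows. This is a generic similarity-based heuristic (in the spirit of defenses like GCN-Jaccard), not the paper's exact algorithm; the threshold and cosine measure are illustrative assumptions.

```python
import numpy as np

def prune_dissimilar_edges(edges, features, threshold=0.1):
    """Drop edges whose endpoint features have low cosine similarity.

    Heuristic sketch: poisoning attacks mostly *add* edges, and the
    added edges often join dissimilar nodes, so low-similarity edges
    are candidates for removal.
    """
    kept = []
    for u, v in edges:
        fu, fv = features[u], features[v]
        denom = np.linalg.norm(fu) * np.linalg.norm(fv)
        sim = float(fu @ fv) / denom if denom > 0 else 0.0
        if sim >= threshold:
            kept.append((u, v))
    return kept

# Toy graph: nodes 0 and 1 share features; node 2 is dissimilar,
# and edge (0, 2) mimics an adversarially injected edge.
features = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
edges = [(0, 1), (0, 2)]
print(prune_dissimilar_edges(edges, features, threshold=0.5))  # [(0, 1)]
```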
no code implementations • 1 Nov 2023 • Jung-Woo Chang, Ke Sun, Nasimeh Heydaribeni, Seira Hidano, Xinyu Zhang, Farinaz Koushanfar
Although there have been a number of adversarial attacks on ML-based wireless systems, the existing methods do not provide a comprehensive view including multi-modality of the source data, common physical layer components, and wireless domain constraints.
1 code implementation • 2 Jun 2023 • Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, Shinsaku Kiyomoto, Isao Echizen
Specifically, VoteTRANS detects adversarial text by comparing the hard labels of input text and its transformation.
no code implementations • 4 Apr 2023 • Jung-Woo Chang, Nojan Sheybani, Shehzeen Samarah Hussain, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar
Experimental results demonstrate that NetFlick can successfully deteriorate the performance of video compression frameworks in both digital and physical settings and can be further extended to attack downstream video classification networks.
no code implementations • 21 Sep 2022 • Ruisi Zhang, Seira Hidano, Farinaz Koushanfar
Our attacks faithfully reconstruct private texts included in training data with access to the target model.
no code implementations • 18 Mar 2022 • Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar
In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems.
no code implementations • 21 Feb 2022 • Seira Hidano, Takao Murakami
However, this algorithm does not protect edges (friendships) in a social graph and hence cannot protect user privacy in unattributed graphs.
1 code implementation • 12 Oct 2021 • Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, Shinsaku Kiyomoto
In terms of misclassified texts, a classifier must handle both texts it simply predicts incorrectly and adversarial texts, which are generated to fool the classifier (called the victim).
1 code implementation • NAACL 2021 • Hoang-Quoc Nguyen-Son, Tran Thao, Seira Hidano, Ishita Gupta, Shinsaku Kiyomoto
However, a round-trip translated text differs significantly from the original text or from a text produced by an unrelated translator.
no code implementations • 30 Nov 2020 • Seira Hidano, Takao Murakami, Yusuke Kawamoto
Transfer learning has been widely studied and has gained increasing popularity as a way to improve the accuracy of machine learning models by transferring knowledge acquired in a different training task.
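Transfer learning in miniature can be sketched as reusing a feature map learned on a source task and fitting only a new head on target data. Everything below is an illustrative assumption (a fixed random projection stands in for pretrained weights), not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
W_pretrained = rng.normal(size=(4, 3))      # "knowledge" from source training

def extract(X):
    """Frozen feature extractor transferred from the source task."""
    return np.maximum(X @ W_pretrained, 0)  # ReLU features

# Target task: small labeled dataset, only the head is trained.
X_target = rng.normal(size=(20, 4))
y_target = (X_target[:, 0] > 0).astype(float)

F = extract(X_target)
head, *_ = np.linalg.lstsq(F, y_target, rcond=None)  # fit head by least squares
preds = (F @ head > 0.5).astype(float)
print("train accuracy:", (preds == y_target).mean())
```

Freezing the extractor is what makes the transferred knowledge reusable, and it is also the surface that the paper's privacy analysis concerns: the source model's parameters leak into the target model.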
no code implementations • 19 Dec 2019 • Hoang-Quoc Nguyen-Son, Tran Phuong Thao, Seira Hidano, Shinsaku Kiyomoto
Attackers create adversarial text to deceive both human perception and current AI systems for malicious purposes such as spam product reviews and fake political posts.
no code implementations • WS 2019 • Hoang-Quoc Nguyen-Son, Tran Phuong Thao, Seira Hidano, Shinsaku Kiyomoto
Existing methods detect machine-translated text using only the text's intrinsic content, but they are unsuitable for distinguishing machine-translated texts from human-written texts with the same meaning.
no code implementations • 24 Apr 2019 • Hoang-Quoc Nguyen-Son, Tran Phuong Thao, Seira Hidano, Shinsaku Kiyomoto
We have developed a method that matches similar words throughout a paragraph and estimates paragraph-level coherence, which can identify machine-translated text.
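A toy version of the word-matching idea: score a paragraph by how often adjacent sentences share a word. This is a crude stand-in for the authors' coherence estimation (which uses richer similarity than exact overlap); the threshold at which a low score indicates machine translation is an assumption.

```python
def coherence_score(paragraph):
    """Fraction of adjacent sentence pairs sharing at least one word.

    Sketch of paragraph-level coherence: human-written paragraphs tend
    to repeat and link content words across sentences, while
    machine-translated ones are expected to score lower.
    """
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    if len(sentences) < 2:
        return 1.0
    words = [set(w.lower() for w in s.split()) for s in sentences]
    shared = sum(1 for a, b in zip(words, words[1:]) if a & b)
    return shared / (len(sentences) - 1)

print(coherence_score("The cat sat. The cat ran. Dogs bark."))  # 0.5
```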