no code implementations • 27 Sep 2023 • Lukas Strack, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, Isao Echizen
To address this problem, we are the first to investigate defense strategies against adversarial patch attacks on infrared detection, especially human detection.
no code implementations • 4 Sep 2023 • AprilPyone MaungMaung, Isao Echizen, Hitoshi Kiya
In this paper, we propose a new key-based defense focusing on both efficiency and robustness.
1 code implementation • 2 Jun 2023 • Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, Shinsaku Kiyomoto, Isao Echizen
Specifically, VoteTRANS detects adversarial text by comparing the hard labels of input text and its transformation.
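A minimal sketch of this hard-label voting idea, for illustration only: `classify_hard_label` and `transform_text` are hypothetical stand-ins, not the authors' actual API, and `transform_text` is assumed to apply a small random perturbation (e.g., a synonym substitution).

```python
from collections import Counter

def detect_adversarial(text, classify_hard_label, transform_text, n_transforms=10):
    """Flag `text` as adversarial if transformed copies mostly disagree
    with the original hard label."""
    original_label = classify_hard_label(text)
    votes = Counter(
        classify_hard_label(transform_text(text)) for _ in range(n_transforms)
    )
    majority_label, _ = votes.most_common(1)[0]
    # Adversarial text tends to flip back under perturbation, so a mismatch
    # between the majority vote and the original label is suspicious.
    return majority_label != original_label
```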
no code implementations • 21 Dec 2022 • Harkeerat Kaur, Rishabh Shukla, Isao Echizen, Pritee Khanna
These proxy biometrics can be generated from original ones only with the help of a user-specific key.
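As a rough illustration of how a user-specific key can yield a revocable proxy template, here is a generic cancelable-biometrics construction (key-seeded random projection); it is not the exact scheme proposed in the paper.

```python
import numpy as np

def proxy_template(features: np.ndarray, user_key: int, out_dim: int = 128) -> np.ndarray:
    rng = np.random.default_rng(user_key)         # the user key seeds the projection
    projection = rng.standard_normal((out_dim, features.shape[0]))
    proxy = projection @ features                 # hard to invert without the key
    return proxy / np.linalg.norm(proxy)

# If a proxy is compromised, issuing a new key yields a fresh, unlinkable template.
```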
no code implementations • 7 Dec 2022 • Yuyang Sun, Zhiyong Zhang, Isao Echizen, Huy H. Nguyen, Changzhen Qiu, Lu Sun
We introduce a method for detecting manipulated videos that is based on the trajectory of the facial region displacement.
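A hedged sketch of the displacement-trajectory idea: track the facial bounding-box center across frames and inspect the resulting motion curve, whose irregularities can expose frame-wise manipulation. The face detector here is an assumed external component, not part of the paper's method.

```python
import numpy as np

def face_center_trajectory(frames, detect_face):
    """detect_face(frame) -> (x, y, w, h) of the face bounding box."""
    centers = []
    for frame in frames:
        x, y, w, h = detect_face(frame)
        centers.append((x + w / 2.0, y + h / 2.0))
    centers = np.asarray(centers)
    displacement = np.diff(centers, axis=0)       # per-frame displacement vectors
    return centers, displacement
```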
no code implementations • 18 Oct 2022 • Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, Isao Echizen
The results raise the alarm about the robustness of such systems and suggest that master vein attacks should be treated as a serious security threat.
no code implementations • 19 Sep 2022 • Ryota Iijima, Miki Tanaka, Isao Echizen, Hitoshi Kiya
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs).
no code implementations • 7 Sep 2022 • Miki Tanaka, Isao Echizen, Hitoshi Kiya
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs).
no code implementations • 28 Jun 2022 • Trung-Nghia Le, Ta Gu, Huy H. Nguyen, Isao Echizen
We have investigated a new application of adversarial examples, namely location privacy protection against landmark recognition systems.
1 code implementation • NAACL 2022 • Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen
We present EASE, a novel method for learning sentence embeddings via contrastive learning between sentences and their related entities.
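A minimal sketch of a sentence-entity contrastive objective in the spirit of EASE: pull each sentence embedding toward its related entity embedding and push it away from the other entities in the batch (in-batch negatives). The embedding models themselves are assumed to exist and are not shown; this is not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def sentence_entity_contrastive_loss(sent_emb: torch.Tensor,
                                     ent_emb: torch.Tensor,
                                     temperature: float = 0.05) -> torch.Tensor:
    # sent_emb, ent_emb: (batch, dim); row i of ent_emb is the entity for sentence i.
    sent_emb = F.normalize(sent_emb, dim=-1)
    ent_emb = F.normalize(ent_emb, dim=-1)
    logits = sent_emb @ ent_emb.t() / temperature     # cosine similarities
    targets = torch.arange(sent_emb.size(0), device=sent_emb.device)
    return F.cross_entropy(logits, targets)
```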
no code implementations • 13 Feb 2022 • Trung-Nghia Le, Huy H Nguyen, Junichi Yamagishi, Isao Echizen
Recent advances in deep learning have led to substantial improvements in deepfake generation, resulting in fake media with a more realistic appearance.
no code implementations • 5 Feb 2022 • Ching-Chun Chang, Xu Wang, Sisheng Chen, Hitoshi Kiya, Isao Echizen
The core strength of neural networks is the ability to render accurate predictions for a bewildering variety of data.
no code implementations • 29 Dec 2021 • Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H. Nguyen, Isao Echizen
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial transferability: AEs generated for the source model can mislead another (target) model's predictions.
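To make the transferability setting concrete, a hedged sketch: craft an FGSM example on a source model and check whether it also fools a separate target model. The models, inputs, and labels are assumed to be provided; this is a generic illustration, not the paper's attack.

```python
import torch
import torch.nn.functional as F

def fgsm_transfer(source_model, target_model, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(source_model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    # Transferability: the AE was crafted on source_model but may fool target_model.
    fooled_target = target_model(x_adv).argmax(dim=1) != y
    return x_adv, fooled_target
```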
1 code implementation • 12 Dec 2021 • Minh-Quan Le, Trung-Nghia Le, Tam V. Nguyen, Isao Echizen, Minh-Triet Tran
The dataset is available at https://doi.org/10.5281/zenodo.8208877.
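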
no code implementations • 25 Nov 2021 • Khanh-Duy Nguyen, Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, Isao Echizen
However, there is still a lack of comprehensive research on both methodologies and datasets.
no code implementations • 15 Oct 2021 • Sosuke Nishikawa, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen
We present a multilingual bag-of-entities model that effectively boosts the performance of zero-shot cross-lingual text classification by extending a multilingual pre-trained language model (e.g., M-BERT).
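An illustrative sketch of combining a bag-of-entities vector with a multilingual encoder representation for classification. The entity linker, entity vocabulary, and HuggingFace-style encoder interface are assumptions; this is not the authors' exact model.

```python
import torch
import torch.nn as nn

class BagOfEntitiesClassifier(nn.Module):
    def __init__(self, encoder, entity_vocab_size, num_classes, hidden=768):
        super().__init__()
        self.encoder = encoder                       # e.g., a pre-trained M-BERT
        self.entity_emb = nn.EmbeddingBag(entity_vocab_size, hidden, mode="mean")
        self.classifier = nn.Linear(hidden * 2, num_classes)

    def forward(self, input_ids, attention_mask, entity_ids):
        # entity_ids: (batch, num_entities) indices of entities linked in the text;
        # entity features are largely language-independent, which helps zero-shot transfer.
        text_vec = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        entity_vec = self.entity_emb(entity_ids)
        return self.classifier(torch.cat([text_vec, entity_vec], dim=-1))
```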
no code implementations • 8 Sep 2021 • Huy H. Nguyen, Sébastien Marcel, Junichi Yamagishi, Isao Echizen
Previous work has proven the existence of master faces, i.e., faces that match multiple enrolled templates in face recognition systems, and their existence extends the ability of presentation attacks.
no code implementations • ICCV 2021 • Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
To promote these new tasks, we have created OpenForensics, the first large-scale, highly challenging dataset with rich face-wise annotations designed explicitly for face forgery detection and segmentation.
no code implementations • 13 Jun 2021 • Ching-Chun Chang, Xu Wang, Sisheng Chen, Isao Echizen, Victor Sanchez, Chang-Tsun Li
Given that reversibility is governed independently by the coding module, we narrow our focus to the incorporation of neural networks into the analytics module, which serves the purpose of predicting pixel intensities and plays a pivotal role in determining capacity and imperceptibility.
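A hedged sketch of the analytics idea: a small convolutional predictor estimates each pixel from its (masked) neighbourhood, and data are then embedded reversibly in the prediction errors. The architecture below is purely illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class PixelPredictor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, context):
        # context: cover image with the target pixels masked out;
        # sharper predictions -> smaller errors -> higher capacity, less distortion.
        return self.net(context)
```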
1 code implementation • 17 Apr 2021 • Marc Treu, Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
It generates adversarial textures learned from fashion style images and then overlays them on the clothing regions in the original image to make all persons in the image invisible to person segmentation networks.
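A minimal sketch of the overlay step: paste a learnable adversarial texture onto the clothing region (given as a binary mask) and optimize it to suppress the person class in a segmentation network. The names and the segmentation-model output shape are illustrative assumptions, not the paper's implementation.

```python
import torch

def overlay_texture(image, texture, clothing_mask):
    # image: (3, H, W) in [0, 1]; texture: (3, H, W) learnable; mask: (1, H, W) in {0, 1}
    return image * (1 - clothing_mask) + texture.clamp(0, 1) * clothing_mask

def attack_step(seg_model, image, texture, clothing_mask, person_class, lr=0.01):
    texture = texture.detach().requires_grad_(True)
    adv = overlay_texture(image, texture, clothing_mask)
    person_score = seg_model(adv.unsqueeze(0))[0, person_class].mean()
    person_score.backward()                       # descend on the person logit
    return (texture - lr * texture.grad.sign()).detach()
```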
no code implementations • EMNLP (NLP+CSS) 2020 • Saurabh Gupta, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Recent advancements in natural language generation have raised serious concerns.
no code implementations • 15 Jun 2020 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen, Sébastien Marcel
In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases.
no code implementations • 11 Dec 2019 • Huy H. Nguyen, Minoru Kuribayashi, Junichi Yamagishi, Isao Echizen
Deep neural networks (DNNs) have achieved excellent performance on several tasks and have been widely applied in both academia and industry.
no code implementations • 2 Nov 2019 • Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
We experimentally demonstrated the existence of individual adversarial perturbations (IAPs) and universal adversarial perturbations (UAPs) that can lead a well-performing FFM to misbehave.
no code implementations • 2 Nov 2019 • Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
The rapid development of deep learning techniques has created new challenges in identifying the origin of digital images because generative adversarial networks and variational autoencoders can create plausible digital images whose contents are not present in natural scenes.
2 code implementations • 28 Oct 2019 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
In this paper, we introduce a capsule network that can detect various kinds of attacks, from presentation attacks using printed images and replayed videos to attacks using fake videos created using deep learning.
no code implementations • 22 Jul 2019 • David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Advanced neural language models (NLMs) are widely used in sequence generation tasks because they are able to produce fluent and meaningful sentences.
1 code implementation • 17 Jun 2019 • Huy H. Nguyen, Fuming Fang, Junichi Yamagishi, Isao Echizen
The output of one branch of the decoder is used for segmenting the manipulated regions while that of the other branch is used for reconstructing the input, which helps improve overall performance.
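An illustrative sketch of the shared-encoder, two-branch decoder idea: one head predicts a manipulation mask while the other reconstructs the input as an auxiliary task. Layer sizes here are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(            # segments manipulated regions
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.rec_head = nn.Sequential(            # reconstructs the input
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_head(z), self.rec_head(z)
```

Training would combine a segmentation loss on the first output with a reconstruction loss on the second, the latter acting as the performance-boosting auxiliary task described above.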
no code implementations • 30 May 2019 • Fuming Fang, Xin Wang, Junichi Yamagishi, Isao Echizen, Massimiliano Todisco, Nicholas Evans, Jean-Francois Bonastre
One solution to mitigate these concerns involves concealing speaker identities before sharing speech data.
no code implementations • PACLIC 2018 • Hoang-Quoc Nguyen-Son, Ngoc-Dung T. Tieu, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
We have developed a method for extracting the coherence features from a paragraph by matching similar words in its sentences.
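A hedged sketch of the matching idea: for consecutive sentences, count word pairs whose embeddings are similar and use the counts and average similarity as coherence features. The tokenization and the embedding lookup are assumptions, not the paper's exact feature set.

```python
import numpy as np

def coherence_features(sentences, embed, sim_threshold=0.7):
    """sentences: list of token lists; embed: word -> unit-normalized np.ndarray."""
    feats = []
    for s1, s2 in zip(sentences, sentences[1:]):
        sims = [float(embed(w1) @ embed(w2)) for w1 in s1 for w2 in s2]
        matches = [s for s in sims if s >= sim_threshold]
        feats.append((len(matches), float(np.mean(sims)) if sims else 0.0))
    return feats  # one (match_count, mean_similarity) pair per sentence boundary
```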
no code implementations • 29 Oct 2018 • Fuming Fang, Xin Wang, Junichi Yamagishi, Isao Echizen
Transforming the facial and acoustic features together makes it possible for the converted voice and facial expressions to be highly correlated and for the generated target speaker to appear and sound natural.
3 code implementations • 26 Oct 2018 • Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Recent advances in media generation techniques have made it easier for attackers to create forged images and videos.
6 code implementations • 4 Sep 2018 • Darius Afchar, Vincent Nozick, Junichi Yamagishi, Isao Echizen
This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face.
no code implementations • 12 Apr 2018 • Huy H. Nguyen, Ngoc-Dung T. Tieu, Hoang-Quoc Nguyen-Son, Junichi Yamagishi, Isao Echizen
Making computer-generated (CG) images more difficult to detect is an interesting problem in computer graphics and security.
no code implementations • 2 Apr 2018 • Fuming Fang, Junichi Yamagishi, Isao Echizen, Jaime Lorenzo-Trueba
Although voice conversion (VC) algorithms have achieved remarkable success along with the development of machine learning, superior performance is still difficult to achieve when using nonparallel data.
no code implementations • 2 Mar 2018 • Jaime Lorenzo-Trueba, Fuming Fang, Xin Wang, Isao Echizen, Junichi Yamagishi, Tomi Kinnunen
Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming increasingly capable, and error rates close to zero are being reached for the ASVspoof2015 database.