no code implementations • 21 Mar 2023 • Binh M. Le, Shahroz Tariq, Simon S. Woo
First, our work carefully analyzes and characterizes these two schools of approaches, both theoretically and empirically, to demonstrate how each approach impacts the robust learning of a classifier.
no code implementations • 29 Nov 2022 • Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
We assess the effectiveness of the proposed attacks against two deep learning model architectures coupled with four interpretation models representing different categories of interpretation methods.
no code implementations • 4 Oct 2022 • Fahim Faisal Niloy, Kishor Kumar Bhaumik, Simon S. Woo
A key assumption underlying forged region localization is that a difference in feature distribution remains between the untampered and manipulated regions of each forged image sample, irrespective of the forgery type.
1 code implementation • 24 Aug 2022 • Shahroz Tariq, Binh M. Le, Simon S. Woo
To the best of our knowledge, we demonstrate, for the first time, the vulnerability of anomaly detection systems to adversarial attacks.
no code implementations • 23 Feb 2022 • Donggeun Ko, Sangjun Lee, Jinyong Park, Saebyeol Shin, Donghee Hong, Simon S. Woo
However, none of the proposed deepfake detection methods has assessed performance on deepfakes with face masks, which became widespread after the outbreak of COVID-19.
no code implementations • 19 Jan 2022 • Chingis Oinar, Binh M. Le, Simon S. Woo
However, the majority of the proposed methods do not consider the class imbalance issue, which is a major challenge in practice for developing deep face recognition models.
1 code implementation • NeurIPS 2021 Track Datasets and Benchmarks 2022 • Jaeju An, Jeongho Kim, Hanbeen Lee, Jinbeom Kim, Junhyung Kang, Saebyeol Shin, Minha Kim, Donghee Hong, Simon S. Woo
Accordingly, detection of these anomalous events is of paramount importance for a number of applications, including but not limited to CCTV surveillance, security, and health care.
Ranked #1 on Anomaly Detection In Surveillance Videos on VFP290K
no code implementations • 22 Dec 2021 • Young Oh Bang, Simon S. Woo
Our DA-FDFtNet integrates the pre-trained model with Fine-Tune Transformer, MBblockV3, and a channel attention module to improve the performance and robustness across different types of fake images.
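For illustration, a channel attention module of this kind can be sketched as a squeeze-and-excitation-style block; the module below is a generic, hypothetical example and not necessarily the exact layer used in DA-FDFtNet:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic squeeze-and-excitation-style channel attention (illustrative sketch,
    not the paper's exact module)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(                       # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                             # reweight feature channels

# Usage (hypothetical shapes): attn = ChannelAttention(128); y = attn(torch.randn(2, 128, 14, 14))
```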
1 code implementation • 15 Dec 2021 • Binh M. Le, Simon S. Woo
The rapid progression of Generative Adversarial Networks (GANs) has raised a concern of their misuse for malicious purposes, especially in creating fake face images.
no code implementations • 7 Dec 2021 • Binh M. Le, Simon S. Woo
In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation that effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation that creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently.
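As a rough sketch of the multi-view idea, one can collapse the teacher's and student's feature tensors along different dimensions (channel, height, width) into attention vectors and penalize their difference; the helpers below are hypothetical and only approximate the described distillation, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def view_attention(feat: torch.Tensor, dim: int) -> torch.Tensor:
    """Collapse a B x C x H x W feature map along `dim` (1, 2, or 3) into an
    attention vector by summing squared activations, then L2-normalizing."""
    other_dims = [d for d in (1, 2, 3) if d != dim]
    att = feat.pow(2).sum(dim=other_dims)          # shape: B x size(dim)
    return F.normalize(att, p=2, dim=1)

def multi_view_distill_loss(teacher_feat, student_feat, views=(1, 2, 3)):
    """Match teacher and student attention vectors over several tensor views
    (channel, height, width); a sketch of the idea, not the paper's loss."""
    return sum(
        F.l1_loss(view_attention(student_feat, v),
                  view_attention(teacher_feat.detach(), v))
        for v in views
    )
```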
no code implementations • 29 Sep 2021 • Shahroz Tariq, Simon S. Woo
To the best of our knowledge, we are the first to demonstrate the vulnerabilities of anomaly and intrusion detection systems to adversarial attacks.
no code implementations • 7 Sep 2021 • Hasam Khalid, Minha Kim, Shahroz Tariq, Simon S. Woo
On the other hand, to develop a good deepfake detector that can cope with the recent advancements in deepfake generation, we need a detector that can detect deepfakes of multiple modalities, i.e., video and audio.
2 code implementations • 11 Aug 2021 • Hasam Khalid, Shahroz Tariq, Minha Kim, Simon S. Woo
We generate this dataset using the most popular deepfake generation methods.
2 code implementations • 6 Jul 2021 • Minha Kim, Shahroz Tariq, Simon S. Woo
Over the last few decades, artificial intelligence research has made tremendous strides, but it still heavily relies on fixed datasets in stationary environments.
no code implementations • 28 May 2021 • Minha Kim, Shahroz Tariq, Simon S. Woo
We use FReTAL to perform domain adaptation tasks on new deepfake datasets while minimizing catastrophic forgetting.
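A minimal sketch of this kind of forgetting-aware adaptation, assuming a generic logit-level knowledge distillation term toward the frozen source model (FReTAL's actual objective operates on representations and may differ):

```python
import torch
import torch.nn.functional as F

def adaptation_loss(student_logits, teacher_logits, labels,
                    temperature: float = 2.0, alpha: float = 0.5):
    """Task loss on the new deepfake domain plus a distillation term toward the
    frozen source model to limit catastrophic forgetting. A generic KD
    formulation, not necessarily FReTAL's exact objective."""
    task = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * task + (1.0 - alpha) * kd
```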
1 code implementation • 13 May 2021 • Sangyup Lee, Shahroz Tariq, Junyaup Kim, Simon S. Woo
This motivates us to develop a generalized model to detect different types of deepfakes.
1 code implementation • 1 May 2021 • Shahroz Tariq, Sangyup Lee, Simon S. Woo
Beyond detecting a single type of DF from benchmark deepfake datasets, we focus on developing a generalized approach to detect multiple types of DFs, including deepfakes from unknown generation methods such as DeepFake-in-the-Wild (DFW) videos.
no code implementations • 1 Mar 2021 • Shahroz Tariq, Sowon Jeon, Simon S. Woo
Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively.
1 code implementation • 16 Sep 2020 • Shahroz Tariq, Sangyup Lee, Simon S. Woo
Also, they do not take advantage of the temporal information of the video.
1 code implementation • ICML 2020 • Hyeonseong Jeon, Youngoh Bang, Junyaup Kim, Simon S. Woo
First, we train the teacher model on the source dataset and use it as a starting point for learning the target dataset.
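A minimal sketch of this first step, assuming a PyTorch teacher model: the student is simply initialized from the source-trained teacher's weights, which are then frozen (names are illustrative; the full method involves additional components not shown here):

```python
import copy
import torch

def build_student_from_teacher(teacher: torch.nn.Module) -> torch.nn.Module:
    """Initialize the student from the teacher's source-trained weights and
    freeze the teacher (sketch of the 'teacher as starting point' step only)."""
    student = copy.deepcopy(teacher)
    for p in teacher.parameters():       # teacher stays fixed on the source task
        p.requires_grad_(False)
    return student

# The student is then fine-tuned on the target dataset, e.g.:
# optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
```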
2 code implementations • 5 Jan 2020 • Hyeonseong Jeon, Youngoh Bang, Simon S. Woo
Creating fake images and videos such as "Deepfake" has become much easier these days due to the advancement in Generative Adversarial Networks (GANs).