Impact of Benign Modifications on Discriminative Performance of Deepfake Detectors

14 Nov 2021 · Yuhang Lu, Evgeniy Upenik, Touradj Ebrahimi

Deepfakes are becoming increasingly popular both in good-faith applications such as entertainment and in maliciously intended manipulations such as image and video forgery. Primarily motivated by the latter, a large number of deepfake detectors have recently been proposed to identify such content. While the performance of such detectors still needs further improvement, they are often assessed in simple, if not trivial, scenarios. In particular, the impact of benign processing operations such as transcoding, denoising, resizing and enhancement is not sufficiently studied. This paper proposes a more rigorous and systematic framework to assess the performance of deepfake detectors in more realistic situations. It quantitatively measures how and to what extent each benign processing approach impacts a state-of-the-art deepfake detection method. By illustrating the framework on a popular deepfake detector, our benchmark offers a way to assess the robustness of detectors and provides valuable insights for designing more efficient deepfake detectors.
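To make the evaluation idea concrete, the sketch below shows one way such a benchmark loop could look: a set of benign processing operations (JPEG transcoding, downscaling, smoothing as a stand-in for denoising) is applied to each test image before it is passed to a detector, and accuracy is reported per operation. This is a minimal illustration, not the paper's actual code; the function `detect_fake_probability`, the specific operations, and the dictionary `BENIGN_OPS` are assumptions introduced here for clarity.

```python
import io
from PIL import Image, ImageFilter

def detect_fake_probability(image: Image.Image) -> float:
    """Hypothetical stand-in for a pretrained deepfake detector.

    The paper's detector is not reproduced here; plug in any model that
    maps an RGB image to a probability of the image being fake.
    """
    raise NotImplementedError("replace with a real detector")

def jpeg_recompress(image: Image.Image, quality: int = 50) -> Image.Image:
    """Transcode the image through JPEG at the given quality."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

def downscale(image: Image.Image, factor: float = 0.5) -> Image.Image:
    """Resize down and back up to simulate resolution loss."""
    w, h = image.size
    small = image.resize((max(1, int(w * factor)), max(1, int(h * factor))))
    return small.resize((w, h))

def smooth(image: Image.Image) -> Image.Image:
    """Apply a Gaussian blur as a simple stand-in for denoising."""
    return image.filter(ImageFilter.GaussianBlur(radius=1))

# Benign processing operations applied to inputs before detection.
BENIGN_OPS = {
    "none": lambda img: img,
    "jpeg_q50": jpeg_recompress,
    "resize_0.5": downscale,
    "smooth": smooth,
}

def evaluate(images_with_labels):
    """Measure detection accuracy under each benign processing operation.

    `images_with_labels` is an iterable of (PIL image, is_fake) pairs.
    Returns a dict mapping operation name to accuracy.
    """
    results = {}
    for name, op in BENIGN_OPS.items():
        correct = 0
        total = 0
        for image, is_fake in images_with_labels:
            score = detect_fake_probability(op(image))
            correct += int((score >= 0.5) == is_fake)
            total += 1
        results[name] = correct / max(total, 1)
    return results
```

Comparing the accuracy under "none" against the other entries gives a per-operation measure of how much each benign modification degrades the detector, which is the kind of quantitative robustness assessment the paper advocates.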
