59 papers with code • 4 benchmarks • 12 datasets
DeepFakes are videos, often obscene, in which a face has been swapped with someone else's using neural networks. Because DeepFakes are a public concern, it is important to develop methods to detect them.
Description source: DeepFakes: a New Threat to Face Recognition? Assessment and Detection
In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size.
This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face.
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information.
Free access to large-scale public databases, together with the fast progress of deep learning techniques, in particular Generative Adversarial Networks, has led to the generation of very realistic fake content, with corresponding implications for society in this era of fake news.
In this paper, we tackle the problem of face manipulation detection in video sequences targeting modern facial manipulation techniques.
Traditionally, Convolutional Neural Networks (CNNs) have been used to perform video deepfake detection, with the best results obtained using methods based on EfficientNet B7.