[en] This paper introduces UNTAG, a novel framework for unsupervised type-agnostic deepfake detection. Existing methods are generally trained in a supervised manner at the classification level and focus on detecting at most two types of forgeries, which limits their generalization across different deepfake types. To address this, we reformulate deepfake detection as a one-class classification problem supported by a self-supervision mechanism. Our intuition is that by estimating the distribution of real data in a discriminative feature space, deepfakes can be detected as outliers regardless of their type. UNTAG involves two sequential steps. First, deep representations are learned through a self-supervised pretext task focusing on manipulated regions. Second, a one-class classifier fitted on authentic image embeddings is used to detect deepfakes. Results reported on several datasets show the effectiveness of UNTAG and the relevance of the proposed paradigm. The code is publicly available.
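To make the two-step formulation concrete, here is a minimal sketch, assuming a generic pretrained backbone as a stand-in for the self-supervised feature extractor and scikit-learn's OneClassSVM as the one-class classifier; the paper's actual pretext task, architecture, classifier choice, and data loading are not specified in this record and may differ.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import OneClassSVM

# Hypothetical feature extractor standing in for the backbone trained with
# UNTAG's self-supervised pretext task (which focuses on manipulated regions).
backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()  # use penultimate features as embeddings
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> np.ndarray:
    """Map a batch of (N, 3, H, W) images to embedding vectors."""
    return backbone(images).cpu().numpy()

# Step 1 (assumed): embeddings of authentic images in the learned feature space.
real_images = torch.randn(64, 3, 224, 224)   # placeholder for real face crops
real_embeddings = embed(real_images)

# Step 2: fit a one-class classifier on authentic embeddings only.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(real_embeddings)

# At test time, outliers (prediction -1) are flagged as potential deepfakes,
# regardless of the manipulation type that produced them.
test_images = torch.randn(8, 3, 224, 224)        # placeholder test images
predictions = ocsvm.predict(embed(test_images))  # +1 = real, -1 = deepfake
```

The design choice illustrated here is that only real data is needed to fit the detector; forged samples never enter training, which is what makes the approach type-agnostic.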
Research center:
- Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence
Disciplines:
Computer science
Author, co-author:
MEJRI, Nesryne ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > CVI2
GHORBEL, Enjie ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors:
no
Document language:
English
Title:
UNTAG: Learning Generic Features for Unsupervised Type-Agnostic Deepfake Detection
Publication date:
8 June 2023
Journal title:
IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings
ISSN:
1520-6149
Publisher:
IEEE - Institute of Electrical and Electronics Engineers, Greece