Keywords :
Deepfakes; Multi-label Image Classification; Deep Learning; Machine Learning
Abstract :
[en] In this paper, we investigate the suitability of current multi-label classification approaches for deepfake detection. With the recent advances in generative modeling, new deepfake detection methods have been proposed. Nevertheless, they mostly formulate this task as a binary classification problem, which limits their explainability. Indeed, a forged image may result from multiple stacked manipulations with different properties.
For a better interpretability of the results, recognizing the nature of these stacked manipulations is highly relevant. For that reason, we propose to model deepfake detection as a multi-label classification task, where each label corresponds to a specific kind of manipulation. In this context, state-of-the-art multi-label image classification methods are considered. Extensive experiments are performed to assess their suitability for the practical use case of deepfake detection.
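As a rough illustration of the multi-label formulation described in the abstract, the sketch below pairs an off-the-shelf image backbone with one sigmoid output per manipulation type and a per-label binary cross-entropy loss. The manipulation label names, the ResNet-50 backbone, and the use of PyTorch's MultiLabelSoftMarginLoss are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# Minimal sketch (illustrative only): deepfake detection as multi-label
# classification, where each output corresponds to one manipulation type.
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical manipulation labels; the actual label set depends on the dataset.
MANIPULATIONS = ["identity_swap", "expression_swap", "attribute_edit", "fully_synthetic"]

class MultiLabelDeepfakeClassifier(nn.Module):
    def __init__(self, num_labels: int = len(MANIPULATIONS)):
        super().__init__()
        backbone = models.resnet50(weights=None)  # any image backbone works here
        backbone.fc = nn.Linear(backbone.fc.in_features, num_labels)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Raw logits, one per manipulation type; apply sigmoid at inference time.
        return self.backbone(x)

model = MultiLabelDeepfakeClassifier()
criterion = nn.MultiLabelSoftMarginLoss()  # sigmoid + binary cross-entropy per label

images = torch.randn(8, 3, 224, 224)                            # dummy image batch
targets = torch.randint(0, 2, (8, len(MANIPULATIONS))).float()  # multi-hot labels

logits = model(images)
loss = criterion(logits, targets)
loss.backward()

# At inference, each manipulation type is detected independently of the others.
predicted = (torch.sigmoid(logits) > 0.5).int()
```

In contrast to the usual binary real/fake formulation, each manipulation is predicted independently, so an image produced by several stacked manipulations can be assigned several labels at once.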
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision, Imaging & Machine Intelligence
Disciplines :
Computer science
Author, co-author :
SINGH, Inder Pal ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
MEJRI, Nesryne ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > CVI2
NGUYEN, Van Dat ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
GHORBEL, Enjie
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors :
no
Language :
English
Title :
Multi-label Deepfake Classification
Publication date :
27 September 2023
Event name :
The IEEE 25th International Workshop on Multimedia Signal Processing (MMSP)
Event organizer :
Multimedia Signal Processing Technical Committee of the IEEE Signal Processing Society (SPS)
Funders :
FNR - Fonds National de la Recherche; Post Luxembourg
Commentary :
This work is supported by the Luxembourg National Research Fund (FNR), under the BRIDGES2021/IS/16353350/FaKeDeTeR and UNFAKE (ref. 16763798) projects, and by Post Luxembourg.