Mejri, Nesryne
in IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings (2023)

This paper introduces a novel framework for unsupervised type-agnostic deepfake detection called UNTAG. Existing methods are generally trained in a supervised manner at the classification level, focusing on detecting at most two types of forgeries, thus limiting their generalization capability across different deepfake types. To address this, we reformulate deepfake detection as a one-class classification problem supported by a self-supervision mechanism. Our intuition is that by estimating the distribution of real data in a discriminative feature space, deepfakes can be detected as outliers regardless of their type. UNTAG involves two sequential steps. First, deep representations are learned based on a self-supervised pretext task focusing on manipulated regions. Second, a one-class classifier fitted on authentic image embeddings is used to detect deepfakes. The results reported on several datasets show the effectiveness of UNTAG and the relevance of the proposed new paradigm. The code is publicly available.

Mejri, Nesryne
in IEEE Workshop on Multimedia Signal Processing (2021)

In the past years, RGB-based deepfake detection has shown notable progress thanks to the development of effective deep neural networks.
However, the performance of deepfake detectors remains primarily dependent on the quality of the forged content and the level of artifacts introduced by the forgery method. To detect these artifacts, it is often necessary to separate and analyze the frequency components of an image. In this context, we propose to utilize the high-frequency components of color images by introducing an end-to-end trainable module that (a) extracts features from high-frequency components and (b) fuses them with the features of the RGB input. The module not only exploits the high-frequency anomalies present in manipulated images but can also be used with most RGB-based deepfake detectors. Experimental results show that the proposed approach boosts the performance of state-of-the-art networks, such as XceptionNet and EfficientNet, on a challenging deepfake dataset.

Mejri, Nesryne
Bachelor/master dissertation (2021)

Over the last years, very deceitful deepfakes applied to human visuals have appeared on the Internet. Given their impressive visual quality, they are nowadays considered a potential threat to both individuals and organizations. Hence, researchers started investigating the flaws of deepfakes to develop automated tools capable of detecting forged content. As a result, a wide range of deepfake detection methods has been introduced. In particular, deep learning-based approaches have shown impressive performance. Nevertheless, these methods are still not sufficiently robust, as they usually consider only one type of artifact, either in the spatial or the frequency domain.
In this context, this thesis proposes to leverage the high-frequency components extracted from color images jointly with the original color information to detect unusual traces. It introduces an end-to-end trainable module that (a) extracts features from precomputed high-frequency components and (b) fuses them with RGB features. The deepfake detection framework not only exploits the high-frequency anomalies present in manipulated images but can also be integrated with the majority of RGB-based deepfake detectors. Experimental results show that the proposed approach improves the performance of state-of-the-art networks, such as XceptionNet and EfficientNet, on a challenging deepfake dataset called Celeb-DF.

Mejri, Nesryne
in Sustainable Computing: Informatics and Systems (2020)

The scheduling of parallel tasks is a topic that has received a lot of attention in recent years, in particular due to the development of larger HPC clusters. It is regarded as an interesting problem because, when combined with performant hardware, it ensures fast and efficient computing. However, it comes at a cost: the growing number of HPC clusters entails greater global energy consumption, which has a clear negative environmental impact. A green solution is thus required to find a compromise between energy saving and high-performance computing within those clusters. In this paper, we evaluate the use of malleable jobs and the powering off of idle servers as a way to reduce both the jobs' mean stretch time and the servers' average power consumption. Malleable jobs have the particularity that the number of allocated servers can be changed during runtime.
We present an energy-aware greedy algorithm, with parameters tuned via Particle Swarm Optimisation, as a possible solution for scheduling malleable jobs. An in-depth evaluation of the approach is then outlined using results from a simulator that was developed to handle malleable jobs. The results show that the use of malleable tasks can lead to improved performance in terms of power consumption. We believe that our results open the door to further investigations into coupling malleable job models with energy-saving mechanisms.
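The core idea of the scheduling paper above, malleable jobs whose server allocation can change at runtime, with unused servers powered off, can be illustrated with a deliberately simplified greedy allocator. This is only a sketch under assumed names and a crude scoring rule; the paper's actual algorithm uses Particle Swarm Optimised parameters and a full simulator, neither of which is reproduced here.

```python
# Simplified, illustrative greedy allocator for malleable jobs.
# All class/function names and the allocation rule are assumptions for
# illustration; this is NOT the paper's PSO-tuned algorithm.
from dataclasses import dataclass

@dataclass
class MalleableJob:
    name: str
    work: float            # remaining work units
    min_servers: int = 1   # smallest allocation the job accepts
    max_servers: int = 4   # largest allocation the job can exploit
    allocated: int = 0

def greedy_allocate(jobs, total_servers):
    """Give each pending job its minimum allocation, then grow the jobs
    with the most remaining work (malleability). Returns the number of
    servers left idle, which could be powered off to save energy."""
    pending = [j for j in jobs if j.work > 0]
    for j in pending:
        j.allocated = 0
    free = total_servers
    # Step 1: satisfy minimum allocations in arrival order.
    for j in pending:
        if free >= j.min_servers:
            j.allocated = j.min_servers
            free -= j.min_servers
    # Step 2: spread the spare servers onto the largest jobs first.
    for j in sorted(pending, key=lambda j: j.work, reverse=True):
        if j.allocated == 0:
            continue
        extra = min(j.max_servers - j.allocated, free)
        j.allocated += extra
        free -= extra
    return free  # idle servers that may be switched off
```

For example, two jobs with 10 and 4 work units on 10 servers would each grow to their maximum of 4 servers, leaving 2 servers idle for power-off.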
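The one-class idea behind the UNTAG entry above, fitting only the distribution of real-image embeddings and flagging outliers as deepfakes, can be sketched with a toy detector. The paper learns embeddings via a self-supervised pretext task and uses a dedicated one-class classifier; the Gaussian/Mahalanobis rule and all names below are assumptions made purely for illustration.

```python
# Toy one-class outlier detector, illustrating (not reproducing) the
# UNTAG detection step: fit on authentic embeddings only, then treat
# out-of-distribution embeddings as potential deepfakes.
import numpy as np

class OneClassDetector:
    def fit(self, real_embeddings):
        """Fit a Gaussian to embeddings of authentic images only."""
        X = np.asarray(real_embeddings, dtype=float)
        self.mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        # Regularise so the covariance stays invertible on small samples.
        self.inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
        # Threshold at the 99th percentile of distances on real data.
        self.threshold = np.percentile(self._distance(X), 99)
        return self

    def _distance(self, X):
        """Mahalanobis distance of each row to the fitted mean."""
        diff = np.atleast_2d(X) - self.mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.inv_cov, diff))

    def is_fake(self, embeddings):
        """True for embeddings that are outliers w.r.t. the real data."""
        return self._distance(embeddings) > self.threshold
```

Note the design choice the abstract motivates: because the detector never sees forged samples during fitting, it is agnostic to the forgery type, any sufficiently anomalous embedding is flagged.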