References of "Aouada, Djamila 50000437"
Full Text
Peer Reviewed
Leveraging High-Frequency Components for Deepfake Detection
Mejri, Nesryne UL; Papadopoulos, Konstantinos UL; Aouada, Djamila UL

in IEEE Workshop on Multimedia Signal Processing (2021)

In recent years, RGB-based deepfake detection has shown notable progress thanks to the development of effective deep neural networks. However, the performance of deepfake detectors remains primarily dependent on the quality of the forged content and the level of artifacts introduced by the forgery method. Detecting these artifacts often requires separating and analyzing the frequency components of an image. In this context, we propose to utilize the high-frequency components of color images by introducing an end-to-end trainable module that (a) extracts features from high-frequency components and (b) fuses them with the features of the RGB input. The module not only exploits the high-frequency anomalies present in manipulated images but can also be used with most RGB-based deepfake detectors. Experimental results show that the proposed approach boosts the performance of state-of-the-art networks, such as XceptionNet and EfficientNet, on a challenging deepfake dataset.
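The frequency-separation idea above can be illustrated with a plain high-pass filter. The sketch below is a generic, hand-crafted stand-in (a fixed 3x3 Laplacian kernel in pure Python); the paper's actual module is learned end-to-end, and the helper name `high_pass` is hypothetical.

```python
# A fixed Laplacian kernel: a classic hand-crafted high-pass filter.
LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

def high_pass(image):
    """Apply the 3x3 Laplacian to a 2D grayscale image (list of lists).

    Returns an (H-2) x (W-2) response map; large magnitudes mark edges
    and fine texture, i.e. the high-frequency band where forgery
    artifacts tend to concentrate.
    """
    h, w = len(image), len(image[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i - 1][j - 1] = sum(
                LAPLACIAN[a][b] * image[i - 1 + a][j - 1 + b]
                for a in range(3) for b in range(3)
            )
    return out

flat = [[5] * 5 for _ in range(5)]          # constant patch
edge = [[0, 0, 0, 9, 9] for _ in range(5)]  # vertical step edge
print(high_pass(flat)[0])  # → [0, 0, 0]
print(high_pass(edge)[0])  # → [0, 9, -9]
```

Flat regions are zeroed out while edges and texture produce large responses, which is why this band isolates manipulation artifacts well; in the paper this fixed filter is replaced by a trainable extractor whose features are fused with the RGB stream.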

Full Text
Peer Reviewed
LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural Network
Garcia Sanchez, Albert UL; Mohamed Ali, Mohamed Adel UL; Gaudilliere, Vincent UL et al

in Proceedings of Conference on Computer Vision and Pattern Recognition Workshops (2021, June)

Being capable of estimating the pose of uncooperative objects in space has been proposed as a key asset for enabling safe close-proximity operations such as space rendezvous, in-orbit servicing and active debris removal. Usual approaches for pose estimation involve classical computer-vision solutions or the application of Deep Learning (DL) techniques. This work explores a novel DL-based methodology, using Convolutional Neural Networks (CNNs), for estimating the pose of uncooperative spacecraft. Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information. Moreover, bounding boxes of the spacecraft in the image are predicted in a simple yet efficient manner. The experiments show that this work is competitive with the state of the art in uncooperative spacecraft pose estimation, including works which require 3D information as well as works which predict bounding boxes through sophisticated CNNs.

Face-GCN: A Graph Convolutional Network for 3D Dynamic Face Identification/Recognition
Papadopoulos, Konstantinos; Kacem, Anis UL; Shabayek, Abdelrahman et al

E-print/Working paper (2021)

Face identification/recognition has significantly advanced over the past years. However, most of the proposed approaches rely on static RGB frames and on neutral facial expressions. This has two disadvantages. First, important facial shape cues are ignored. Second, facial deformations due to expressions can impact the performance of such methods. In this paper, we propose a novel framework for dynamic 3D face identification/recognition based on facial keypoints. Each dynamic sequence of facial expressions is represented as a spatio-temporal graph, which is constructed using 3D facial landmarks. Each graph node contains local shape and texture features extracted from its neighborhood. For the classification/identification of faces, a Spatio-temporal Graph Convolutional Network (ST-GCN) is used. Finally, we evaluate our approach on a challenging dynamic 3D facial expression dataset.
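To make the graph construction concrete, here is a minimal sketch of turning per-frame 3D landmarks into a spatio-temporal graph. The edge rules used (k-nearest spatial neighbours within a frame, same-landmark links across consecutive frames) are a common ST-GCN convention and an assumption here, not necessarily the paper's exact construction.

```python
import math

def build_st_graph(frames, k=2):
    """frames: list of frames, each a list of (x, y, z) landmarks.

    Returns a set of undirected edges over nodes indexed as
    (frame, landmark): spatial edges connect each landmark to its k
    nearest neighbours in the same frame; temporal edges connect the
    same landmark across consecutive frames.
    """
    edges = set()
    for t, pts in enumerate(frames):
        for i, p in enumerate(pts):
            dists = sorted(
                (math.dist(p, q), j) for j, q in enumerate(pts) if j != i
            )
            for _, j in dists[:k]:
                edges.add(frozenset({(t, i), (t, j)}))       # spatial edge
        if t + 1 < len(frames):
            for i in range(len(pts)):
                edges.add(frozenset({(t, i), (t + 1, i)}))   # temporal edge
    return edges

# Two frames of three landmarks each: 2 spatial edges per frame (k=1)
# plus 3 temporal links → 7 edges in total.
frames = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
graph = build_st_graph(frames, k=1)
print(len(graph))  # → 7
```

In the actual framework each node would additionally carry the local shape and texture features mentioned above; the ST-GCN then convolves over this adjacency.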

See detailDisentangled Face Identity Representations for joint 3D Face Recognition and Expression Neutralisation
Kacem, Anis UL; Cherenkova, Kseniya; Aouada, Djamila UL

E-print/Working paper (2021)

In this paper, we propose a new deep learning-based approach for disentangling face identity representations from expressive 3D faces. Given a 3D face, our approach not only extracts a disentangled identity representation but also generates a realistic 3D face with a neutral expression while predicting its identity. The proposed network consists of three components: (1) a Graph Convolutional Autoencoder (GCA) that encodes the 3D faces into latent representations, (2) a Generative Adversarial Network (GAN) that translates the latent representations of expressive faces into those of neutral faces, and (3) an identity recognition sub-network that takes advantage of the neutralized latent representations for 3D face recognition. The whole network is trained in an end-to-end manner. Experiments conducted on three publicly available datasets show the effectiveness of the proposed approach.
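The three-component pipeline can be sketched as a simple composition. Every function below is a toy stand-in (a centroid "encoder", a fixed-offset "translator", a nearest-neighbour "classifier") used only to show the data flow; the trained GCA, GAN and recognition networks of the paper are far richer.

```python
def encode(face_vertices):
    """Stub GCA encoder: collapse a 3D face (vertex list) to a 3D latent code."""
    n = len(face_vertices)
    return [sum(v[d] for v in face_vertices) / n for d in range(3)]

def neutralise(latent, expression_offset):
    """Stub GAN translator: map an expressive latent to its neutral counterpart."""
    return [z - o for z, o in zip(latent, expression_offset)]

def identify(latent, gallery):
    """Stub recognition head: nearest enrolled identity in latent space."""
    return min(gallery, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(latent, gallery[name])))

# Enrolled identities live in the *neutralised* latent space, so
# expression variation no longer perturbs the match.
gallery = {"alice": [0.0, 0.0, 0.0], "bob": [1.0, 1.0, 1.0]}
expressive = [(0.1, 0.1, 0.1), (0.3, 0.3, 0.3)]  # toy "expressive face"
who = identify(neutralise(encode(expressive), [0.2, 0.2, 0.2]), gallery)
print(who)  # → alice
```

The point of the design is visible even in this toy: recognition operates on the neutralised latent code, so the translator absorbs expression variation before identity is decided.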

Full Text
Explaining Defect Detection with Saliency Maps
Lorentz, Joe UL; Hartmann, Thomas; Moawad, Assaad et al

E-print/Working paper (2021)

The rising quality and throughput demands of the manufacturing domain require flexible, accurate and explainable computer-vision solutions for defect detection. Deep Neural Networks (DNNs) reach state-of-the-art performance on various computer-vision tasks, but widespread application in the industrial domain is blocked by the lack of explainability of DNN decisions. A promising, human-readable solution is given by saliency maps, heatmaps highlighting the image areas that influence the classifier's decision. This work evaluates a selection of saliency methods in the area of industrial quality assurance. To this end, we propose the distance pointing game, a new metric to quantify the meaningfulness of saliency maps for defect detection. We provide steps to prepare a publicly available dataset on defective steel plates for the proposed metric. Additionally, the computational complexity is investigated to determine which methods could be integrated on industrial edge devices. Our results show that DeepLift, GradCAM and GradCAM++ outperform the alternatives while their computational cost remains feasible for real-time applications, even on edge devices. This indicates that the respective methods could be used as an additional, autonomous post-classification step to explain decisions taken by intelligent quality assurance systems.

Full Text
Peer Reviewed
PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
Cherenkova, Kseniya UL; Aouada, Djamila UL; Gusev, Gleb

Scientific Conference (2020, October)

We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the 3D scan–CAD model pairs. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time as compared to state-of-the-art models for 3D data generation.

Full Text
Peer Reviewed
3D SPARSE DEFORMATION SIGNATURE FOR DYNAMIC FACE RECOGNITION
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 27th IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi 25-28 October 2020 (2020, October)

Peer Reviewed
SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges, the first one on 3D human scans, and the second one on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions, and, second, on fine body details. A novel evaluation metric is proposed to jointly quantify the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are provided as raw ground-truth data for the benchmarks. The datasets are released to the scientific community, along with an accompanying custom library of software routines for processing 3D scans, generating partial data and performing the evaluation. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/

Full Text
Peer Reviewed
3DBooSTeR: 3D Body Shape and Texture Recovery
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from a textured partial 3D scan. With the advent of virtual and augmented reality, there is a demand for creating realistic and high-fidelity digital 3D human representations. However, 3D scanning systems can only capture the 3D human body shape up to some level of defects, due to its complexity, including occlusion between body parts, varying levels of detail, shape deformations and the articulated skeleton. Textured 3D mesh completion is thus important to enhance 3D acquisitions. The proposed approach decouples the shape and texture completion into two sequential tasks. The shape is recovered by an encoder-decoder network deforming a template body mesh. The texture is subsequently obtained by projecting the partial texture onto the template mesh before inpainting the corresponding texture map with a novel approach. The approach is validated on the 3DBodyTex.v2 dataset.

Full Text
Peer Reviewed
GOING DEEPER WITH NEURAL NETWORKS WITHOUT SKIP CONNECTIONS
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi, UAE, Oct 25–28, 2020 (2020, May 30)

Full Text
Peer Reviewed
3D DEFORMATION SIGNATURE FOR DYNAMIC FACE RECOGNITION
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona 4-8 May 2020 (2020, May)

Full Text
Peer Reviewed
Structured Compression of Deep Neural Networks with Debiased Elastic Group LASSO
Oyedotun, Oyebade UL; Aouada, Djamila UL; Ottersten, Björn UL

in IEEE 2020 Winter Conference on Applications of Computer Vision (WACV 20), Aspen, Colorado, US, March 2–5, 2020 (2020, March 01)

Full Text
Peer Reviewed
Towards Automatic CAD Modeling from 3D Scan Sketch based Representation
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020), GRAPP (2020, February)
