References of "Kacem, Anis 50035545"
     in
Bookmark and Share    
Full Text
Peer Reviewed
You Can Dance! Generating Music-Conditioned Dances on Real 3D Scans
Dupont, Elona; Singh, Inder Pal; Fuentes, Laura et al.

Scientific Conference (2023)

Full Text
Peer Reviewed
Disentangled Face Identity Representations for Joint 3D Face Recognition and Neutralisation
Kacem, Anis; Cherenkova, Kseniya; Aouada, Djamila

in 2022 8th International Conference on Virtual Reality (2022)

In this paper, we propose a new deep-learning-based approach for disentangling face identity representations from expressive 3D faces. Given a 3D face, our approach not only extracts a disentangled identity representation, but also generates a realistic 3D face with a neutral expression while predicting its identity. The proposed network consists of three components: (1) a Graph Convolutional Autoencoder (GCA) to encode the 3D faces into latent representations, (2) a Generative Adversarial Network (GAN) that translates the latent representations of expressive faces into those of neutral faces, and (3) an identity recognition sub-network taking advantage of the neutralized latent representations for 3D face recognition. The whole network is trained in an end-to-end manner. Experiments are conducted on three publicly available datasets, showing the effectiveness of the proposed approach.
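
As a rough illustration of the three-component pipeline this abstract describes, the following PyTorch sketch wires an encoder, a latent-translation generator, and an identity head together. The plain MLPs standing in for the graph convolutional autoencoder, all layer sizes, and the names MeshEncoder, LatentNeutralizer, and IdentityHead are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MeshEncoder(nn.Module):          # stands in for the GCA encoder
    def __init__(self, n_verts=5023, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(n_verts * 3, 512), nn.ReLU(),
                                 nn.Linear(512, dim))
    def forward(self, x):              # x: (B, n_verts, 3) mesh vertices
        return self.net(x)

class LatentNeutralizer(nn.Module):    # GAN generator: expressive -> neutral latent
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
    def forward(self, z):
        return self.net(z)

class IdentityHead(nn.Module):         # recognition sub-network on neutral latents
    def __init__(self, dim=128, n_ids=100):
        super().__init__()
        self.fc = nn.Linear(dim, n_ids)
    def forward(self, z):
        return self.fc(z)

encoder, neutralizer, id_head = MeshEncoder(), LatentNeutralizer(), IdentityHead()
faces = torch.randn(4, 5023, 3)        # a batch of expressive 3D faces
logits = id_head(neutralizer(encoder(faces)))  # identity predicted from neutralized code
```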

Full Text
Peer Reviewed
Face-GCN: A Graph Convolutional Network for 3D Dynamic Face Recognition
Papadopoulos, Konstantinos; Kacem, Anis; Shabayek, Abdelrahman et al.

in 2022 8th International Conference on Virtual Reality (2022)

Face recognition has significantly advanced over the past years. However, most of the proposed approaches rely on static RGB frames and on neutral facial expressions. This has two disadvantages. First, important facial shape cues are ignored. Second, facial deformations due to expressions can have an impact on the performance of such methods. In this paper, we propose a novel framework for dynamic 3D face recognition based on facial keypoints. Each dynamic sequence of facial expressions is represented as a spatio-temporal graph, which is constructed using 3D facial landmarks. Each graph node contains local shape and texture features that are extracted from its neighborhood. For the classification of face videos, a Spatio-temporal Graph Convolutional Network (ST-GCN) is used. Finally, we evaluate our approach on a challenging dynamic 3D facial expression dataset.
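
The spatio-temporal graph idea can be made concrete with a minimal sketch: landmark sequences form a (T, K, C) tensor with a fixed adjacency over the K landmarks, and one layer alternates a spatial graph convolution with a temporal convolution, in the ST-GCN style. The toy chain adjacency and feature sizes below are assumptions; the paper builds the graph from 3D facial landmarks with local shape and texture features per node.

```python
import torch
import torch.nn as nn

K, T, C = 68, 16, 3                    # landmarks, frames, per-node features
A = torch.eye(K)
for i in range(K - 1):                 # toy chain connectivity between landmarks
    A[i, i + 1] = A[i + 1, i] = 1.0
A = A / A.sum(dim=1, keepdim=True)     # row-normalized adjacency

class STGraphConv(nn.Module):
    """One spatial graph convolution followed by a temporal convolution."""
    def __init__(self, c_in, c_out, A):
        super().__init__()
        self.register_buffer("A", A)
        self.spatial = nn.Linear(c_in, c_out)
        self.temporal = nn.Conv1d(c_out, c_out, kernel_size=3, padding=1)
    def forward(self, x):              # x: (B, T, K, C)
        x = self.spatial(torch.einsum("kj,btjc->btkc", self.A, x))
        B, T, K, C = x.shape
        x = self.temporal(x.permute(0, 2, 3, 1).reshape(B * K, C, T))
        return x.reshape(B, K, C, T).permute(0, 3, 1, 2)

model = STGraphConv(C, 32, A)
seq = torch.randn(2, T, K, C)          # a batch of landmark sequences
out = model(seq)                       # (2, 16, 68, 32)
```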

Full Text
Peer Reviewed
TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network
Karadeniz, Ahmet Serdar; Ali, Sk Aziz; Kacem, Anis et al.

Scientific Conference (2022)

Reconstructing 3D human body shapes from 3D partial textured scans remains a fundamental task for many computer vision and graphics applications – e.g., body animation and virtual dressing. We propose a new neural network architecture for 3D body shape and high-resolution texture completion – TSCom-Net – that can reconstruct the full geometry from mid-level to high-level partial input scans. We decompose the overall reconstruction task into two stages – first, a joint implicit learning network (SCom-Net and TCom-Net) that takes a voxelized scan and its occupancy grid as input to reconstruct the full body shape and predict vertex textures; second, a high-resolution texture completion network that utilizes the predicted coarse vertex textures to inpaint the missing parts of the partial 'texture atlas'. A thorough experimental evaluation on the 3DBodyTex.v2 dataset shows that our method achieves competitive results with respect to the state of the art while generalizing to different types and levels of partial shapes. The proposed method has also ranked second in Track 1 of the SHApe Recovery from Partial Textured 3D Scans (SHARP [37, 2]) 2022 challenge.
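
A minimal sketch of the first, joint implicit stage: a 3D encoder turns the voxelized partial scan into a feature volume, and features sampled at query points are decoded by two small MLP heads into occupancy (shape) and a coarse vertex color (texture). The grid size, channel widths, and single-scale feature volume are simplifying assumptions, not the TSCom-Net architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitCompletion(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(4, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU())
        self.occ_head = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(),
                                      nn.Linear(64, 1))
        self.rgb_head = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(),
                                      nn.Linear(64, 3))

    def forward(self, vox, pts):
        # vox: (B, 4, D, D, D) occupancy + RGB voxel grid of the partial scan
        # pts: (B, N, 3) query points in [-1, 1]^3
        fvol = self.encoder(vox)
        grid = pts.view(pts.shape[0], 1, 1, -1, 3)         # layout for grid_sample
        f = F.grid_sample(fvol, grid, align_corners=True)  # (B, feat, 1, 1, N)
        f = f.view(f.shape[0], f.shape[1], -1).transpose(1, 2)  # (B, N, feat)
        return torch.sigmoid(self.occ_head(f)), torch.sigmoid(self.rgb_head(f))

net = ImplicitCompletion()
occ, rgb = net(torch.randn(1, 4, 32, 32, 32), torch.rand(1, 2048, 3) * 2 - 1)
```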

Full Text
Peer Reviewed
3DBooSTeR: 3D Body Shape and Texture Recovery
Saint, Alexandre Fabian A; Kacem, Anis; Cherenkova, Kseniya et al.

Scientific Conference (2020, August 23)

We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from a textured partial 3D scan. With the advent of virtual and augmented reality, there is a demand for creating realistic and high-fidelity digital 3D human representations. However, 3D scanning systems can capture the 3D human body shape only up to some level of defects due to its complexity, including occlusions between body parts, varying levels of detail, shape deformations and the articulated skeleton. Textured 3D mesh completion is thus important to enhance 3D acquisitions. The proposed approach decouples the shape and texture completion into two sequential tasks. The shape is recovered by an encoder-decoder network deforming a template body mesh. The texture is subsequently obtained by projecting the partial texture onto the template mesh before inpainting the corresponding texture map with a novel approach. The approach is validated on the 3DBodyTex.v2 dataset.
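
The shape-completion stage can be sketched as follows: an encoder maps the partial scan (treated as a point set) to a global code, and a decoder predicts per-vertex offsets that deform a fixed template body mesh toward the complete shape. The PointNet-style max-pooled encoder and the template vertex count are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, n_template=6890, dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(),
                                     nn.Linear(512, n_template * 3))
        # placeholder template; in practice this is a fixed body mesh
        self.register_buffer("template", torch.zeros(n_template, 3))

    def forward(self, partial):          # partial: (B, N, 3) scan points
        code = self.point_mlp(partial).max(dim=1).values   # global max-pool
        offsets = self.decoder(code).view(-1, self.template.shape[0], 3)
        return self.template.unsqueeze(0) + offsets        # deformed mesh vertices

model = TemplateDeformer()
completed = model(torch.randn(2, 4096, 3))   # (2, 6890, 3)
```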

Peer Reviewed
SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A; Kacem, Anis; Cherenkova, Kseniya et al.

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges, the first one on 3D human scans, and the second one on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions, and, second, on fine body details. A novel evaluation metric is proposed to quantify jointly the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are proposed to provide raw ground-truth data for the benchmarks. The datasets are released to the scientific community, together with an accompanying custom library of software routines for processing 3D scans, generating partial data and performing the evaluation. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/.
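
The abstract states that the metric jointly scores shape, texture, and completeness, but does not give its form. The weighted combination below, including the nearest-neighbour surface-distance proxy over sampled points and the equal default weights, is purely a hypothetical illustration of such a joint score, not the SHARP 2020 metric.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric nearest-neighbour distance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def joint_score(pred_pts, gt_pts, pred_rgb, gt_rgb, completeness,
                w_shape=1.0, w_tex=1.0, w_comp=1.0):
    shape_err = chamfer(pred_pts, gt_pts)
    tex_err = np.abs(pred_rgb - gt_rgb).mean()   # colors at matched samples
    # Lower is better; completeness in [0, 1] rewards recovered surface area.
    return w_shape * shape_err + w_tex * tex_err - w_comp * completeness

rng = np.random.default_rng(0)
pts = rng.random((500, 3))
print(joint_score(pts, pts + 0.01, rng.random((500, 3)),
                  rng.random((500, 3)), completeness=0.9))
```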

Full Text
Peer Reviewed
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets
Otberdout, Naima; Daoudi, Mohamed; Kacem, Anis et al.

in IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)

In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the facial landmarks motion as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional Generative Adversarial Network. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our proposed approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer and data augmentation for training improved emotion recognition models.
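
The core representation can be sketched in a few lines: a landmark-motion curve is mapped to a single unit-norm vector (a point on a hypersphere), and a class-conditional generator produces new points on that sphere by normalizing its output. The flattened-velocity curve encoding and the MLP generator below are illustrative assumptions; the paper uses a square-root velocity representation and trains with a Wasserstein GAN objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def curve_to_sphere(landmarks):
    """landmarks: (T, K, 2) trajectory -> unit vector built from velocities."""
    vel = (landmarks[1:] - landmarks[:-1]).reshape(-1)
    return vel / vel.norm()

class ConditionalSphereGenerator(nn.Module):
    def __init__(self, z_dim=64, n_classes=6, out_dim=(15, 68, 2)):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)   # one embedding per expression
        d = out_dim[0] * out_dim[1] * out_dim[2]
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, d))
    def forward(self, z, y):
        h = self.net(torch.cat([z, self.embed(y)], dim=1))
        return F.normalize(h, dim=1)      # project samples onto the hypersphere

gen = ConditionalSphereGenerator()
fake = gen(torch.randn(4, 64), torch.tensor([0, 1, 2, 3]))  # unit vectors per class
real = curve_to_sphere(torch.randn(16, 68, 2))              # encoded real motion
```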

Full Text
Peer Reviewed
Space-Time Triplet Loss Network for Dynamic 3D Face Verification
Kacem, Anis; Ben Abdessalem, Hamza; Cherenkova, Kseniya et al.

Scientific Conference (2020)

In this paper, we propose a new approach for 3D dynamic face verification exploiting 3D facial deformations. First, 3D faces are encoded into low-dimensional representations describing the local deformations of the faces with respect to a mean face. Second, the encoded versions of the 3D faces along a sequence are stacked into 2D arrays for temporal modeling. The resulting 2D arrays are then fed to a triplet loss network for dynamic sequence embedding. Finally, the outputs of the triplet loss network are compared using the cosine similarity measure for face verification. By projecting the feature maps of the triplet loss network into attention maps on the 3D face sequences, we are able to detect the space-time patterns that contribute most to the pairwise similarity between different 3D facial expressions of the same person. The evaluation is conducted on the publicly available BU4D dataset, which contains dynamic 3D face sequences. The obtained results are promising with respect to baseline methods.
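
A minimal sketch of the verification stage: sequences of encoded 3D faces, stacked into 2D arrays, are embedded by a small network trained with a triplet loss and compared with cosine similarity. The placeholder CNN, encoding size, and decision threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceEmbedder(nn.Module):
    def __init__(self, emb=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, emb))
    def forward(self, x):                 # x: (B, 1, T, D) stacked encodings
        return F.normalize(self.net(x), dim=1)

embed = SequenceEmbedder()
triplet = nn.TripletMarginLoss(margin=0.2)
anchor, positive, negative = (torch.randn(8, 1, 32, 64) for _ in range(3))
loss = triplet(embed(anchor), embed(positive), embed(negative))

# Verification: cosine similarity between two sequence embeddings.
same_person = F.cosine_similarity(embed(anchor), embed(positive)) > 0.5
```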

Full Text
Peer Reviewed
Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
Otberdout, Naima; Kacem, Anis; Daoudi, Mohamed et al.

in IEEE Transactions on Neural Networks and Learning Systems (2019)

In this article, we propose a new approach for facial expression recognition (FER) using deep covariance descriptors. The solution is based on the idea of encoding local and global deep convolutional neural network (DCNN) features extracted from still images in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of symmetric positive definite (SPD) matrices. By conducting the classification of static facial expressions using a support vector machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. Besides, we propose a completely new and original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, Static Facial Expression in the Wild (SFEW), and Acted Facial Expressions in the Wild (AFEW) datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for FER, outperforming many recent approaches.
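
The static pipeline lends itself to a compact sketch: per-image DCNN feature maps are summarized by a covariance descriptor (an SPD matrix), and an SVM classifies them with a Gaussian kernel built on the SPD manifold. Using the log-Euclidean distance to form the kernel is one standard way to obtain a valid kernel on SPD matrices; the feature sizes and random inputs below are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

def covariance_descriptor(feats):
    """feats: (N, d) local DCNN features -> (d, d) SPD matrix."""
    c = np.cov(feats, rowvar=False)
    return c + 1e-6 * np.eye(c.shape[0])      # regularize to stay SPD

def log_map(spd):
    """Matrix logarithm via eigendecomposition (SPD -> symmetric matrix)."""
    w, v = np.linalg.eigh(spd)
    return (v * np.log(w)) @ v.T

def gaussian_kernel(descs, gamma=0.1):
    """Gaussian kernel on vectorized log-Euclidean coordinates."""
    logs = np.stack([log_map(d).ravel() for d in descs])
    sq = ((logs[:, None, :] - logs[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
descs = [covariance_descriptor(rng.standard_normal((200, 16))) for _ in range(40)]
labels = rng.integers(0, 6, size=40)          # six expression classes
K = gaussian_kernel(descs)
svm = SVC(kernel="precomputed").fit(K, labels)
pred = svm.predict(K)
```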
