References of "Kacem, Anis 50035545"
Peer Reviewed
SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A.; Kacem, Anis; Cherenkova, Kseniya et al.

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges: the first on 3D human scans, and the second on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions and, second, on fine body details. A novel evaluation metric is proposed to jointly quantify the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are released to the scientific community to provide raw ground-truth data for the benchmarks, together with an accompanying custom library of software routines for processing 3D scans, generating partial data, and performing the evaluation. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/.
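
A rough illustration of the kind of joint shape-and-texture scoring the challenge introduces is sketched below in Python. This is a toy stand-in only: the actual metric is defined in the paper and in the released evaluation library, and the function name, weighting scheme, and nearest-neighbor matching here are hypothetical.

    # Toy combined shape/texture score (lower is better). Not the SHARP metric.
    import numpy as np
    from scipy.spatial import cKDTree

    def combined_score(pred_pts, pred_rgb, gt_pts, gt_rgb, alpha=0.5):
        """pred_pts: (N, 3), gt_pts: (M, 3) surface samples; *_rgb: matching colors."""
        # Match every predicted point to its nearest ground-truth point.
        dist, idx = cKDTree(gt_pts).query(pred_pts)
        shape_err = dist.mean()                          # one-sided surface distance
        tex_err = np.abs(pred_rgb - gt_rgb[idx]).mean()  # color error at the matches
        return alpha * shape_err + (1.0 - alpha) * tex_err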

Full Text
Peer Reviewed
3DBooSTeR: 3D Body Shape and Texture Recovery
Saint, Alexandre Fabian A.; Kacem, Anis; Cherenkova, Kseniya et al.

Scientific Conference (2020, August 23)

We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from a textured partial 3D scan. With the advent of virtual and augmented reality, there is a demand for creating realistic and high-fidelity digital 3D human representations. However, 3D scanning systems can capture the 3D human body shape only up to some level of defects, due to its complexity, including occlusions between body parts, varying levels of detail, shape deformations, and the articulated skeleton. Textured 3D mesh completion is thus important to enhance 3D acquisitions. The proposed approach decouples the shape and texture completion into two sequential tasks. The shape is recovered by an encoder-decoder network deforming a template body mesh. The texture is subsequently obtained by projecting the partial texture onto the template mesh before inpainting the corresponding texture map with a novel approach. The approach is validated on the 3DBodyTex.v2 dataset.
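
The shape stage described above (an encoder-decoder network deforming a template body mesh) could look roughly like the following PyTorch sketch. It is a minimal illustration, not the published 3DBooSTeR code: the PointNet-style encoder, the layer sizes, and the per-vertex offset decoding are all assumptions.

    import torch
    import torch.nn as nn

    class TemplateDeformer(nn.Module):
        """Toy encoder-decoder that deforms a fixed template mesh to fit a partial scan."""
        def __init__(self, n_template_verts, feat_dim=256):
            super().__init__()
            self.encoder = nn.Sequential(              # per-point features from the scan
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, feat_dim), nn.ReLU())
            self.decoder = nn.Sequential(              # regresses per-vertex offsets
                nn.Linear(feat_dim, 512), nn.ReLU(),
                nn.Linear(512, n_template_verts * 3))

        def forward(self, partial_points, template_verts):
            # partial_points: (B, N, 3); template_verts: (V, 3)
            feat = self.encoder(partial_points).max(dim=1).values   # global shape code
            offsets = self.decoder(feat).view(-1, template_verts.shape[0], 3)
            return template_verts.unsqueeze(0) + offsets            # deformed template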

Full Text
Peer Reviewed
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets
Otberdout, Naima; Daoudi, Mohamed; Kacem, Anis et al.

in IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)

In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We propose to exploit the face geometry by modeling the motion of facial landmarks as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional GAN. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our proposed approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating realistic videos with continuous motion, realistic appearance, and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
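
To make the geometric encoding concrete: one standard way to map a motion curve onto a unit hypersphere is a square-root-velocity-style representation, sketched below in Python. This simplifies the manifold machinery of the paper; the input shape and the regularization constant are illustrative only.

    import numpy as np

    def curve_to_hypersphere(landmarks):
        """landmarks: (T, L, 2) sequence of T frames of L 2D facial landmarks."""
        curve = landmarks.reshape(len(landmarks), -1)   # (T, 2L) flattened curve
        vel = np.gradient(curve, axis=0)                # time derivative of the curve
        # Square-root velocity: scale each velocity by 1/sqrt of its norm.
        srv = vel / np.sqrt(np.linalg.norm(vel, axis=1, keepdims=True) + 1e-8)
        q = srv.flatten()
        return q / np.linalg.norm(q)                    # unit norm -> point on the sphere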

Peer Reviewed
Space-Time Triplet Loss Network for Dynamic 3D Face Verification
Kacem, Anis; Ben Abdessalem, Hamza; Cherenkova, Kseniya et al.

Scientific Conference (2020)

In this paper, we propose a new approach for 3D dynamic face verification exploiting 3D facial deformations. First, 3D faces are encoded into low-dimensional representations describing the local deformations of the faces with respect to a mean face. Second, the encoded versions of the 3D faces along a sequence are stacked into 2D arrays for temporal modeling. The resulting 2D arrays are then fed to a triplet loss network for dynamic sequence embedding. Finally, the outputs of the triplet loss network are compared using a cosine similarity measure for face verification. By projecting the feature maps of the triplet loss network into attention maps on the 3D face sequences, we are able to detect the space-time patterns that contribute most to the pairwise similarity between different 3D facial expressions of the same person. The evaluation is conducted on the publicly available BU4D dataset, which contains dynamic 3D face sequences. The obtained results are promising with respect to baseline methods.
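
A minimal PyTorch sketch of the verification stage, assuming a toy embedding backbone in place of the paper's network: the stacked 2D arrays are embedded under a triplet margin loss at training time, and two sequences are compared with cosine similarity at test time. The input shape, margin, and decision threshold are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 128, 256))  # toy backbone
    triplet_loss = nn.TripletMarginLoss(margin=0.2)

    def train_step(anchor, positive, negative):
        # Each input: (B, 1, 64, 128) stacked encodings of one 3D face sequence.
        return triplet_loss(embed(anchor), embed(positive), embed(negative))

    def same_person(seq_a, seq_b, threshold=0.7):
        # Verification: cosine similarity between the two sequence embeddings.
        return F.cosine_similarity(embed(seq_a), embed(seq_b)) > threshold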

Full Text
Peer Reviewed
Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
Otberdout, Naima; Kacem, Anis; Daoudi, Mohamed et al.

in IEEE Transactions on Neural Networks and Learning Systems (2019)

In this article, we propose a new approach for facial expression recognition (FER) using deep covariance descriptors. The solution is based on the idea of encoding local and global deep convolutional neural network (DCNN) features, extracted from still images, in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of symmetric positive definite (SPD) matrices. By conducting the classification of static facial expressions using a support vector machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. In addition, we propose a completely new and original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply an SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, static facial expressions in the wild (SFEW), and acted facial expressions in the wild (AFEW) datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for FER, outperforming many recent approaches.
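
A minimal sketch of the static pipeline, assuming the valid Gaussian kernel on the SPD manifold is instantiated with the log-Euclidean distance (one common valid choice; the paper's exact kernel and hyperparameters may differ): covariance descriptors are built from local DCNN features, a Gaussian kernel over matrix logarithms is computed, and a kernel SVM classifies.

    import numpy as np
    from scipy.linalg import logm
    from sklearn.svm import SVC

    def covariance_descriptor(features, eps=1e-5):
        """features: (N, d) local DCNN features of one image -> (d, d) SPD matrix."""
        cov = np.cov(features, rowvar=False)
        return cov + eps * np.eye(cov.shape[0])   # regularize to stay strictly SPD

    def log_euclidean_gaussian_kernel(covs_a, covs_b, gamma=0.1):
        """Gaussian kernel K_ij = exp(-gamma * ||logm(A_i) - logm(B_j)||_F^2)."""
        logs_a = [logm(c) for c in covs_a]
        logs_b = [logm(c) for c in covs_b]
        K = np.zeros((len(logs_a), len(logs_b)))
        for i, la in enumerate(logs_a):
            for j, lb in enumerate(logs_b):
                K[i, j] = np.exp(-gamma * np.linalg.norm(la - lb, 'fro') ** 2)
        return K

    # Usage with a precomputed kernel:
    # svm = SVC(kernel='precomputed')
    # svm.fit(log_euclidean_gaussian_kernel(train_covs, train_covs), train_labels)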
