Reference : Dynamic facial expression generation on Hilbert hypersphere with conditional Wasserstein generative adversarial nets
Scientific journals : Article
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/44575
Dynamic facial expression generation on Hilbert hypersphere with conditional Wasserstein generative adversarial nets
English
Otberdout, Naima [Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco]
Daoudi, Mohamed [IMT Lille-Douai, University of Lille, CNRS, UMR 9189 CRIStAL, Lille, France]
Kacem, Anis [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2]
Ballihi, Lahoucine [Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco]
Berretti, Stefano [Department of Information Engineering, University of Florence, Florence, Italy]
Apr-2020
IEEE Transactions on Pattern Analysis and Machine Intelligence
Institute of Electrical and Electronics Engineers
Yes (verified by ORBilu)
International
0162-8828
United States
[en] Facial expression generation ; Conditional manifold-valued Wasserstein Generative Adversarial Networks ; Facial Landmarks
[en] In this work, we propose a novel approach for generating videos of the six basic facial expressions given a neutral face image. We exploit the face geometry by modeling the motion of facial landmarks as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the hypersphere, we learn the distribution of facial expression dynamics of different classes, from which we synthesize new facial expression motions. The resulting motions can be transformed to sequences of landmarks and then to image sequences by editing the texture information using another conditional GAN. To the best of our knowledge, this is the first work that explores manifold-valued representations with GANs to address the problem of dynamic facial expression generation. We evaluate our approach both quantitatively and qualitatively on two public datasets: Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the effectiveness of our approach in generating videos with continuous motion, realistic appearance, and identity preservation. We also show the efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
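The abstract's central technical step is encoding each facial landmark trajectory as a single point on a hypersphere. It does not spell out the encoding; in this line of work on shape analysis of curves, this is commonly done with the square-root velocity function (SRVF), whose unit-norm representatives lie on a unit Hilbert sphere. The NumPy sketch below illustrates that idea only; the function names, the (T, d) array layout, and the finite-difference derivative are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def srvf_encode(landmarks, eps=1e-8):
    """Map a landmark trajectory to a point on the unit hypersphere.

    `landmarks`: array of shape (T, d), one flattened landmark
    configuration per frame. The square-root velocity function
    q(t) = beta'(t) / sqrt(||beta'(t)||) of the curve beta, rescaled
    to unit L2 norm, is a point on the unit hypersphere.
    """
    beta = np.asarray(landmarks, dtype=float).T   # (d, T): curve in R^d
    beta_dot = np.gradient(beta, axis=1)          # finite-difference velocity
    speed = np.linalg.norm(beta_dot, axis=0)      # ||beta'(t)|| per frame
    q = beta_dot / np.sqrt(speed + eps)           # SRVF, shape (d, T)
    return q / np.linalg.norm(q)                  # rescale to unit norm

def sphere_geodesic_distance(q1, q2):
    """Arc-length (geodesic) distance between two unit-norm SRVFs."""
    inner = np.clip(np.sum(q1 * q2), -1.0, 1.0)   # <q1, q2> clipped to [-1, 1]
    return float(np.arccos(inner))
```

Under this kind of encoding, motions generated on the hypersphere can be mapped back to landmark sequences by inverting the representation, which matches the pipeline the abstract describes before the second conditional GAN restores texture.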

File(s) associated to this reference

Fulltext file(s):

File: 1907.10087.pdf (Author postprint, 11.66 MB)
Access: Limited access (request a copy)

