References of "Aouada, Djamila 50000437"
     in
Bookmark and Share    
Vertex Feature Encoding and Hierarchical Temporal Modeling in a Spatio-Temporal Graph Convolutional Network for Action Recognition
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Aouada, Djamila UL et al

in International Conference on Pattern Recognition, Milan 10-15 January 2021 (2021, January)

3D Sparse Deformation Signature for Dynamic Face Recognition
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 27th IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi 25-28 October 2020 (2020, October)

PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
Cherenkova, Kseniya UL; Aouada, Djamila UL; Gusev, Gleb

Scientific Conference (2020, October)

We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the pairs of 3D scans and CAD models. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time as compared to state-of-the-art models for 3D data generation.
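
As an editorial illustration of the general setup described above (not the paper's PVDeConv module), a minimal point-cloud autoencoder in PyTorch might look as follows; all layer widths, the 1024-dimensional latent code, and the 10k-point output are assumptions.

```python
# Minimal sketch of a point-cloud autoencoder, loosely in the spirit of
# the setup above. This is NOT the PVDeConv module from the paper; all
# architecture choices (layer widths, latent size, 10k output points)
# are illustrative assumptions.
import torch
import torch.nn as nn

class PointCloudAutoencoder(nn.Module):
    def __init__(self, num_points=10000, latent_dim=1024):
        super().__init__()
        # PointNet-style encoder: shared per-point MLP + global max-pool.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Fully connected decoder regressing all output coordinates at once.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 2048), nn.ReLU(),
            nn.Linear(2048, num_points * 3),
        )
        self.num_points = num_points

    def forward(self, points):                    # points: (B, N, 3)
        x = points.transpose(1, 2)                # (B, 3, N) for Conv1d
        code = self.encoder(x).max(dim=2).values  # global feature (B, latent_dim)
        out = self.decoder(code)
        return out.view(-1, self.num_points, 3)

# Usage: reconstruct a batch of scans sampled to 10k points each.
model = PointCloudAutoencoder()
recon = model(torch.rand(2, 10000, 3))            # (2, 10000, 3)
```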

SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges: the first on 3D human scans, and the second on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions and, second, on fine body details. A novel evaluation metric is proposed to jointly quantify the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are proposed to provide raw ground-truth data for the benchmarks and are released to the scientific community, together with an accompanying custom library of software routines for processing 3D scans, generating partial data, and performing the evaluation. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/.
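
The abstract does not spell out the evaluation metric; as a hedged sketch of one standard shape-reconstruction term that such a metric could build on, the symmetric Chamfer distance is shown below. This is an illustrative assumption, not the challenge's official metric, which is defined in the paper and its released library.

```python
# Symmetric Chamfer distance between two point sets -- a standard
# shape-reconstruction term and only a plausible building block here;
# the official SHARP 2020 metric (jointly scoring shape, texture, and
# completeness) is defined in the paper and its released library.
import numpy as np

def chamfer_distance(a, b):
    """a: (N, 3), b: (M, 3). Returns the symmetric Chamfer distance."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

gt = np.random.rand(1000, 3)                   # ground-truth scan points
pred = gt + 0.01 * np.random.randn(1000, 3)    # a slightly perturbed recovery
print(chamfer_distance(pred, gt))              # small for a good reconstruction
```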

3DBooSTeR: 3D Body Shape and Texture Recovery
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from a textured partial 3D scan. With the advent of virtual and augmented reality, there is a demand for creating realistic and high-fidelity digital 3D human representations. However, 3D scanning systems can capture the 3D human body shape only up to some level of defects, due to its complexity, including occlusion between body parts, varying levels of detail, shape deformations and the articulated skeleton. Textured 3D mesh completion is thus important to enhance 3D acquisitions. The proposed approach decouples the shape and texture completion into two sequential tasks. The shape is recovered by an encoder-decoder network deforming a template body mesh. The texture is subsequently obtained by projecting the partial texture onto the template mesh before inpainting the corresponding texture map with a novel approach. The approach is validated on the 3DBodyTex.v2 dataset.
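
A minimal sketch of the template-deformation idea follows, assuming a PointNet-style encoder and a decoder that regresses per-vertex offsets; the actual 3DBooSTeR architecture, template, and losses differ and are detailed in the paper.

```python
# Hedged sketch of the shape-completion idea: an encoder-decoder that
# deforms a fixed template body mesh by predicting per-vertex offsets.
# This mirrors the high-level description only; it is not the actual
# 3DBooSTeR network.
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, template_vertices, latent_dim=512):
        super().__init__()
        # template_vertices: (V, 3) vertices of the template body mesh.
        self.register_buffer("template", template_vertices)
        v = template_vertices.shape[0]
        self.encoder = nn.Sequential(   # PointNet-style encoder of the partial scan
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        self.decoder = nn.Sequential(   # regress one 3D offset per template vertex
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, v * 3),
        )

    def forward(self, partial_scan):    # partial_scan: (B, N, 3)
        code = self.encoder(partial_scan.transpose(1, 2)).max(dim=2).values
        offsets = self.decoder(code).view(-1, *self.template.shape)
        return self.template + offsets  # completed mesh vertices, (B, V, 3)

template = torch.rand(6890, 3)             # SMPL-sized template (an assumption)
model = TemplateDeformer(template)
completed = model(torch.rand(2, 4096, 3))  # (2, 6890, 3)
```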

Going Deeper with Neural Networks without Skip Connections
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi, UAE, Oct 25–28, 2020 (2020, May 30)

3D Deformation Signature for Dynamic Face Recognition
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona 4-8 May 2020 (2020, May)

Structured Compression of Deep Neural Networks with Debiased Elastic Group LASSO
Oyedotun, Oyebade UL; Aouada, Djamila UL; Ottersten, Björn UL

in IEEE 2020 Winter Conference on Applications of Computer Vision (WACV 20), Aspen, Colorado, US, March 2–5, 2020 (2020, March 01)

Towards Automatic CAD Modeling from 3D Scan Sketch based Representation
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020), GRAPP (2020, February)

DeepVI: A Novel Framework for Learning Deep View-Invariant Human Action Representations using a Single RGB Camera
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Oyedotun, Oyebade UL et al

in IEEE International Conference on Automatic Face and Gesture Recognition, Buenos Aires 18-22 May 2020 (2020)

Towards Generalization of 3D Human Pose Estimation In The Wild
Baptista, Renato UL; Saint, Alexandre Fabian A UL; Al Ismaeil, Kassem UL et al

in International Conference on Pattern Recognition (ICPR) Workshop on 3D Human Understanding (2020)

In this paper, we propose 3DBodyTex.Pose, a dataset that addresses the task of 3D human pose estimation in the wild. Generalization to in-the-wild images remains limited due to the lack of adequate datasets. Existing ones are usually collected in indoor controlled environments where motion capture systems are used to obtain the 3D ground-truth annotations of humans. 3DBodyTex.Pose offers high-quality and rich data containing 405 different real subjects in various clothing and poses, and 81k image samples with ground-truth 2D and 3D pose annotations. These images are generated from 200 viewpoints, of which 70 are challenging extreme viewpoints. This data was created starting from high-resolution textured 3D body scans and by incorporating various realistic backgrounds. Retraining a state-of-the-art 3D pose estimation approach with data augmented with 3DBodyTex.Pose showed promising improvement in the overall performance, and a noticeable decrease in the per-joint position error when testing on challenging viewpoints. 3DBodyTex.Pose is expected to offer the research community new possibilities for generalizing 3D pose estimation from monocular in-the-wild images.
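
A minimal sketch of how the 2D annotations could follow from the 3D ones when rendering a scan from a known viewpoint: projecting the 3D joints with a pinhole camera. The intrinsics and camera pose below are illustrative assumptions, not the dataset's actual calibration.

```python
# Project 3D joints to 2D with a pinhole camera -- the standard way to
# derive 2D pose labels from 3D ones for a rendered viewpoint. All
# camera parameters here are illustrative assumptions.
import numpy as np

def project_joints(joints_3d, K, R, t):
    """joints_3d: (J, 3) world coords; K: (3, 3) intrinsics;
    R: (3, 3) rotation, t: (3,) translation (world -> camera)."""
    cam = joints_3d @ R.T + t          # camera coordinates
    uv = cam @ K.T                     # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]      # perspective division -> (J, 2) pixels

K = np.array([[1000., 0., 512.],
              [0., 1000., 512.],
              [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 3.])   # camera 3 m in front of the subject
joints = np.random.rand(17, 3) - 0.5       # a dummy 17-joint skeleton
print(project_joints(joints, K, R, t).shape)   # (17, 2)
```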

Fast Adaptive Reparametrization (FAR) with Application to Human Action Recognition
Ghorbel, Enjie UL; Demisse, Girum UL; Aouada, Djamila UL et al

in IEEE Signal Processing Letters (2020)

In this paper, a fast approach for curve reparametrization, called Fast Adaptive Reparametrization (FAR), is introduced. Instead of computing an optimal matching between two curves, as in Dynamic Time Warping (DTW) and elastic distance-based approaches, our method is applied to each curve independently, leading to linear computational complexity. It is based on a simple replacement of the curve parameter by a variable invariant under specific variations of reparametrization. The choice of this variable is made heuristically according to the application of interest. In addition to being fast, the proposed reparametrization can be applied not only to curves observed in Euclidean spaces but also to feature curves living in Riemannian spaces. To validate our approach, we apply it to the scenario of human action recognition using curves living in the Riemannian product space SE(3)^n. The obtained results on three benchmarks for human action recognition (MSRAction3D, Florence3D, and UTKinect) show that our approach competes with state-of-the-art methods in terms of accuracy and computational cost.
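
A minimal sketch of the underlying idea, assuming normalized arc length as the invariant variable (the paper selects the variable heuristically per application); each curve is processed independently, in linear time, with no pairwise matching.

```python
# Reparametrize each curve independently by a reparametrization-invariant
# variable -- here, normalized arc length, one illustrative choice; FAR's
# variable is chosen per application. Linear time, no DTW-style matching.
import numpy as np

def reparametrize_by_arc_length(curve, num_samples=100):
    """curve: (T, d) sampled curve. Returns (num_samples, d) resampled
    uniformly in normalized arc length."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    s /= s[-1]                                            # normalize to [0, 1]
    u = np.linspace(0.0, 1.0, num_samples)
    # Linear interpolation of each coordinate at uniform arc-length values.
    return np.stack([np.interp(u, s, curve[:, k])
                     for k in range(curve.shape[1])], axis=1)

# Two executions of the same motion at different speeds align after
# reparametrization, without any pairwise matching such as DTW.
t_slow, t_fast = np.linspace(0, 1, 200), np.linspace(0, 1, 60) ** 2
a = np.stack([np.cos(t_slow * np.pi), np.sin(t_slow * np.pi)], axis=1)
b = np.stack([np.cos(t_fast * np.pi), np.sin(t_fast * np.pi)], axis=1)
ra, rb = reparametrize_by_arc_length(a), reparametrize_by_arc_length(b)
print(np.abs(ra - rb).max())   # small: the curves now correspond pointwise
```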

Space-Time Triplet Loss Network for Dynamic 3D Face Verification
Kacem, Anis UL; Ben Abdessalem, Hamza UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020)

In this paper, we propose a new approach for 3D dynamic face verification exploiting 3D facial deformations. First, 3D faces are encoded into low-dimensional representations describing the local deformations of the faces with respect to a mean face. Second, the encoded versions of the 3D faces along a sequence are stacked into 2D arrays for temporal modeling. The resulting 2D arrays are then fed to a triplet loss network for dynamic sequence embedding. Finally, the outputs of the triplet loss network are compared using a cosine similarity measure for face verification. By projecting the feature maps of the triplet loss network into attention maps on the 3D face sequences, we are able to detect the space-time patterns that contribute most to the pairwise similarity between different 3D facial expressions of the same person. The evaluation is conducted on the publicly available BU4D dataset, which contains dynamic 3D face sequences. The obtained results are promising with respect to baseline methods.
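
A minimal sketch of the training objective, assuming a cosine-based triplet loss over sequence embeddings; the embedding network itself (the CNN over stacked 2D arrays) is abstracted away, and the margin and dimensions are assumptions, not values from the paper.

```python
# Triplet loss over sequence embeddings, with cosine similarity as the
# comparison measure -- a hedged sketch of the objective described above.
# The margin and embedding size are illustrative assumptions.
import torch
import torch.nn.functional as F

def cosine_triplet_loss(anchor, positive, negative, margin=0.3):
    """Each input: (B, D) embeddings of 3D face sequences."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    sim_ap = (a * p).sum(dim=1)    # cosine similarity, same identity
    sim_an = (a * n).sum(dim=1)    # cosine similarity, different identity
    # Penalize negatives that are more similar than positives, up to a margin.
    return F.relu(sim_an - sim_ap + margin).mean()

emb = lambda: torch.randn(8, 128, requires_grad=True)
loss = cosine_triplet_loss(emb(), emb(), emb())
loss.backward()                    # trainable end to end
```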

Temporal 3D Human Pose Estimation for Action Recognition from Arbitrary Viewpoints
Adel Musallam, Mohamed; Baptista, Renato UL; Al Ismaeil, Kassem UL et al

in 6th Annual Conf. on Computational Science & Computational Intelligence, Las Vegas 5-7 December 2019 (2019, December)

This work presents a new view-invariant action recognition system that is able to classify human actions by using a single RGB camera, including challenging camera viewpoints. Understanding actions from different viewpoints remains an extremely challenging problem, due to depth ambiguities, occlusion, and a large variety of appearances and scenes. Moreover, using only the information from the 2D perspective gives different interpretations for the same action seen from different viewpoints. Our system operates in two subsequent stages. The first stage estimates the 2D human pose using a convolutional neural network. In the next stage, the 2D human poses are lifted to 3D human poses using a temporal convolutional neural network that enforces temporal coherence over the estimated 3D poses. The estimated 3D poses from different viewpoints are then aligned to the same camera reference frame. Finally, we propose to use a temporal convolutional network-based classifier for cross-view action recognition. Our results show that we can achieve state-of-the-art view-invariant action recognition accuracy, even for challenging viewpoints, by only using RGB videos, without pre-training on synthetic or motion capture data.
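
A minimal sketch of the lifting stage, assuming a small temporal convolutional network mapping windows of 2D poses to 3D poses; the layer sizes and the 17-joint skeleton are assumptions, not the paper's exact architecture.

```python
# Temporal convolutional lifting of 2D pose sequences to 3D: 1D
# convolutions over time enforce temporal coherence. A hedged sketch
# only; widths and the 17-joint skeleton are assumptions.
import torch
import torch.nn as nn

class TemporalLifter(nn.Module):
    def __init__(self, num_joints=17, channels=256):
        super().__init__()
        self.net = nn.Sequential(
            # Input: 2D joint coordinates flattened per frame.
            nn.Conv1d(num_joints * 2, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, num_joints * 3, kernel_size=1),
        )
        self.num_joints = num_joints

    def forward(self, pose2d):          # pose2d: (B, T, J, 2)
        b, t = pose2d.shape[:2]
        x = pose2d.reshape(b, t, -1).transpose(1, 2)  # (B, J*2, T)
        y = self.net(x).transpose(1, 2)               # (B, T, J*3)
        return y.reshape(b, t, self.num_joints, 3)    # per-frame 3D poses

lifter = TemporalLifter()
poses_3d = lifter(torch.rand(2, 81, 17, 2))           # (2, 81, 17, 3)
```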

BODYFITR: Robust Automatic 3D Human Body Fitting
Saint, Alexandre Fabian A UL; Shabayek, Abd El Rahman UL; Cherenkova, Kseniya UL et al

in Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP) (2019, September 22)

This paper proposes BODYFITR, a fully automatic method to fit a human body model to static 3D scans with complex poses. Automatic and reliable 3D human body fitting is necessary for many applications related to healthcare, digital ergonomics, avatar creation and security, especially in industrial contexts for large-scale product design. Existing works either make prior assumptions on the pose, require manual annotation of the data or have difficulty handling complex poses. This work addresses these limitations by providing a novel automatic fitting pipeline with carefully integrated building blocks designed for a systematic and robust approach. It is validated on the 3DBodyTex dataset, with hundreds of high-quality 3D body scans, and shown to outperform prior works in static body pose and shape estimation, qualitatively and quantitatively. The method is also applied to the creation of realistic 3D avatars from the high-quality texture scans of 3DBodyTex, further demonstrating its capabilities.
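
As a generic illustration of one building block common to automatic fitting pipelines (not a component taken from BODYFITR), a rigid alignment of corresponding points via the SVD-based Kabsch solution is sketched below; in practice such a step typically precedes non-rigid refinement.

```python
# Rigid alignment of corresponding 3D points (Kabsch/Procrustes via SVD):
# a generic building block for fitting a template to a scan, sketched
# here for illustration only -- not code from BODYFITR.
import numpy as np

def rigid_align(source, target):
    """Least-squares R, t such that R @ source_i + t ~ target_i.
    source, target: (N, 3) corresponding points."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    H = (source - mu_s).T @ (target - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_t - R @ mu_s

src = np.random.rand(100, 3)
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
tgt = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(src, tgt)
print(np.allclose(R, R_true), np.allclose(src @ R.T + t, tgt))  # True True
```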
