References of "Cherenkova, Kseniya"
Peer Reviewed
PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
Cherenkova, Kseniya UL; Aouada, Djamila UL; Gusev, Gleb

Scientific Conference (2020, October)

We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the 3D scan - CAD model pairs. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time compared to state-of-the-art models for 3D data generation.
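The point-voxel conversion at the core of the module can be illustrated with a toy NumPy round trip between a point cloud and an occupancy grid. This is only a sketch of the data conversion, not the PVDeConv module itself (which applies learned deconvolutions to voxel features); the grid `resolution` is an assumed value.

```python
import numpy as np

def voxelize(points, resolution=32):
    """Scatter a point cloud (N, 3) with coordinates in [0, 1) into a
    binary occupancy grid of shape (resolution,) * 3."""
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

def devoxelize(grid, resolution=32):
    """Recover voxel-center points from occupied cells (a deconvolution
    module would instead predict features/offsets per cell)."""
    occ = np.argwhere(grid > 0.5)
    return (occ + 0.5) / resolution

rng = np.random.default_rng(0)
pts = rng.random((10_000, 3))      # stand-in for a 10k-point CAD scan
grid = voxelize(pts)
recon = devoxelize(grid)
n_occupied = int(grid.sum())
```

The round trip quantizes points to voxel centers, which is exactly the information loss a learned point-voxel deconvolution aims to compensate for.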

Peer Reviewed
3D Sparse Deformation Signature for Dynamic Face Recognition
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 27th IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi 25-28 October 2020 (2020, October)

Peer Reviewed
SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges, the first on 3D human scans and the second on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions and, second, on fine body details. A novel evaluation metric is proposed to jointly quantify the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are proposed to provide raw ground-truth data for the benchmarks. The datasets, along with an accompanying custom library of software routines for processing 3D scans, generating partial data, and performing the evaluation, are released to the scientific community. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/.
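A joint metric of the kind described (shape reconstruction, texture reconstruction, and amount of completed data) might be sketched as below. This toy version, with an arbitrary 0.05 coverage radius and 1:1 point correspondence assumed for the texture term, is NOT the official SHARP 2020 metric; the exact formulation is in the challenge paper.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric average nearest-neighbour distance between point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def sharp_like_score(pred_xyz, pred_rgb, gt_xyz, gt_rgb, partial_mask):
    """Toy combined score: (shape error, texture error, completeness).
    partial_mask marks ground-truth points that were present in the input."""
    shape_err = chamfer(pred_xyz, gt_xyz)
    # texture error on corresponding points (assumes aligned ordering here)
    tex_err = np.abs(pred_rgb - gt_rgb).mean()
    # fraction of missing ground-truth points now covered by a prediction
    missing = gt_xyz[~partial_mask]
    if len(missing) == 0:
        completeness = 1.0
    else:
        d = np.linalg.norm(missing[:, None] - pred_xyz[None], axis=-1)
        completeness = float((d.min(axis=1) < 0.05).mean())
    return shape_err, tex_err, completeness
```

A perfect reconstruction scores (0, 0, 1); the three terms pull in the directions the challenge evaluates jointly.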

Peer Reviewed
3DBooSTeR: 3D Body Shape and Texture Recovery
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

We propose 3DBooSTeR, a novel method to recover a textured 3D body mesh from a textured partial 3D scan. With the advent of virtual and augmented reality, there is a demand for creating realistic and high-fidelity digital 3D human representations. However, 3D scanning systems can only capture the 3D human body shape up to some level of defects due to its complexity, including occlusions between body parts, varying levels of detail, shape deformations and the articulated skeleton. Textured 3D mesh completion is thus important to enhance 3D acquisitions. The proposed approach decouples the shape and texture completion into two sequential tasks. The shape is recovered by an encoder-decoder network deforming a template body mesh. The texture is subsequently obtained by projecting the partial texture onto the template mesh before inpainting the corresponding texture map with a novel approach. The approach is validated on the 3DBodyTex.v2 dataset.
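The texture-map inpainting step can be imitated with a naive nearest-texel fill by iterative dilation. This baseline assumes at least one valid texel and wraps at the map borders via `np.roll`; it is not the paper's novel inpainting approach, only an illustration of filling holes in a projected texture map.

```python
import numpy as np

def inpaint_nearest(tex, valid):
    """Fill invalid texels with the value of a nearby valid texel by
    iterative 4-neighbour dilation over the texture map."""
    tex = tex.copy()
    valid = valid.copy()
    while not valid.all():
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted_v = np.roll(valid, (dy, dx), axis=(0, 1))
            shifted_t = np.roll(tex, (dy, dx), axis=(0, 1))
            fill = shifted_v & ~valid       # invalid texels with a valid neighbour
            tex[fill] = shifted_t[fill]
            valid |= fill
    return tex
```

In a pipeline of the decoupled kind described above, such a fill would run after projecting the partial texture onto the completed template mesh.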

Peer Reviewed
3D Deformation Signature for Dynamic Face Recognition
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona 4-8 May 2020 (2020, May)

Peer Reviewed
Towards Automatic CAD Modeling from 3D Scan Sketch based Representation
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Cherenkova, Kseniya UL et al

in Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020), GRAPP (2020, February)

Peer Reviewed
Space-Time Triplet Loss Network for Dynamic 3D Face Verification
Kacem, Anis UL; Ben Abdessalem, Hamza UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020)

In this paper, we propose a new approach for 3D dynamic face verification exploiting 3D facial deformations. First, 3D faces are encoded into low-dimensional representations describing the local deformations of the faces with respect to a mean face. Second, the encoded versions of the 3D faces along a sequence are stacked into 2D arrays for temporal modeling. The resulting 2D arrays are then fed to a triplet loss network for dynamic sequence embedding. Finally, the outputs of the triplet loss network are compared using a cosine similarity measure for face verification. By projecting the feature maps of the triplet loss network into attention maps on the 3D face sequences, we are able to detect the space-time patterns that contribute most to the pairwise similarity between different 3D facial expressions of the same person. The evaluation is conducted on the publicly available BU4D dataset which contains dynamic 3D face sequences. Obtained results are promising with respect to baseline methods.
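The triplet objective and cosine-based verification described above can be sketched in NumPy. The margin, verification threshold, and embedding dimension below are assumed values for illustration, not parameters taken from the paper.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin triplet loss on sequence embeddings: pull the
    anchor-positive distance below the anchor-negative distance."""
    d_ap = np.linalg.norm(anchor - positive, axis=-1)
    d_an = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

def verify(emb_a, emb_b, threshold=0.5):
    """Cosine-similarity verification between two embedded sequences."""
    cos = float(l2_normalize(emb_a) @ l2_normalize(emb_b))
    return cos >= threshold, cos
```

A well-trained embedding drives the loss to zero on separable triplets, after which the cosine threshold decides same/different identity.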

Peer Reviewed
BODYFITR: Robust Automatic 3D Human Body Fitting
Saint, Alexandre Fabian A UL; Shabayek, Abd El Rahman UL; Cherenkova, Kseniya UL et al

in Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP) (2019, September 22)

This paper proposes BODYFITR, a fully automatic method to fit a human body model to static 3D scans with complex poses. Automatic and reliable 3D human body fitting is necessary for many applications related to healthcare, digital ergonomics, avatar creation and security, especially in industrial contexts for large-scale product design. Existing works either make prior assumptions on the pose, require manual annotation of the data or have difficulty handling complex poses. This work addresses these limitations by providing a novel automatic fitting pipeline with carefully integrated building blocks designed for a systematic and robust approach. It is validated on the 3DBodyTex dataset, with hundreds of high-quality 3D body scans, and shown to outperform prior works in static body pose and shape estimation, qualitatively and quantitatively. The method is also applied to the creation of realistic 3D avatars from the high-quality texture scans of 3DBodyTex, further demonstrating its capabilities.
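The rigid-alignment initialization that body-fitting pipelines of this kind typically start from can be sketched with the classic Kabsch/Procrustes solution. Correspondences are assumed given here; BODYFITR's actual pipeline integrates several additional building blocks (e.g. landmark estimation and non-rigid refinement), so this is only the textbook first step.

```python
import numpy as np

def rigid_align(template, scan):
    """Least-squares rigid transform (R, t) mapping corresponding template
    points onto scan points, via SVD of the cross-covariance (Kabsch)."""
    mu_t, mu_s = template.mean(axis=0), scan.mean(axis=0)
    H = (template - mu_t).T @ (scan - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_t
    return R, t
```

Given a known rotation and translation applied to a template, the solver recovers them exactly, which makes it a convenient sanity check before any non-rigid stage.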

Peer Reviewed
3DBodyTex: Textured 3D Body Dataset
Saint, Alexandre Fabian A UL; Ahmed, Eman UL; Shabayek, Abd El Rahman UL et al

in 2018 Sixth International Conference on 3D Vision (3DV 2018) (2018)

In this paper, a dataset, named 3DBodyTex, of static 3D body scans with high-quality texture information is presented along with a fully automatic method for body model fitting to a 3D scan. 3D shape modelling is a fundamental area of computer vision that has a wide range of applications in the industry. It is becoming even more important as 3D sensing technologies are entering consumer devices such as smartphones. As the main output of these sensors is the 3D shape, many methods rely on this information alone. The 3D shape information is, however, very high dimensional and leads to models that must handle many degrees of freedom from limited information. Coupling texture and 3D shape alleviates this burden, as the texture of 3D objects is complementary to their shape. Unfortunately, high-quality texture content is lacking from commonly available datasets, and in particular in datasets of 3D body scans. The proposed 3DBodyTex dataset aims to fill this gap with hundreds of high-quality 3D body scans with high-resolution texture. Moreover, a novel fully automatic pipeline to fit a body model to a 3D scan is proposed. It includes a robust 3D landmark estimator that takes advantage of the high-resolution texture of 3DBodyTex. The pipeline is applied to the scans, and the results are reported and discussed, showcasing the diversity of the features in the dataset.
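Turning scans like these into point sets for downstream learning typically starts with uniform surface sampling. A standard area-weighted barycentric sketch (generic preprocessing, not code from the 3DBodyTex pipeline) is:

```python
import numpy as np

def sample_surface(vertices, faces, n, seed=0):
    """Draw n points uniformly over a triangle mesh's surface:
    pick triangles proportionally to area, then sample uniform
    barycentric coordinates within each chosen triangle."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(faces), size=n, p=areas / areas.sum())
    r1, r2 = rng.random(n), rng.random(n)
    u = 1.0 - np.sqrt(r1)               # uniform barycentric coordinates
    v = np.sqrt(r1) * (1.0 - r2)
    w = np.sqrt(r1) * r2
    return u[:, None] * v0[tri] + v[:, None] * v1[tri] + w[:, None] * v2[tri]
```

The square-root reparameterization is what makes the barycentric draw uniform over each triangle rather than biased toward one vertex.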
