References of "Aouada, Djamila 50000437"
Full Text
Peer Reviewed
Highway Network Block with Gates Constraints for Training Very Deep Networks
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in 2018 IEEE International Conference on Computer Vision and Pattern Recognition Workshop, June 18-22, 2018 (2018, June 19)

In this paper, we propose to reformulate the learning of the highway network block to realize both early optimization and improved generalization of very deep networks while preserving the network depth. Gate constraints are duly employed to improve optimization, latent representations and parameterization usage in order to efficiently learn hierarchical feature transformations which are crucial for the success of any deep network. One of the earliest very deep models with over 30 layers that was successfully trained relied on highway network blocks. Although highway blocks suffice for alleviating the optimization problem via improved information flow, we show for the first time that, further into training, such highway blocks may result in learning mostly untransformed features and therefore a reduction in the effective depth of the model; this could negatively impact model generalization performance. Using the proposed approach, 15-layer and 20-layer models are successfully trained with one gate and a 32-layer model using three gates. This leads to a drastic reduction of model parameters as compared to the original highway network. Extensive experiments on CIFAR-10, CIFAR-100, Fashion-MNIST and USPS datasets are performed to validate the effectiveness of the proposed approach. In particular, we outperform the original highway network and many state-of-the-art results. To the best of our knowledge, on the Fashion-MNIST and USPS datasets, the achieved results are the best reported in the literature.
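As a rough illustration of the mechanism the abstract describes: a highway block mixes a transformed signal H(x) with the untransformed input x through a learned gate T(x), and when the gate saturates toward zero the block passes features through untransformed, which is the behaviour the proposed gate constraints guard against. The sketch below (NumPy, with illustrative tanh/sigmoid choices and toy dimensions, not the paper's actual architecture) shows both regimes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_block(x, W_h, b_h, W_t, b_t):
    """One highway block: y = T(x) * H(x) + (1 - T(x)) * x.

    H is the plain transform, T the transform gate; the carry gate is
    tied to 1 - T as in the original highway formulation."""
    h = np.tanh(x @ W_h + b_h)   # candidate transformation H(x)
    t = sigmoid(x @ W_t + b_t)   # transform gate T(x) in (0, 1)
    return t * h + (1.0 - t) * x

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((1, d))
W_h = rng.standard_normal((d, d)) * 0.1
W_t = rng.standard_normal((d, d)) * 0.1

# A strongly negative gate bias drives T(x) toward 0, so the block passes
# x through almost untransformed -- the regime that reduces effective depth.
y_closed = highway_block(x, W_h, np.zeros(d), W_t, np.full(d, -10.0))
y_open   = highway_block(x, W_h, np.zeros(d), W_t, np.full(d, +10.0))

print(np.allclose(y_closed, x, atol=1e-3))               # gate closed: near-identity
print(np.allclose(y_open, np.tanh(x @ W_h), atol=1e-3))  # gate open: near-full transform
```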

Peer Reviewed
Pose Encoding for Robust Skeleton-Based Action Recognition
Demisse, Girum UL; Papadopoulos, Konstantinos UL; Aouada, Djamila UL et al

in CVPRW: Visual Understanding of Humans in Crowd Scene, Salt Lake City, Utah, June 18-22, 2018 (2018, June 18)

Some of the main challenges in skeleton-based action recognition systems are redundant and noisy pose transformations. Earlier works in skeleton-based action recognition explored different approaches for filtering linear noise transformations, but neglected to address potential nonlinear transformations. In this paper, we present an unsupervised learning approach for estimating nonlinear noise transformations in pose estimates. Our approach starts by decoupling linear and nonlinear noise transformations. While the linear transformations are modelled explicitly, the nonlinear transformations are learned from data. Subsequently, we use an autoencoder with L2-norm reconstruction error and show that it does indeed capture nonlinear noise transformations, and recovers a denoised pose estimate which in turn improves performance significantly. We validate our approach on a publicly available dataset, NW-UCLA.
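For context on the decoupling step the abstract mentions: the linear part of the noise (global translation and scale) can be removed in closed form before any learning, so only the residual nonlinear noise is left for the autoencoder. A minimal sketch of that explicit linear normalization (the joint layout and root index are illustrative, not the paper's):

```python
import numpy as np

def normalize_pose(joints, root_idx=0):
    """Remove the explicitly modelled linear transformations from a pose:
    translation (centre on a root joint) and global scale. What remains
    after this step is the nonlinear noise a learned model would target."""
    centred = joints - joints[root_idx]            # translation removed
    scale = np.linalg.norm(centred, axis=1).max()  # largest joint distance
    return centred / scale if scale > 0 else centred

pose = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 2.0]])  # toy 2-D skeleton
norm = normalize_pose(pose)
print(norm[0])   # -> [0. 0.]  (root joint sits at the origin)
```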

Full Text
Peer Reviewed
IMPROVING THE CAPACITY OF VERY DEEP NETWORKS WITH MAXOUT UNITS
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (2018, February 21)

Deep neural networks inherently have large representational power for approximating complex target functions. However, models based on rectified linear units can suffer a reduction in representation capacity due to dead units. Moreover, approximating very deep networks trained with dropout at test time can be inexact due to the several layers of non-linearities. To address the aforementioned problems, we propose to learn the activation functions of hidden units for very deep networks via maxout. However, maxout units increase the model parameters, and therefore the model may suffer from overfitting; we alleviate this problem by employing elastic net regularization. In this paper, we propose very deep networks with maxout units and elastic net regularization and show that the features learned are quite linearly separable. We perform extensive experiments and reach state-of-the-art results on the USPS and MNIST datasets. In particular, we reach an error rate of 2.19% on the USPS dataset, surpassing the human performance error rate of 2.5% and all previously reported results, including those that employed training data augmentation. On the MNIST dataset, we reach an error rate of 0.36%, which is competitive with the state-of-the-art results.
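A maxout unit replaces a fixed activation with the maximum over k learned affine pieces, so the network shapes its own nonlinearity instead of relying on a rectifier that can die. A tiny sketch (not the paper's architecture) showing that two pieces already recover the absolute-value activation:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout activation: the unit's output is max_j (x @ W[j] + b[j]),
    the maximum over k affine pieces, so the nonlinearity is learned.
    W has shape (k, d_in, d_out); b has shape (k, d_out)."""
    return np.max(np.einsum('i,kij->kj', x, W) + b, axis=0)

# Two affine pieces with weights +1 and -1 recover |x|: maxout can
# realize ReLU, |.| and other convex piecewise-linear activations.
W = np.array([[[1.0]], [[-1.0]]])   # k=2 pieces, d_in=1, d_out=1
b = np.zeros((2, 1))
print(maxout(np.array([-3.0]), W, b))   # -> [3.]
print(maxout(np.array([2.0]),  W, b))   # -> [2.]
```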

Full Text
Peer Reviewed
Anticipating Suspicious Actions using a Small Dataset of Action Templates
Baptista, Renato UL; Antunes, Michel; Aouada, Djamila UL et al

in 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP) (2018, January)

In this paper, we propose to detect an action as soon as possible and ideally before it is fully completed. The objective is to support the monitoring of surveillance videos for preventing criminal or terrorist attacks. For such a scenario, it is important to have not only high detection and recognition rates but also low time latency for the detection. Our solution consists of an adaptive sliding window approach applied in an online manner, which efficiently rejects irrelevant data. Furthermore, we exploit both spatial and temporal information by constructing feature vectors based on temporal blocks. For added efficiency, only partial template actions are considered for the detection. The relationship between the template size and latency is experimentally evaluated. We show promising preliminary experimental results using Motion Capture data with a skeleton representation of the human body.

Full Text
Peer Reviewed
Full 3D Reconstruction of Non-Rigidly Deforming Objects
Afzal, Hassan; Aouada, Djamila UL; Mirbach, Bruno et al

in ACM Transactions on Multimedia Computing, Communications, & Applications (2018)

Full Text
Peer Reviewed
A Revisit of Action Detection using Improved Trajectories
Papadopoulos, Konstantinos UL; Antunes, Michel; Aouada, Djamila UL et al

in IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 15–20 April 2018 (2018)

In this paper, we revisit trajectory-based action detection in a potent and non-uniform way. Improved trajectories have been proven to be an effective model for motion description in action recognition. In temporal action localization, however, this approach is not efficiently exploited. Trajectory features extracted from uniform video segments result in significant performance degradation for two reasons: (a) during uniform segmentation, a significant amount of noise is often added to the main action, and (b) partial actions can have a negative impact on the classifier's performance. Since uniform video segmentation seems to be insufficient for this task, we propose a two-step supervised non-uniform segmentation, performed in an online manner. Action proposals are generated using either 2D or 3D data, so action classification can be performed directly on them using the standard improved trajectories approach. We experimentally compare our method with other approaches and show improved performance on a challenging online action detection dataset.

Full Text
Peer Reviewed
Deformation Based 3D Facial Expression Representation
Demisse, Girum UL; Aouada, Djamila UL; Ottersten, Björn UL

in ACM Transactions on Multimedia Computing, Communications, & Applications (2018)

We propose a deformation based representation for analyzing expressions from 3D faces. A point cloud of a 3D face is decomposed into an ordered deformable set of curves that start from a fixed point. Subsequently, a mapping function is defined to identify the set of curves with an element of a high dimensional matrix Lie group, specifically the direct product of SE(3). Representing 3D faces as an element of a high dimensional Lie group has two main advantages. First, using the group structure, facial expressions can be decoupled from a neutral face. Second, an underlying non-linear facial expression manifold can be captured with the Lie group and mapped to a linear space, the Lie algebra of the group. This opens up the possibility of classifying facial expressions with linear models without compromising the underlying manifold. Alternatively, linear combinations of linearised facial expressions can be mapped back from the Lie algebra to the Lie group. The approach is tested on the BU-3DFE and the Bosphorus datasets. The results show that the proposed approach performs comparably on the BU-3DFE dataset without using features or extensive landmark points.

Full Text
Towards Automatic Human Body Model Fitting to a 3D Scan
Saint, Alexandre Fabian A UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in D'APUZZO, Nicola (Ed.) Proceedings of 3DBODY.TECH 2017 - 8th International Conference and Exhibition on 3D Body Scanning and Processing Technologies, Montreal QC, Canada, 11-12 Oct. 2017 (2017, October)

This paper presents a method to automatically recover a realistic and accurate body shape of a person wearing clothing from a 3D scan. Indeed, in many practical situations, people are scanned wearing clothing. The underlying body shape is thus partially or completely occluded. Yet, it is very desirable to recover the shape of a covered body as it provides a non-invasive means of measuring and analysing it. This is particularly convenient for patients in medical applications, customers in a retail shop, as well as in security applications where suspicious objects under clothing are to be detected. To recover the body shape from the 3D scan of a person in any pose, a human body model is usually fitted to the scan. Current methods rely on the manual placement of markers on the body to identify anatomical locations and guide the pose fitting. The markers are either physically placed on the body before scanning or placed in software as a postprocessing step. Some other methods detect key points on the scan using 3D feature descriptors to automate the placement of markers; they usually require a large database of 3D scans. We propose to automatically estimate the body pose of a person from a 3D mesh acquired by standard 3D body scanners, with or without texture. To fit a human model to the scan, we use joint locations as anchors. These are detected from multiple 2D views using a conventional body joint detector working on images. In contrast to existing approaches, the proposed method is fully automatic and takes advantage of the robustness of state-of-the-art 2D joint detectors. The proposed approach is validated on scans of people in different poses wearing garments of various thicknesses, and on scans of one person in multiple poses with known ground truth wearing close-fitting clothing.

Full Text
Peer Reviewed
Facial Expression Recognition via Joint Deep Learning of RGB-Depth Map Latent Representations
Oyedotun, Oyebade UL; Demisse, Girum UL; Shabayek, Abd El Rahman UL et al

in 2017 IEEE International Conference on Computer Vision Workshop (ICCVW) (2017, August 21)

Humans use facial expressions successfully for conveying their emotional states. However, replicating such success in the human-computer interaction domain is an active research problem. In this paper, we propose a deep convolutional neural network (DCNN) for joint learning of robust facial expression features from fused RGB and depth map latent representations. We posit that learning jointly from both modalities results in a more robust classifier for facial expression recognition (FER) as opposed to learning from either of the modalities independently. In particular, we construct a learning pipeline that allows us to learn several hierarchical levels of feature representations and then perform the fusion of RGB and depth map latent representations for joint learning of facial expressions. Our experimental results on the BU-3DFE dataset validate the proposed fusion approach, as a model learned from the joint modalities outperforms models learned from either of the modalities.

Full Text
Peer Reviewed
Training Very Deep Networks via Residual Learning with Stochastic Input Shortcut Connections
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in 24th International Conference on Neural Information Processing, Guangzhou, China, November 14–18, 2017 (2017, July 31)

Many works have posited the benefit of depth in deep networks. However, one of the problems encountered in the training of very deep networks is feature reuse; that is, features are 'diluted' as they are forward propagated through the model. Hence, later network layers receive less informative signals about the input data, consequently making training less effective. In this work, we address the problem of feature reuse by taking inspiration from an earlier work which employed residual learning for alleviating the problem of feature reuse. We propose a modification of residual learning for training very deep networks to realize improved generalization performance; for this, we allow stochastic shortcut connections of identity mappings from the input to hidden layers. We perform extensive experiments using the USPS and MNIST datasets. On the USPS dataset, we achieve an error rate of 2.69% without employing any form of data augmentation (or manipulation). On the MNIST dataset, we reach a comparable state-of-the-art error rate of 0.52%. Notably, these results are achieved without employing any explicit regularization technique.
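A minimal sketch of the idea as stated in the abstract: a residual update augmented with an identity shortcut from the network input that is kept stochastically during training and replaced by its expectation at test time. The keep probability, the tanh transform, and the toy sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_layer(h, x_input, W, p=0.5, train=True):
    """Residual update with a stochastic shortcut of the identity mapping
    from the network *input* x to this hidden layer: kept with probability
    p during training, replaced by its expectation p * x at test time."""
    out = h + np.tanh(h @ W)                        # standard residual learning
    if train:
        out = out + (rng.random() < p) * x_input    # stochastic input shortcut
    else:
        out = out + p * x_input                     # expected shortcut at test
    return out

d = 4
x = np.ones(d)
W = np.zeros((d, d))   # zero transform isolates the shortcut contribution
h = np.zeros(d)

print(residual_layer(h, x, W, train=False))   # -> [0.5 0.5 0.5 0.5]
```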

Full Text
Peer Reviewed
Flexible Feedback System for Posture Monitoring and Correction
Baptista, Renato UL; Antunes, Michel; Shabayek, Abd El Rahman UL et al

in IEEE International Conference on Image Information Processing (ICIIP) (2017)

In this paper, we propose a framework for guiding patients and/or users in how to correct their posture in real-time without requiring a physical or direct intervention of a therapist or a sports specialist. In order to support posture monitoring and correction, this paper presents a flexible system that continuously evaluates postural defects of the user. In case deviations from a correct posture are identified, feedback information is provided in order to guide the user to converge to an appropriate and stable body condition. The core of the proposed approach is the analysis of the motion required for aligning body-parts with respect to postural constraints and pre-specified template skeleton poses. Experimental results in two scenarios (sitting and weight lifting) show the potential of the proposed framework.
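The core loop the abstract describes, comparing the user's pose to a template and emitting human-readable corrections for deviating body parts, can be sketched as follows (joint names, coordinates, and the deviation threshold are all illustrative, not the paper's actual constraint set):

```python
import numpy as np

def posture_feedback(current, template, tol=0.15):
    """Compare a current skeleton pose to a template pose, joint by joint,
    and return human-readable correction hints for joints that deviate
    from the template by more than `tol`."""
    hints = []
    for name, cur, ref in zip(["head", "shoulder", "hip"], current, template):
        delta = np.asarray(ref, float) - np.asarray(cur, float)
        if np.linalg.norm(delta) > tol:              # postural defect detected
            direction = "up" if delta[1] > 0 else "down"
            hints.append(f"move {name} {direction}")
    return hints

template = [[0.0, 1.8], [0.0, 1.5], [0.0, 1.0]]   # upright reference pose
slouched = [[0.1, 1.6], [0.0, 1.45], [0.0, 1.0]]  # head dropped forward/down
print(posture_feedback(slouched, template))        # -> ['move head up']
```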

Full Text
Peer Reviewed
Enhanced Trajectory-based Action Recognition using Human Pose
Papadopoulos, Konstantinos UL; Goncalves Almeida Antunes, Michel UL; Aouada, Djamila UL et al

in IEEE International Conference on Image Processing, Beijing, 17-20 September 2017 (2017)

Action recognition using dense trajectories is a popular concept. However, many spatio-temporal characteristics of the trajectories are lost in the final video representation when using a single Bag-of-Words model. Also, a significant number of the extracted trajectory features are actually irrelevant to the activity being analyzed, which can considerably degrade the recognition performance. In this paper, we propose a human-tailored trajectory extraction scheme, in which trajectories are clustered using information from the human pose. Two configurations are considered: first, when exact skeleton joint positions are provided, and second, when only an estimate thereof is available. In both cases, the proposed method is further strengthened by using the concept of local Bag-of-Words, where a specific codebook is generated for each skeleton joint group. This has the advantage of adding spatial human pose awareness to the video representation, effectively increasing its discriminative power. We experimentally compare the proposed method with the standard dense trajectories approach on two challenging datasets.

Full Text
Peer Reviewed
DEFORMATION TRANSFER OF 3D HUMAN SHAPES AND POSES ON MANIFOLDS
Shabayek, Abd El Rahman UL; Aouada, Djamila UL; Saint, Alexandre Fabian A UL et al

in IEEE International Conference on Image Processing, Beijing, 17-20 September 2017 (2017)

Full Text
Peer Reviewed
Video-Based Feedback for Assisting Physical Activity
Baptista, Renato UL; Goncalves Almeida Antunes, Michel UL; Aouada, Djamila UL et al

in 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP) (2017)

In this paper, we explore the concept of providing feedback to a user moving in front of a depth camera so that they are able to replicate a specific template action. This can be used as a home-based rehabilitation system for stroke survivors, where the objective is for patients to practice and improve their daily life activities. Patients are guided in how to correctly perform an action by following feedback proposals. These proposals are presented in a human-interpretable way. In order to align an action that was performed with the template action, we explore two different approaches, namely Subsequence Dynamic Time Warping and Temporal Commonality Discovery. The first method aims to find the temporal alignment, and the second one discovers the interval of the subsequence that shares similar content, after which standard Dynamic Time Warping can be used for the temporal alignment. Then, feedback proposals can be provided in order to correct the user with respect to the template action. Experimental results show that both methods have similar accuracy rates and that computational time is the decisive factor, with Subsequence Dynamic Time Warping achieving faster results.
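Subsequence Dynamic Time Warping, the faster of the two alignment methods compared above, differs from standard DTW only in that the template may start and end anywhere inside the longer stream. A minimal 1-D sketch of the standard formulation (toy data, not the paper's skeleton features):

```python
import numpy as np

def subsequence_dtw(template, stream):
    """Subsequence DTW: align a short template to the best-matching
    subsequence of a longer stream. Returns (minimal alignment cost,
    end index of the matched subsequence in the stream)."""
    n, m = len(template), len(stream)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0   # free start: the template may begin anywhere
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - stream[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1   # free end: pick the best endpoint
    return D[n, end], end

template = [1.0, 2.0, 3.0]
stream = [0.0, 0.0, 1.0, 2.0, 3.0, 0.0]
cost, end = subsequence_dtw(template, stream)
print(cost, end)   # -> 0.0 5  (template matches stream[2:5] exactly)
```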

Full Text
Peer Reviewed
Unsupervised Vanishing Point Detection and Camera Calibration from a Single Manhattan Image with Radial Distortion
Goncalves Almeida Antunes, Michel UL; Barreto, Joao P.; Aouada, Djamila UL et al

in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (2017)

The article concerns the automatic calibration of a camera with radial distortion from a single image. It is known that, under the mild assumption of square pixels and zero skew, lines in the scene project into circles in the image, and three lines suffice to calibrate the camera up to an ambiguity between focal length and radial distortion. The calibration results highly depend on accurate circle estimation, which is hard to accomplish because lines tend to project into short circular arcs. To overcome this problem, we show that, given a short circular arc edge, it is possible to robustly determine a line that goes through the center of the corresponding circle. These lines, henceforth called Lines of Circle Centres (LCCs), are used in a new method that detects sets of parallel lines and estimates the calibration parameters, including the center and amount of distortion, focal length, and camera orientation with respect to the Manhattan frame. Extensive experiments on both semi-synthetic and real images show that our algorithm outperforms state-of-the-art approaches in unsupervised calibration from a single image, while providing more information.

Full Text
Peer Reviewed
Fraud Detection by Stacking Cost-Sensitive Decision Trees
Correa Bahnsen, Alejandro; Villegas, Sergio; Aouada, Djamila UL et al

in Data Science for Cyber-Security (DSCS), London 25-27 September (2017)

Full Text
Peer Reviewed
Deformation Based Curved Shape Representation
Demisse, Girum UL; Aouada, Djamila UL; Ottersten, Björn UL

in IEEE Transactions on Pattern Analysis & Machine Intelligence (2017)

In this paper, we introduce a deformation based representation space for curved shapes in R^n. Given an ordered set of points sampled from a curved shape, the proposed method represents the set as an element of a finite dimensional matrix Lie group. Variation due to scale and location is filtered in a preprocessing stage, while shapes that vary only in rotation are identified by an equivalence relationship. The use of a finite dimensional matrix Lie group leads to a similarity metric with an explicit geodesic solution. Subsequently, we discuss some of the properties of the metric and its relationship with a deformation by least action. Furthermore, invariance to reparametrization, or estimation of point correspondence between shapes, is formulated as an estimation of a sampling function. Thereafter, two possible approaches are presented to solve the point correspondence estimation problem. Finally, we propose an adaptation of k-means clustering for shape analysis in the proposed representation space. Experimental results show that the proposed representation is robust to uninformative cues, e.g. local shape perturbation and displacement. In comparison to state-of-the-art methods, it achieves a high precision on the Swedish and the Flavia leaf datasets and comparable results on the MPEG-7, Kimia99 and Kimia216 datasets.
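To make the representation idea concrete: an ordered curve can be encoded as a sequence of rigid motions whose successive action on a fixed starting point rebuilds the curve, and curves can then be compared in that transformation space rather than point by point. The sketch below is a simplified planar stand-in (turn angle plus step length per segment) for the paper's matrix Lie group construction, not its actual formulation:

```python
import numpy as np

def encode(points):
    """Encode an ordered planar curve as (turn angle, step length) pairs,
    one per segment -- a minimal stand-in for a sequence of rigid
    transformations whose successive action rebuilds the curve."""
    segs = np.diff(points, axis=0)
    lengths = np.linalg.norm(segs, axis=1)
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    turns = np.diff(headings, prepend=headings[0])
    turns[0] = headings[0]   # first transform sets the initial heading
    return np.stack([turns, lengths], axis=1)

def decode(code, start):
    """Successively apply the transforms to the fixed start point."""
    pts = [np.asarray(start, float)]
    heading = 0.0
    for turn, length in code:
        heading += turn
        step = length * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pts[-1] + step)
    return np.array(pts)

curve = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
rebuilt = decode(encode(curve), curve[0])
print(np.allclose(rebuilt, curve))   # -> True
```

A simple similarity score between two curves of equal length would then be the L2 distance between their codes, which is insensitive to where the curves sit in the plane.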

Full Text
Peer Reviewed
Real-Time Enhancement of Dynamic Depth Videos with Non-Rigid Deformations
Al Ismaeil, Kassem; Aouada, Djamila UL; Solignac, Thomas et al

in IEEE Transactions on Pattern Analysis & Machine Intelligence (2016), 39(10), 2045-2059

We propose a novel approach for enhancing depth videos containing non-rigidly deforming objects. Depth sensors are capable of capturing depth maps in real-time but suffer from high noise levels and low spatial resolutions. While solutions for reconstructing 3D details in static scenes, or in scenes with rigid global motions, have recently been proposed, handling unconstrained non-rigid deformations in relatively complex scenes remains a challenge. Our solution consists of a recursive dynamic multi-frame super-resolution algorithm where the relative local 3D motions between consecutive frames are directly accounted for. We rely on the assumption that these 3D motions can be decoupled into lateral motions and radial displacements. This allows us to perform a simple local per-pixel tracking where both depth measurements and deformations are dynamically optimized. The geometric smoothness is subsequently added using a multi-level L1 minimization with a bilateral total variation regularization. The performance of this method is thoroughly evaluated on both real and synthetic data. As compared to alternative approaches, the results show a clear improvement in reconstruction accuracy and in robustness to noise, to relatively large non-rigid deformations, and to topological changes. Moreover, the proposed approach, implemented on a CPU, is shown to be computationally efficient and to work in real-time.

Full Text
Peer Reviewed
Similarity Metric For Curved Shapes In Euclidean Space
Demisse, Girum UL; Aouada, Djamila UL; Ottersten, Björn UL

in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016 (2016, June 26)

In this paper, we introduce a similarity metric for curved shapes that can be described, distinctively, by ordered points. The proposed method represents a given curve as a point in the deformation space, the direct product of rigid transformation matrices, such that the successive action of the matrices on a fixed starting point reconstructs the full curve. In general, both open and closed curves are represented in the deformation space modulo shape orientation and orientation preserving diffeomorphisms. The use of direct product Lie groups to represent curved shapes leads to an explicit formula for geodesic curves and to the formulation of a similarity metric between shapes by the L2-norm on the Lie algebra. Additionally, invariance to reparametrization, or estimation of point correspondence between shapes, is performed as an intermediate step for computing geodesics. Furthermore, since there is no computation of differential quantities on the curves, our representation is more robust to local perturbations and needs no pre-smoothing. We compare our method with the elastic shape metric defined through the square root velocity (SRV) mapping, and with other shape matching approaches.

Full Text
Peer Reviewed
Enhancement of Dynamic Depth Scenes by Upsampling for Precise Super-Resolution (UP-SR)
Al Ismaeil, Kassem; Aouada, Djamila UL; Mirbach, Bruno et al

in Computer Vision and Image Understanding (2016)

Multi-frame super-resolution is the process of recovering a high resolution image or video from a set of captured low resolution images. Super-resolution approaches have been largely explored in 2-D imaging. However, their extension to depth videos is not straightforward due to the textureless nature of depth data, and to their high frequency contents coupled with fast motion artifacts. Recently, a few attempts have been introduced where only the super-resolution of static depth scenes has been addressed. In this work, we propose to enhance the resolution of dynamic depth videos with non-rigidly moving objects. The proposed approach is based on a new data model that uses densely upsampled, and cumulatively registered, versions of the observed low resolution depth frames. We show the impact of upsampling in increasing the sub-pixel accuracy and reducing the rounding error of the motion vectors. Furthermore, with the proposed cumulative motion estimation, a high registration accuracy is achieved between non-successive upsampled frames with relatively large motions. A statistical performance analysis is derived in terms of mean square error, explaining the effect of the number of observed frames and the effect of the super-resolution factor at a given noise level. We evaluate the accuracy of the proposed algorithm theoretically and experimentally as a function of the SR factor and the level of contamination with noise. Experimental results on both real and synthetic data show the effectiveness of the proposed algorithm on dynamic depth videos as compared to state-of-the-art methods.
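The abstract's claim that dense upsampling reduces the rounding error of motion vectors can be seen with a one-line calculation: on a grid upsampled by a factor r, a sub-pixel shift is quantized to the nearest 1/r pixel, so the worst-case rounding error shrinks by r. A tiny illustration (the shift value is arbitrary):

```python
import numpy as np

true_shift = 0.37   # sub-pixel motion, in low-resolution pixels

# Rounding a motion vector on the original grid (r=1) versus on densely
# upsampled grids: the quantization step is 1/r, so the error shrinks.
for r in (1, 4, 8):
    rounded = np.round(true_shift * r) / r   # nearest representable shift
    print(r, abs(rounded - true_shift))
```

This is only the quantization argument from the data model, not the full registration pipeline, which additionally accumulates motion across non-successive frames.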
