References of "Papadopoulos, Konstantinos 50022363"
Full Text
Peer Reviewed
Leveraging High-Frequency Components for Deepfake Detection
Mejri, Nesryne UL; Papadopoulos, Konstantinos UL; Aouada, Djamila UL

in IEEE Workshop on Multimedia Signal Processing (2021)

In the past years, RGB-based deepfake detection has shown notable progress thanks to the development of effective deep neural networks. However, the performance of deepfake detectors remains primarily dependent on the quality of the forged content and the level of artifacts introduced by the forgery method. To detect these artifacts, it is often necessary to separate and analyze the frequency components of an image. In this context, we propose to utilize the high-frequency components of color images by introducing an end-to-end trainable module that (a) extracts features from high-frequency components and (b) fuses them with the features of the RGB input. The module not only exploits the high-frequency anomalies present in manipulated images but also can be used with most RGB-based deepfake detectors. Experimental results show that the proposed approach boosts the performance of state-of-the-art networks, such as XceptionNet and EfficientNet, on a challenging deepfake dataset.
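
The frequency-separation step described above can be made concrete with a minimal sketch: a high-pass filter in the Fourier domain isolates the high-frequency residual in which forgery artifacts tend to live. This is not the paper's trainable module; the FFT mask and the `cutoff` value are illustrative assumptions, and the learned feature extraction and fusion steps are omitted.

```python
import numpy as np

def high_frequency_component(image: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Isolate the high-frequency content of a 2D image via an FFT high-pass mask.

    `cutoff` is the fraction of the spectrum radius below which frequencies
    are suppressed (an illustrative choice, not taken from the paper).
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = radius > cutoff * min(h, w) / 2  # keep only high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

In the paper's pipeline the output of such a filter would feed a feature-extraction branch whose features are then fused with the RGB features; the sketch only makes the frequency decomposition itself concrete.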

Full Text
Peer Reviewed
Vertex Feature Encoding and Hierarchical Temporal Modeling in a Spatio-Temporal Graph Convolutional Network for Action Recognition
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Aouada, Djamila UL et al

in International Conference on Pattern Recognition, Milan 10-15 January 2021 (2021)

Peer Reviewed
SHARP 2020: The 1st Shape Recovery from Partial Textured 3D Scans Challenge Results
Saint, Alexandre Fabian A UL; Kacem, Anis UL; Cherenkova, Kseniya UL et al

Scientific Conference (2020, August 23)

The SHApe Recovery from Partial textured 3D scans challenge, SHARP 2020, is the first edition of a challenge fostering and benchmarking methods for recovering complete textured 3D scans from raw incomplete data. SHARP 2020 is organized as a workshop in conjunction with ECCV 2020. There are two complementary challenges, the first on 3D human scans and the second on generic objects. Challenge 1 is further split into two tracks, focusing, first, on large body and clothing regions and, second, on fine body details. A novel evaluation metric is proposed to jointly quantify the shape reconstruction, the texture reconstruction, and the amount of completed data. Additionally, two unique datasets of 3D scans are provided as raw ground-truth data for the benchmarks. The datasets, together with an accompanying custom library of software routines for processing 3D scans, generating partial data, and performing the evaluation, are released to the scientific community. Results of the competition, analyzed in comparison to baselines, show the validity of the proposed evaluation metrics and highlight the challenging aspects of the task and of the datasets. Details on the SHARP 2020 challenge can be found at https://cvi2.uni.lu/sharp2020/

Full Text
Peer Reviewed
DeepVI: A Novel Framework for Learning Deep View-Invariant Human Action Representations using a Single RGB Camera
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Oyedotun, Oyebade UL et al

in IEEE International Conference on Automatic Face and Gesture Recognition, Buenos Aires 18-22 May 2020 (2020)

Full Text
Peer Reviewed
View-Invariant Action Recognition from RGB Data via 3D Pose Estimation
Baptista, Renato UL; Ghorbel, Enjie UL; Papadopoulos, Konstantinos UL et al

in IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019 (2019, May)

In this paper, we propose a novel view-invariant action recognition method using a single monocular RGB camera. View-invariance remains a very challenging topic in 2D action recognition due to the lack of 3D information in RGB images. Most successful approaches make use of the concept of knowledge transfer by projecting 3D synthetic data to multiple viewpoints. Instead of relying on knowledge transfer, we propose to augment the RGB data with a third dimension by means of 3D skeleton estimation from 2D images using a CNN-based pose estimator. To ensure view-invariance, a pre-processing step for alignment is applied, followed by data expansion as a form of denoising. Finally, a Long Short-Term Memory (LSTM) architecture is used to model the temporal dependency between skeletons. The proposed network is trained to directly recognize actions from aligned 3D skeletons. The experiments performed on the challenging Northwestern-UCLA dataset show the superiority of our approach compared to state-of-the-art ones.
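
The alignment pre-processing mentioned above can be sketched as a rigid normalization of each estimated skeleton: translate it to a reference joint and rotate it about the vertical axis so that a body axis takes a canonical orientation. The joint-index arguments and the shoulder-line choice are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def align_skeleton(joints: np.ndarray, hip: int, l_sh: int, r_sh: int) -> np.ndarray:
    """Center a 3D skeleton at the hip and rotate it about the vertical (z)
    axis so the shoulder line lies along the x-axis.

    joints: (J, 3) array of 3D joint positions; the index arguments name
    which rows are the hip and shoulders (dataset-dependent).
    """
    centered = joints - joints[hip]
    v = centered[r_sh] - centered[l_sh]
    v = np.array([v[0], v[1], 0.0])          # project the shoulder line onto x-y
    angle = np.arctan2(v[1], v[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return centered @ rot.T
```

After such an alignment, skeletons of the same action seen from different viewpoints become directly comparable, which is what lets the LSTM learn from pooled multi-view data.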

Full Text
Peer Reviewed
A View-invariant Framework for Fast Skeleton-based Action Recognition Using a Single RGB Camera
Ghorbel, Enjie UL; Papadopoulos, Konstantinos UL; Baptista, Renato UL et al

in 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, 25-27 February 2019 (2019, February)

View-invariant action recognition using a single RGB camera represents a very challenging topic due to the lack of 3D information in RGB images. Recent advances in deep learning have made it possible to extract a 3D skeleton from a single RGB image. Taking advantage of this impressive progress, we propose a simple framework for fast and view-invariant action recognition using a single RGB camera. The proposed pipeline can be seen as the association of two key steps. The first step is the estimation of a 3D skeleton from a single RGB image using a CNN-based pose estimator such as VNect. The second one computes view-invariant skeleton-based features from the estimated 3D skeletons. Experiments are conducted on two well-known benchmarks, namely the IXMAS and Northwestern-UCLA datasets. The obtained results prove the validity of our concept, which suggests a new way to address the challenge of RGB-based view-invariant action recognition.
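
As a concrete, if simplistic, example of a view-invariant skeleton descriptor of the kind the second step computes: the vector of pairwise joint distances is unchanged by any camera rotation or translation. The paper's features are more elaborate; this sketch only shows why moving to 3D skeletons makes view-invariance tractable.

```python
import numpy as np

def pairwise_distance_features(joints: np.ndarray) -> np.ndarray:
    """A toy view-invariant descriptor: the upper-triangular part of the
    pairwise joint-distance matrix. Distances are preserved by any rigid
    motion of the skeleton, hence by any change of viewpoint.

    joints: (J, 3) array of 3D joint positions.
    """
    diff = joints[:, None, :] - joints[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(joints), k=1)   # drop the symmetric/zero entries
    return d[iu]
```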

Full Text
Peer Reviewed
Two-stage RGB-based Action Detection using Augmented 3D Poses
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Baptista, Renato UL et al

in 18th International Conference on Computer Analysis of Images and Patterns, Salerno, 3-5 September 2019 (2019)

In this paper, a novel approach for action detection from RGB sequences is proposed. This concept takes advantage of the recent development of CNNs to estimate 3D human poses from a monocular camera. To show the validity of our method, we propose a 3D skeleton-based two-stage action detection approach. For localizing actions in unsegmented sequences, Relative Joint Position (RJP) and Histogram of Displacements (HOD) features are used as inputs to a k-nearest-neighbor binary classifier in order to define action segments. Afterwards, to recognize the localized action proposals, a compact Long Short-Term Memory (LSTM) network with a de-noising expansion unit is employed. Compared to previous RGB-based methods, our approach offers robustness to radial motion, view-invariance, and low computational complexity. Results on the Online Action Detection dataset show that our method outperforms earlier RGB-based approaches.
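
A rough sketch of the two localization features, under a plain reading of their names (the exact definitions in the paper may differ): RJP expresses each joint relative to a reference joint, and HOD bins the orientation of frame-to-frame joint displacements.

```python
import numpy as np

def relative_joint_positions(joints: np.ndarray, ref: int = 0) -> np.ndarray:
    """Relative Joint Positions: each joint expressed relative to a
    reference joint (the reference index is an assumption).

    joints: (T, J, 3) skeleton sequence; returns a (T, J*3) feature matrix.
    """
    return (joints - joints[:, ref:ref + 1, :]).reshape(len(joints), -1)

def histogram_of_displacements(joints: np.ndarray, bins: int = 8) -> np.ndarray:
    """Histogram of Displacements, simplified: bin the x-y orientation of
    frame-to-frame joint displacements into a normalized histogram."""
    disp = np.diff(joints, axis=0)                 # (T-1, J, 3)
    ang = np.arctan2(disp[..., 1], disp[..., 0])   # orientation in the x-y plane
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)
```

In the two-stage pipeline these per-segment features would feed the kNN binary classifier that separates action from non-action frames.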

Full Text
Peer Reviewed
Localized Trajectories for 2D and 3D Action Recognition
Papadopoulos, Konstantinos UL; Demisse, Girum UL; Ghorbel, Enjie UL et al

in Sensors (2019)

The Dense Trajectories concept is one of the most successful approaches in action recognition, suitable for scenarios involving a significant amount of motion. However, due to noise and background motion, many generated trajectories are irrelevant to the actual human activity and can potentially lead to performance degradation. In this paper, we propose Localized Trajectories as an improved version of Dense Trajectories, where motion trajectories are clustered around human body joints provided by RGB-D cameras and then encoded by local Bag-of-Words. As a result, the Localized Trajectories concept provides a more discriminative representation of actions. Moreover, we generalize Localized Trajectories to 3D by using the depth modality. One of the main advantages of 3D Localized Trajectories is that they describe radial displacements that are perpendicular to the image plane. Extensive experiments and analysis were carried out on five different datasets.
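
The clustering step at the core of Localized Trajectories can be sketched as a nearest-joint assignment; each resulting group then gets its own local Bag-of-Words codebook. Trajectory descriptors and codebook learning are omitted here, and reducing each trajectory to its mean image position is an illustrative simplification.

```python
import numpy as np

def localize_trajectories(traj_points: np.ndarray, joint_positions: np.ndarray) -> np.ndarray:
    """Assign each trajectory to its nearest skeleton joint, so a separate
    (local) Bag-of-Words codebook can be built per joint.

    traj_points: (N, 2) mean image position of each trajectory.
    joint_positions: (J, 2) 2D joint positions.
    Returns an array of joint indices, one per trajectory.
    """
    d = np.linalg.norm(traj_points[:, None, :] - joint_positions[None, :, :], axis=-1)
    return d.argmin(axis=1)
```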

Full Text
Peer Reviewed
Pose Encoding for Robust Skeleton-Based Action Recognition
Demisse, Girum UL; Papadopoulos, Konstantinos UL; Aouada, Djamila UL et al

in CVPRW: Visual Understanding of Humans in Crowd Scene, Salt Lake City, Utah, June 18-22, 2018 (2018, June 18)

Some of the main challenges in skeleton-based action recognition systems are redundant and noisy pose transformations. Earlier works in skeleton-based action recognition explored different approaches for filtering linear noise transformations, but neglected to address potential nonlinear transformations. In this paper, we present an unsupervised learning approach for estimating nonlinear noise transformations in pose estimates. Our approach starts by decoupling linear and nonlinear noise transformations. While the linear transformations are modelled explicitly, the nonlinear transformations are learned from data. Subsequently, we use an autoencoder with an L2-norm reconstruction error and show that it indeed captures nonlinear noise transformations and recovers a denoised pose estimate, which in turn improves performance significantly. We validate our approach on a publicly available dataset, NW-UCLA.
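
A toy stand-in for the denoising autoencoder: a single hidden layer trained with an L2 reconstruction loss by plain gradient descent. Layer sizes, the tanh activation, and the hyper-parameters are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def train_denoising_autoencoder(X, hidden=8, lr=0.1, epochs=100, seed=0):
    """Single-hidden-layer autoencoder with an L2 reconstruction loss,
    trained by plain gradient descent on flattened pose vectors.

    X: (N, D) array. Returns (loss_before, loss_after) so the effect of
    training on the reconstruction error is visible.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W1 = rng.normal(scale=0.1, size=(D, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, D)); b2 = np.zeros(D)

    def forward(X):
        H = np.tanh(X @ W1 + b1)
        return H, H @ W2 + b2

    _, R0 = forward(X)
    loss_before = np.mean((R0 - X) ** 2)
    for _ in range(epochs):
        H, R = forward(X)
        G = 2.0 * (R - X) / (N * D)          # dLoss/dR for the mean L2 loss
        gW2 = H.T @ G; gb2 = G.sum(axis=0)
        GH = (G @ W2.T) * (1.0 - H ** 2)     # back-propagate through tanh
        gW1 = X.T @ GH; gb1 = GH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    _, R = forward(X)
    return loss_before, np.mean((R - X) ** 2)
```

In the paper's setting the reconstruction would be the denoised pose estimate handed to the action classifier; here the returned losses merely show that training the L2 objective reduces reconstruction error.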

Full Text
Peer Reviewed
A Revisit of Action Detection using Improved Trajectories
Papadopoulos, Konstantinos UL; Antunes, Michel; Aouada, Djamila UL et al

in IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 15–20 April 2018 (2018)

In this paper, we revisit trajectory-based action detection in a potent and non-uniform way. Improved trajectories have been proven to be an effective model for motion description in action recognition. In temporal action localization, however, this approach is not efficiently exploited. Trajectory features extracted from uniform video segments result in significant performance degradation for two reasons: (a) during uniform segmentation, a significant amount of noise is often added to the main action, and (b) partial actions can have a negative impact on the classifier's performance. Since uniform video segmentation is insufficient for this task, we propose a two-step supervised non-uniform segmentation, performed in an online manner. Action proposals are generated using either 2D or 3D data, so action classification can be performed directly on them using the standard improved trajectories approach. We experimentally compare our method with other approaches and show improved performance on a challenging online action detection dataset.
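
Once a per-frame binary classifier has marked frames as action or non-action, non-uniform action proposals fall out by grouping consecutive positive frames. A minimal sketch of that grouping step (the classifier itself, and any online smoothing the paper may apply, are omitted):

```python
def frames_to_segments(labels):
    """Group consecutive positively-classified frames into (start, end)
    action proposals, both indices inclusive.

    labels: iterable of 0/1 (or bool) per-frame decisions.
    """
    segments, start = [], None
    for i, flag in enumerate(labels):
        if flag and start is None:
            start = i                       # a new segment opens
        elif not flag and start is not None:
            segments.append((start, i - 1))  # the open segment closes
            start = None
    if start is not None:                    # segment running to the last frame
        segments.append((start, len(labels) - 1))
    return segments
```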

Full Text
STARR - Decision SupporT and self-mAnagement system for stRoke survivoRs: Vision-based Rehabilitation System
Shabayek, Abd El Rahman UL; Baptista, Renato UL; Papadopoulos, Konstantinos UL et al

in European Project Space on Networks, Systems and Technologies (2017)

This chapter explains a vision-based platform developed within a European project on decision support and self-management for stroke survivors. The objective is to provide a low-cost home rehabilitation system. Our main concern is to maintain the patients' physical activity while continuously monitoring their physical and emotional state. This is essential for recovering some autonomy in daily life activities and preventing a second damaging stroke. Post-stroke patients are initially subject to physical therapy under the supervision of a health professional, who follows up on their daily physical activity and monitors their emotional state. However, due to social and economic constraints, home-based rehabilitation is eventually suggested. Our vision platform paves the way towards low-cost home rehabilitation.

Full Text
Peer Reviewed
Enhanced Trajectory-based Action Recognition using Human Pose
Papadopoulos, Konstantinos UL; Goncalves Almeida Antunes, Michel UL; Aouada, Djamila UL et al

in IEEE International Conference on Image Processing, Beijing, 17-20 September 2017 (2017)

Action recognition using dense trajectories is a popular concept. However, many spatio-temporal characteristics of the trajectories are lost in the final video representation when using a single Bag-of-Words model. Also, a significant amount of the extracted trajectory features are actually irrelevant to the activity being analyzed, which can considerably degrade the recognition performance. In this paper, we propose a human-tailored trajectory extraction scheme, in which trajectories are clustered using information from the human pose. Two configurations are considered: first, when exact skeleton joint positions are provided, and second, when only an estimate thereof is available. In both cases, the proposed method is further strengthened by using the concept of local Bag-of-Words, where a specific codebook is generated for each skeleton joint group. This has the advantage of adding spatial human-pose awareness to the video representation, effectively increasing its discriminative power. We experimentally compare the proposed method with the standard dense trajectories approach on two challenging datasets.
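
The local Bag-of-Words encoding can be sketched as one codebook per joint group, with the per-group histograms concatenated into the final representation. Codebook learning (e.g. k-means) is omitted; the codebooks here are given as plain arrays, and hard nearest-word assignment is an illustrative simplification.

```python
import numpy as np

def local_bow(features: np.ndarray, group_ids: np.ndarray, codebooks) -> np.ndarray:
    """Encode features with one codebook per joint group and concatenate
    the per-group normalized histograms (local Bag-of-Words).

    features: (N, D) trajectory descriptors; group_ids: (N,) group index
    per descriptor; codebooks: list of (K_g, D) arrays, one per group.
    """
    hists = []
    for g, cb in enumerate(codebooks):
        f = features[group_ids == g]
        hist = np.zeros(len(cb))
        if len(f):
            # hard-assign each descriptor to its nearest codeword
            d = np.linalg.norm(f[:, None, :] - cb[None, :, :], axis=-1)
            hist = np.bincount(d.argmin(axis=1), minlength=len(cb)).astype(float)
            hist /= hist.sum()
        hists.append(hist)
    return np.concatenate(hists)
```

Because each joint group keeps its own histogram, the representation records where on the body the motion words occurred, which is the spatial awareness the abstract refers to.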
