Garcia Becerro, Frederic, in Image and Vision Computing (2015), 41.

Correa Bahnsen, Alejandro, in Decision Analytics (2015), 2(5).
Customer churn predictive modeling deals with predicting the probability of a customer defecting using historical, behavioral and socio-economic information. This tool is of great benefit to subscription-based companies, allowing them to maximize the results of retention campaigns. The problem of churn predictive modeling has been widely studied by the data mining and machine learning communities. It is usually tackled with classification algorithms that learn the different patterns of both churners and non-churners. Nevertheless, current state-of-the-art classification algorithms are not well aligned with commercial goals, in the sense that the models fail to include the real financial costs and benefits during the training and evaluation phases. In the case of churn, evaluating a model on a traditional measure such as accuracy or predictive power does not yield the best results when measured by the actual financial cost, i.e., the investment per subscriber in a loyalty campaign and the financial impact of failing to detect a real churner versus wrongly predicting a non-churner as a churner. In this paper, we present a new cost-sensitive framework for customer churn predictive modeling. First, we propose a new financially based measure for evaluating the effectiveness of a churn campaign, taking into account the available portfolio of offers, their individual financial cost, and the probability of offer acceptance depending on the customer profile. Then, using a real-world churn dataset, we compare different cost-insensitive and cost-sensitive classification algorithms and measure their effectiveness based on both their predictive power and the cost optimization. The results show that using a cost-sensitive approach yields an increase in cost savings of up to 26.4%.

Al Ismaeil, Kassem, in IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'15), Best paper award (2015, June 12).
This paper proposes to enhance low resolution dynamic depth videos containing freely, non-rigidly moving objects with a new dynamic multi-frame super-resolution algorithm. Existing methods are either limited to rigid objects or restricted to global lateral motions, discarding radial displacements. We address these shortcomings by accounting for non-rigid displacements in 3D. In addition to 2D optical flow, we estimate the depth displacement and simultaneously correct the depth measurement by Kalman filtering. This concept is incorporated efficiently in a multi-frame super-resolution framework, formulated in a recursive manner that ensures an efficient deployment in real time. Results show the overall improved performance of the proposed method as compared to alternative approaches, specifically in handling relatively large 3D motions. Test examples range from a full moving human body to a highly dynamic facial video with varying expressions.

Aouada, Djamila, in 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP'15) (2015, March).
All existing methods for the statistical analysis of super-resolution approaches have stopped at the variance term, not accounting for the bias in the mean square error. In this paper, we give an original derivation of the bias term. We propose to use a patch-based method inspired by the work of (Chatterjee and Milanfar, 2009). Our approach, however, is completely new, as we derive a new affine bias model dedicated to the multi-frame super-resolution framework. We apply the proposed statistical performance analysis to the Upsampling for Precise Super-Resolution (UP-SR) algorithm. This algorithm was shown experimentally to be a good solution for enhancing the resolution of depth sequences in both cases of global and local motions. Its performance is herein analyzed theoretically in terms of its approximated mean square error, using the proposed derivation of the bias. This analysis is validated experimentally on simulated static and dynamic depth sequences with a known ground truth. It provides an insightful understanding of the effects of the noise variance, the number of observed low resolution frames, and the super-resolution factor on the final and intermediate performance of UP-SR. Our conclusion is that increasing the number of frames should improve the performance, while the error is increased by local motions and by the upsampling that is part of UP-SR.

Aouada, Djamila, in arXiv preprint arXiv:1505.04637 (2015).
Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples and not only within classes. However, standard classification methods do not take these costs into account and assume a constant cost of misclassification errors. Previous works have proposed methods that take the financial costs into account when training different algorithms, with the example-dependent cost-sensitive decision tree algorithm being the one that gives the highest savings. In this paper, we propose a new framework of ensembles of example-dependent cost-sensitive decision trees. The framework consists of creating different example-dependent cost-sensitive decision trees on random subsamples of the training set and then combining them using three different combination approaches. Moreover, we propose two new cost-sensitive combination approaches: cost-sensitive weighted voting and cost-sensitive stacking, the latter being based on the cost-sensitive logistic regression method. Finally, using five different databases from four real-world applications (credit card fraud detection, churn modeling, credit scoring and direct marketing), we evaluate the proposed method against state-of-the-art example-dependent cost-sensitive techniques, namely cost-proportionate sampling, Bayes minimum risk and cost-sensitive decision trees. The results show that the proposed algorithms achieve better results for all databases, in the sense of higher savings.

Afzal, Hassan, in 16th International Conference on Computer Analysis of Images and Patterns (2015).
In this paper, we target enhanced 3D reconstruction of non-rigidly deforming objects based on a view-independent surface representation with an automated recursive filtering scheme. This work improves upon the KinectDeform algorithm which we recently proposed. KinectDeform uses an implicit, view-dependent, volumetric truncated signed distance function (TSDF) based surface representation. The view-dependence makes its pipeline complex by requiring surface prediction and extraction steps based on the camera's field of view. This paper proposes to use an explicit projection-based Moving Least Squares (MLS) surface representation from point-sets. Moreover, the empirical weighted filtering scheme in KinectDeform is replaced by an automated fusion scheme based on a Kalman filter. We analyze the performance of the proposed algorithm both qualitatively and quantitatively and show that it is able to produce enhanced and feature-preserving 3D reconstructions.

Demisse, Girum, in 22nd IEEE International Conference on Image Processing (2015).
A statistical model for shapes in $\mathbb{R}^2$ or $\mathbb{R}^3$ is proposed. Shape modelling is a difficult problem, mainly due to the non-linear nature of the shape space. Our approach considers curves as shape contours and models their deformations with respect to a deformable template shape. Contours are uniformly sampled into a discrete sequence of points; the deformation of a shape is thus formulated as an action of transformation matrices on each of these points. A parametrized stochastic model based on a Markov process is proposed to model shape variability in the deformation space. The model's parameters are estimated from a labeled training dataset. Moreover, a similarity metric based on the Mahalanobis distance is proposed. The model has been successfully tested for shape recognition, synthesis, and retrieval.
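Two building blocks of the shape model in the entry above, uniform contour sampling and the Mahalanobis-based similarity, are simple to illustrate. The following Python sketch is not the authors' implementation: it assumes closed 2D contours given as point arrays, and the function names, the sample count `n`, and the covariance `cov` are all hypothetical.

```python
import numpy as np

def resample_contour(points, n=64):
    """Uniformly resample a closed 2D contour to n points by arc length."""
    pts = np.vstack([points, points[:1]])           # close the contour
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length
    t = np.linspace(0.0, s[-1], n, endpoint=False)  # uniform arc-length grid
    return np.stack([np.interp(t, s, pts[:, 0]),
                     np.interp(t, s, pts[:, 1])], axis=1)

def mahalanobis_distance(u, v, cov):
    """Mahalanobis distance between two deformation feature vectors,
    with cov estimated from a labeled training set."""
    d = u - v
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

In the paper, the compared vectors would encode the estimated deformation (the transformations acting on the sampled points) rather than raw coordinates, but the distance computation itself is the same.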
Correa Bahnsen, Alejandro, in 2014 13th International Conference on Machine Learning and Applications (2014, December 03).
Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. Credit scoring is a typical example of cost-sensitive classification. However, it is usually treated using methods that do not take into account the real financial costs associated with the lending business. In this paper, we propose a new example-dependent cost matrix for credit scoring. Furthermore, we propose an algorithm that introduces the example-dependent costs into a logistic regression. Using two publicly available datasets, we compare our proposed method against state-of-the-art example-dependent cost-sensitive algorithms. The results highlight the importance of using real financial costs. Moreover, by using the proposed cost-sensitive logistic regression, significant improvements are made in the sense of higher savings.

Aouada, Djamila, in 11th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS'14) (2014).
We address the privacy concerns that arise when running a nearest neighbor (NN) search on confidential data in a surveillance system composed of a client and a server. The proposed privacy-preserving NN search uses Boneh-Goh-Nissim encryption to hide both the query data captured by the client and the database records stored in the server. As opposed to state-of-the-art approaches, which rely on a large number of interactions, this encryption enables the client to fully outsource the NN computation to the server, ensuring a single-sided private computation and resulting in a one-round protocol between the server and the client. We analyze the practical feasibility of this algorithm on a face recognition problem. We formally prove and experimentally show that the resulting system maintains the recognition rate while fully preserving the privacy of both the database and the acquired faces.

Afzal, Hassan, in 22nd International Conference on Pattern Recognition (ICPR'14) (2014).
One of the most crucial requirements for building a multi-view system is the estimation of the relative poses of all cameras. An approach tailored to an RGB-D camera-based multi-view system is missing. We propose BAICP+, which combines the Bundle Adjustment (BA) and Iterative Closest Point (ICP) algorithms to take both 2D visual and 3D shape information into account in a single minimization formulation for estimating the relative pose parameters of each camera. BAICP+ is generic enough to take different types of visual features into account and can be easily adapted to varying quality of 2D and 3D data. We perform experiments on real and simulated data. Results show that, with the right weighting factor, BAICP+ performs optimally when compared to BA and ICP used independently or sequentially.
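To make the BAICP+ idea in the entry above concrete, the sketch below evaluates a single weighted objective mixing a bundle-adjustment-style 2D reprojection residual with an ICP-style 3D point-to-point residual. It is a minimal sketch under strong simplifying assumptions (known correspondences, one camera pair, plain squared-error terms); all names are hypothetical and the published formulation may differ.

```python
import numpy as np

def joint_residual(R, t, K, pts_src, pts_dst, uv_obs, w=0.5):
    """Weighted BA + ICP style cost for one candidate relative pose.
    R, t    : 3x3 rotation and 3-vector translation (source -> target)
    K       : 3x3 intrinsic matrix of the target camera
    pts_src : Nx3 points expressed in the source camera frame
    pts_dst : Nx3 corresponding points in the target camera frame
    uv_obs  : Nx2 observed pixel locations of the points in the target image
    w       : weighting factor balancing 2D visual vs. 3D shape evidence
    """
    moved = pts_src @ R.T + t                    # transform into target frame
    proj = moved @ K.T
    uv = proj[:, :2] / proj[:, 2:3]              # perspective projection
    e2d = np.sum((uv - uv_obs) ** 2)             # visual (BA-style) residual
    e3d = np.sum((moved - pts_dst) ** 2)         # shape (ICP-style) residual
    return w * e2d + (1.0 - w) * e3d
```

Minimizing such a cost over (R, t), e.g. with a nonlinear least-squares solver, is what lets one weighting factor trade off 2D against 3D data quality.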
Aouada, Djamila, in 11th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS'14) (2014).
We address the limitation of low resolution depth cameras in the context of face recognition. Considering a face as a surface in 3D, we reformulate the recently proposed Upsampling for Precise Super-Resolution algorithm as a new approach on three-dimensional points. This reformulation allows an efficient implementation and leads to a largely enhanced 3D face reconstruction. Moreover, combined with a dedicated face detection and representation pipeline, the proposed method provides an improved face recognition system using low resolution depth cameras. We show experimentally that this system increases the face recognition rate as compared to directly using the low resolution raw data.

Correa Bahnsen, Alejandro, in Proceedings of the Fourteenth SIAM International Conference on Data Mining, Philadelphia, Pennsylvania, USA, April 24-26, 2014 (2014).
Previous analysis has shown that applying Bayes minimum risk to detect credit card fraud leads to better results, measured by monetary savings, as compared with traditional methodologies. Nevertheless, this approach requires good probability estimates that not only separate well between positive and negative examples, but also assess the real probability of the event. Unfortunately, not all classification algorithms satisfy this restriction. In this paper, two different methods for calibrating probabilities are evaluated and analyzed in the context of credit card fraud detection, with the objective of finding the model that minimizes the real losses due to fraud. Even though under-sampling is often used in the context of classification with unbalanced datasets, it is shown that when probabilistic models are used to make decisions based on minimizing risk, using the full dataset provides significantly better results. In order to test the algorithms, a real dataset provided by a large European card processing company is used. It is shown that by calibrating the probabilities and then using Bayes minimum risk, the losses due to fraud are reduced. Furthermore, because of the good overall results, the aforementioned card processing company is currently incorporating the methodology proposed in this paper into its fraud detection system. Finally, the methodology has been tested on a different application, namely direct marketing.

Afzal, Hassan, in Second International Conference on 3D Vision (2014).
In this work we propose KinectDeform, an algorithm which targets enhanced 3D reconstruction of scenes containing non-rigidly deforming objects.
It provides an innovation over the existing class of algorithms, which either target scenes with rigid objects only, allow for very limited non-rigid deformations, or use pre-computed templates to track them. KinectDeform combines a fast non-rigid scene tracking algorithm, based on an octree data representation and hierarchical voxel associations, with a recursive data filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smooth, feature-preserving 3D reconstructions with reduced noise.

Al Ismaeil, Kassem, in 20th International Conference on Image Processing (2013, September).
We enhance the resolution of depth videos acquired with low resolution time-of-flight cameras. To that end, we propose a new dedicated dynamic super-resolution method that is capable of accurately super-resolving a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion, thus clearly outperforming existing super-resolution techniques, which perform poorly on depth data and are either restricted to global motions or imprecise because of an implicit estimation of motion. Our proposed approach is based on a new data model that leads to a robust registration of all depth frames after a dense upsampling. The texture-less nature of depth images allows robust handling of sequences with multiple moving objects, as confirmed by our experiments.

Al Ismaeil, Kassem, in Computer Analysis of Images and Patterns, 15th International Conference, CAIP 2013, York, UK, August 27-29, 2013, Proceedings, Part II (2013).

Al Ismaeil, Kassem, in 8th International Symposium on Image and Signal Processing and Analysis (2013).
A critical step in multi-frame super-resolution is the registration of frames based on their motion. We improve the performance of current state-of-the-art super-resolution techniques by proposing a more robust and accurate registration as early as the initialization stage of the high resolution estimate. Indeed, we solve the limitations on scale and motion inherent to the classical Shift & Add approach by upsampling the low resolution frames up to the super-resolution factor prior to estimating motion or applying median filtering. This is followed by an appropriate selective optimization, leading to an enhanced Shift & Add. Quantitative and qualitative evaluations have been conducted at two levels: the initial estimation and the final optimized super-resolution. Results show that the proposed algorithm outperforms existing state-of-the-art methods.
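The upsample-first initialization from the last entry can be sketched compactly. This is an illustration, not the published algorithm: motion is reduced to pre-estimated integer shifts on the high resolution grid (the paper estimates the motion itself, and `np.roll` wraps around at the borders), and all names are hypothetical.

```python
import numpy as np

def upsample(frame, r):
    """Zero-order (nearest-neighbour) upsampling by an integer factor r."""
    return np.kron(frame, np.ones((r, r), dtype=frame.dtype))

def shift_and_add_init(frames, shifts, r):
    """Median-based Shift & Add initialization: upsample every low
    resolution frame by the super-resolution factor r *before*
    registration, then fuse the registered stack with a per-pixel median.
    frames : list of HxW low resolution frames
    shifts : list of (dy, dx) integer shifts on the high resolution grid
    """
    stack = []
    for f, (dy, dx) in zip(frames, shifts):
        hi = upsample(f, r)                                # upsample first
        stack.append(np.roll(np.roll(hi, dy, axis=0), dx, axis=1))
    return np.median(np.stack(stack, axis=0), axis=0)      # robust fusion
```

Upsampling before motion estimation and fusion is what removes the integer-shift and global-motion restrictions of the classical Shift & Add initialization.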
Garcia Becerro, Frederic, in IET Computer Vision (2013), 7(5), 335-345.

Correa Bahnsen, Alejandro, in Proceedings of 12th International Conference on Machine Learning and Applications, ICMLA 2013 (2013), 1.
Credit card fraud is a growing problem that affects card holders around the world. Fraud detection has been an interesting topic in machine learning. Nevertheless, current state-of-the-art credit card fraud detection algorithms fail to include the real costs of credit card fraud as a measure to evaluate algorithms. In this paper, a new comparison measure that realistically represents the monetary gains and losses due to fraud detection is proposed. Moreover, using the proposed cost measure, a cost-sensitive method based on Bayes minimum risk is presented; a minimal sketch of this decision rule is given after this list. This method is compared with state-of-the-art algorithms and shows improvements of up to 23% measured by cost. The results of this paper are based on real-life transactional data provided by a large European card processing company.

Garcia Becerro, Frederic, in Computer Vision – ECCV 2012. Workshops and Demonstrations (2012).

Garcia Becerro, Frederic, in IEEE Journal of Selected Topics in Signal Processing (2012), 6(5), 1-12.
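As referenced in the credit card fraud entry above, here is a minimal sketch of an example-dependent Bayes minimum risk decision: flag a transaction when the expected loss of letting it through exceeds the cost of intervening. The cost model (a fixed administrative cost weighed against the transaction amount) is a common simplification of such frameworks, not the paper's exact cost matrix, and the default `c_admin` value is purely illustrative.

```python
def bayes_minimum_risk(p_fraud, amount, c_admin=5.0):
    """Example-dependent Bayes minimum risk decision for one transaction.
    p_fraud : calibrated probability that the transaction is fraudulent
    amount  : transaction amount, i.e. the loss if a fraud goes undetected
    c_admin : administrative cost of investigating a flagged transaction
    """
    risk_flag = c_admin            # cost paid whenever we intervene
    risk_pass = p_fraud * amount   # expected loss if we do nothing
    return risk_flag < risk_pass   # True -> predict fraud

# A 0.4% fraud probability on a 2000-unit payment is worth flagging,
# since the expected loss (8.0) exceeds the admin cost (5.0):
assert bayes_minimum_risk(0.004, 2000.0)
```

This also shows why the calibration work in the entries above matters: the rule compares expected monetary losses rather than classification scores, so miscalibrated probabilities directly distort the decision.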