Results 61-80 of 92.
Demisse, Girum, in 22nd IEEE International Conference on Image Processing (2015)

A statistical model for shapes in $\mathbb{R}^2$ or $\mathbb{R}^3$ is proposed. Shape modelling is a difficult problem, mainly due to the non-linear nature of shape space. Our approach considers curves as shape contours and models their deformations with respect to a deformable template shape. Contours are uniformly sampled into a discrete sequence of points; hence, the deformation of a shape is formulated as an action of transformation matrices on each of these points. A parametrized stochastic model based on a Markov process is proposed to model shape variability in the deformation space. The model's parameters are estimated from a labeled training dataset. Moreover, a similarity metric based on the Mahalanobis distance is proposed. Subsequently, the model has been successfully tested for shape recognition, synthesis, and retrieval.

Aouada, Djamila, in arXiv preprint arXiv:1505.04637 (2015)

Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples and not only within classes. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors.
In previous work, several methods have been proposed that incorporate financial costs into the training of different algorithms, with the example-dependent cost-sensitive decision tree yielding the highest savings. In this paper we propose a new framework of ensembles of example-dependent cost-sensitive decision trees. The framework consists in creating different example-dependent cost-sensitive decision trees on random subsamples of the training set, and then combining them using three different combination approaches. Moreover, we propose two new cost-sensitive combination approaches: cost-sensitive weighted voting and cost-sensitive stacking, the latter based on the cost-sensitive logistic regression method. Finally, using five databases from four real-world applications (credit card fraud detection, churn modeling, credit scoring and direct marketing), we evaluate the proposed method against state-of-the-art example-dependent cost-sensitive techniques, namely cost-proportionate sampling, Bayes minimum risk and cost-sensitive decision trees. The results show that the proposed algorithms achieve better results, in the sense of higher savings, on all databases.

Afzal, Hassan, in 16th International Conference on Computer Analysis of Images and Patterns (2015)

In this paper, we target enhanced 3D reconstruction of non-rigidly deforming objects based on a view-independent surface representation with an automated recursive filtering scheme. This work improves upon the KinectDeform algorithm which we recently proposed. KinectDeform uses an implicit view-dependent volumetric truncated signed distance function (TSDF) based surface representation.
The view-dependence makes its pipeline complex by requiring surface prediction and extraction steps based on the camera's field of view. This paper proposes to use an explicit projection-based Moving Least Squares (MLS) surface representation computed from point-sets. Moreover, the empirical weighted filtering scheme in KinectDeform is replaced by an automated fusion scheme based on a Kalman filter. We analyze the performance of the proposed algorithm both qualitatively and quantitatively and show that it produces enhanced and feature-preserving 3D reconstructions.

Correa Bahnsen, Alejandro, in 13th International Conference on Machine Learning and Applications (2014, December 03)

Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. Credit scoring is a typical example of cost-sensitive classification. However, it is usually treated using methods that do not take into account the real financial costs associated with the lending business. In this paper, we propose a new example-dependent cost matrix for credit scoring. Furthermore, we propose an algorithm that introduces the example-dependent costs into a logistic regression. Using two publicly available datasets, we compare our proposed method against state-of-the-art example-dependent cost-sensitive algorithms. The results highlight the importance of using real financial costs; moreover, the proposed cost-sensitive logistic regression yields significant improvements in the sense of higher savings.
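The cost-sensitive logistic regression described in the abstract above can be sketched by replacing the usual log-loss with the mean example-dependent cost. The sketch below is a minimal illustration under simplifying assumptions (zero cost for correct decisions, plain gradient descent, and a toy cost structure); it is not the paper's actual implementation.

```python
import numpy as np

def sigmoid(z):
    # Numerically safe logistic function
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def example_cost(p, y, c_fn, c_fp):
    # Expected example-dependent cost of predicting positive with probability p,
    # assuming correct decisions (TP, TN) cost nothing
    return y * (1.0 - p) * c_fn + (1.0 - y) * p * c_fp

def fit_cost_sensitive_lr(X, y, c_fn, c_fp, lr=0.1, epochs=500):
    """Gradient descent on the mean example-dependent cost instead of log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # d(cost)/dp per example: -c_fn for positives, +c_fp for negatives,
        # chained through the sigmoid derivative p * (1 - p)
        dcost_dp = (1.0 - y) * c_fp - y * c_fn
        grad = X.T @ (dcost_dp * p * (1.0 - p)) / len(y)
        w -= lr * grad
    return w
```

Because each example carries its own misclassification costs, the learned decision boundary shifts toward the class whose errors are more expensive, rather than toward raw accuracy.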
Aouada, Djamila, in 11th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS'14) (2014)

We address the privacy concerns that arise when running a nearest neighbor (NN) search on confidential data in a surveillance system composed of a client and a server. The proposed privacy-preserving NN search uses Boneh-Goh-Nissim encryption to hide both the query data captured by the client and the database records stored in the server. As opposed to state-of-the-art approaches, which rely on a large number of interactions, this encryption enables the client to fully outsource the NN computation to the server, ensuring a single-sided private computation and resulting in a one-round protocol between the server and the client. We analyze the practical feasibility of this algorithm on a face recognition problem. We formally prove and experimentally show that the resulting system maintains the recognition rate while fully preserving the privacy of both the database and the acquired faces.

Correa Bahnsen, Alejandro, in Proceedings of the Fourteenth SIAM International Conference on Data Mining, Philadelphia, Pennsylvania, USA, April 24-26, 2014

Previous analysis has shown that applying Bayes minimum risk to detect credit card fraud leads to better results, measured by monetary savings, than traditional methodologies.
Nevertheless, this approach requires good probability estimates that not only separate well between positive and negative examples, but also assess the real probability of the event. Unfortunately, not all classification algorithms satisfy this restriction. In this paper, two different methods for calibrating probabilities are evaluated and analyzed in the context of credit card fraud detection, with the objective of finding the model that minimizes the real losses due to fraud. Even though under-sampling is often used for classification with unbalanced datasets, it is shown that when probabilistic models are used to make decisions based on minimizing risk, using the full dataset provides significantly better results. To test the algorithms, a real dataset provided by a large European card processing company is used. It is shown that by calibrating the probabilities and then applying Bayes minimum risk, the losses due to fraud are reduced. Furthermore, because of the good overall results, the aforementioned card processing company is currently incorporating the proposed methodology into its fraud detection system. Finally, the methodology has been tested on a different application, namely direct marketing.

Afzal, Hassan, in 22nd International Conference on Pattern Recognition (ICPR'14) (2014)

One of the most crucial requirements for building a multi-view system is the estimation of the relative poses of all cameras. An approach tailored to an RGB-D camera-based multi-view system is missing.
We propose BAICP+, which combines the Bundle Adjustment (BA) and Iterative Closest Point (ICP) algorithms to take both 2D visual and 3D shape information into account in a single minimization formulation for estimating the relative pose parameters of each camera. BAICP+ is generic enough to accommodate different types of visual features and can be easily adapted to varying quality of 2D and 3D data. We perform experiments on real and simulated data. Results show that, with the right weighting factor, BAICP+ performs optimally compared to BA and ICP used independently or sequentially.

Aouada, Djamila, in 11th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS'14) (2014)

We address the limitation of low resolution depth cameras in the context of face recognition. Considering a face as a surface in 3-D, we reformulate the recently proposed Upsampling for Precise Super-Resolution algorithm as a new approach operating on three-dimensional points. This reformulation allows an efficient implementation and leads to a largely enhanced 3-D face reconstruction. Moreover, combined with a dedicated face detection and representation pipeline, the proposed method provides an improved face recognition system using low resolution depth cameras. We show experimentally that this system increases the face recognition rate compared to directly using the low resolution raw data.

Afzal, Hassan, in Second International Conference on 3D Vision (2014)

In this work we propose KinectDeform, an algorithm which targets enhanced 3D reconstruction of scenes containing non-rigidly deforming objects.
It provides an innovation over the existing class of algorithms, which either target scenes with rigid objects only, allow for very limited non-rigid deformations, or use pre-computed templates for tracking. KinectDeform combines a fast non-rigid scene tracking algorithm based on an octree data representation and hierarchical voxel associations with a recursive data filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smooth, feature-preserving 3D reconstructions with reduced noise.

Al Ismaeil, Kassem, in 20th International Conference on Image Processing (2013, September)

We enhance the resolution of depth videos acquired with low resolution time-of-flight cameras. To that end, we propose a new dedicated dynamic super-resolution capable of accurately super-resolving a depth sequence containing one or multiple moving objects without strong constraints on their shape or motion. It thereby clearly outperforms existing super-resolution techniques, which perform poorly on depth data and are either restricted to global motions or imprecise because of an implicit estimation of motion. Our proposed approach is based on a new data model that leads to a robust registration of all depth frames after a dense upsampling. The texture-less nature of depth images allows sequences with multiple moving objects to be handled robustly, as confirmed by our experiments.
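The upsample-then-register data model from the abstract above can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's method: nearest-neighbour upsampling, a brute-force integer-shift search, and median fusion replace the dense upsampling, robust registration, and optimization actually used.

```python
import numpy as np

def upsample(frame, factor):
    # Nearest-neighbour dense upsampling onto the super-resolution grid
    return np.kron(frame, np.ones((factor, factor)))

def register_shift(ref, frame, max_shift=3):
    # Brute-force integer-shift registration by minimum mean L1 error
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def shift_and_add(frames, factor):
    # Upsample all frames first, then register them to the first frame
    ups = [upsample(f, factor) for f in frames]
    ref = ups[0]
    aligned = [ref]
    for f in ups[1:]:
        dy, dx = register_shift(ref, f)
        aligned.append(np.roll(np.roll(f, dy, axis=0), dx, axis=1))
    # Robust (median) fusion of the registered frames
    return np.median(np.stack(aligned), axis=0)
```

Upsampling before motion estimation means shifts are resolved on the fine grid, which is the key idea the depth super-resolution papers in this listing exploit.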
Correa Bahnsen, Alejandro, in 12th International Conference on Machine Learning and Applications (2013)

Credit card fraud is a growing problem that affects card holders around the world, and fraud detection has been an interesting topic in machine learning. Nevertheless, current state-of-the-art credit card fraud detection algorithms fail to include the real costs of credit card fraud as a measure for evaluating algorithms. In this paper, a new comparison measure that realistically represents the monetary gains and losses due to fraud detection is proposed. Moreover, using the proposed cost measure, a cost-sensitive method based on Bayes minimum risk is presented. This method is compared with state-of-the-art algorithms and shows improvements of up to 23% measured by cost. The results of this paper are based on real-life transactional data provided by a large European card processing company.

Correa Bahnsen, Alejandro, in International Conference on Machine Learning and Applications, 2013 IEEE 12th (2013)

Al Ismaeil, Kassem, in 8th International Symposium on Image and Signal Processing and Analysis (2013)

A critical step in multi-frame super-resolution is the registration of frames based on their motion.
We improve the performance of current state-of-the-art super-resolution techniques by proposing a more robust and accurate registration as early as the initialization stage of the high resolution estimate. Indeed, we solve the limitations on scale and motion inherent to the classical Shift & Add approach by upsampling the low resolution frames up to the super-resolution factor prior to estimating motion or applying median filtering. This is followed by an appropriate selective optimization, leading to an enhanced Shift & Add. Quantitative and qualitative evaluations have been conducted at two levels: the initial estimation and the final optimized super-resolution. Results show that the proposed algorithm outperforms existing state-of-the-art methods.

Al Ismaeil, Kassem, in Computer Analysis of Images and Patterns, 15th International Conference, CAIP 2013, York, UK, August 27-29, 2013, Proceedings, Part II (2013)

Garcia Becerro, Frederic, in IET Computer Vision (2013), 7(5), 335-345

Garcia Becerro, Frederic, in 19th IEEE International Conference on Image Processing (2012)

Garcia Becerro, Frederic, in IEEE Journal of Selected Topics in Signal Processing (2012), 6(5), 1-12

Al Ismaeil, Kassem, in Pattern Recognition (ICPR), 2012 21st International Conference on (2012)

The well-known bilateral filter is used to smooth noisy images while keeping their edges. This filter is commonly used with Gaussian kernel functions without real justification.
The choice of the kernel functions has a major effect on the filter's behavior. We propose to use exponential kernels with L1 distances instead of Gaussian ones. We derive Stein's Unbiased Risk Estimate to find the optimal parameters of the new filter and compare its performance with the conventional one. We show that this new choice of kernels has a comparable smoothing effect but yields sharper edges, due to the faster, smoothly decaying kernels.

Garcia Becerro, Frederic, in Computer Vision – ECCV 2012. Workshops and Demonstrations (2012)

Schaffer, Peter, in Workshop on Privacy in the Electronic Society (WPES) (2011)
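The kernel swap discussed in the ICPR 2012 abstract can be illustrated with a minimal bilateral filter using exponential kernels on L1 distances. Parameter names and values here are illustrative assumptions; the SURE-optimized parameters from the paper are not reproduced.

```python
import numpy as np

def bilateral_filter_l1(img, radius=2, lam_s=1.0, lam_r=0.1):
    """Bilateral filter with exponential kernels on L1 distances:
    w(p, q) = exp(-(|dy| + |dx|) / lam_s) * exp(-|I(p) - I(q)| / lam_r)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((H, W), dtype=float)
    weights = np.zeros((H, W), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Neighbour values at offset (dy, dx) for every pixel at once
            shifted = pad[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
            w = (np.exp(-(abs(dy) + abs(dx)) / lam_s)
                 * np.exp(-np.abs(shifted - img) / lam_r))
            out += w * shifted
            weights += w
    return out / weights
```

Because the exponential range kernel decays faster than a Gaussian near zero, pixels across an edge receive lower weight, which is what produces the sharper edges reported in the abstract.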