References of "Vlassis, Nikos 40021183"
     in
Bookmark and Share    
Peer Reviewed
Skin detection using the EM algorithm with spatial constraints
Diplaros, A.; Gevers, T.; Vlassis, Nikos UL

in Proc. Int. Conf. on Systems, Man and Cybernetics (2004)

In this paper, we propose a color-based method for skin detection and segmentation which also takes into account the spatial coherence of the skin pixels. We treat skin detection as an inference problem: each pixel in an image has an associated hidden binary label that specifies whether it is skin or not. To solve the inference problem, we use a variational EM algorithm, which incorporates the spatial constraints with only a small computational overhead in the E-step. Finally, we show that our method provides better results than the standard EM algorithm and the state-of-the-art skin detection method of Jones and Rehg (2002).
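
The E-step modification described here can be pictured with a minimal sketch: a two-class (skin / non-skin) Gaussian color model whose E-step responsibilities are smoothed over a spatial window. This is an illustrative simplification of the spatial-coherence idea, not the paper's variational formulation, and all names in it are hypothetical.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from scipy.stats import multivariate_normal as mvn

    def spatial_em(img, iters=20, win=5):
        """img: (H, W, 3) color image; returns per-pixel skin responsibilities."""
        H, W, _ = img.shape
        X = img.reshape(-1, 3).astype(float)
        r = np.random.default_rng(0).random(H * W)  # initial soft labels
        for _ in range(iters):
            # M-step: weighted Gaussian fit for the skin and non-skin classes
            params = []
            for w in (r, 1.0 - r):
                mu = (w[:, None] * X).sum(0) / w.sum()
                d = X - mu
                cov = (w[:, None] * d).T @ d / w.sum() + 1e-6 * np.eye(3)
                params.append((w.mean(), mu, cov))
            # E-step: posterior skin probability, then spatial smoothing
            ps = params[0][0] * mvn.pdf(X, params[0][1], params[0][2])
            pn = params[1][0] * mvn.pdf(X, params[1][1], params[1][2])
            post = ps / (ps + pn + 1e-300)
            r = uniform_filter(post.reshape(H, W), size=win).ravel()
        return r.reshape(H, W)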

Peer Reviewed
Sparse Cooperative Q-learning
Kok, Jelle R.; Vlassis, Nikos UL

in Proc. 21st Int. Conf. on Machine Learning, Banff, Canada (2004)

Learning in multiagent systems suffers from the fact that both the state and the action space scale exponentially with the number of agents. In this paper we are interested in using Q-learning to learn the coordinated actions of a group of cooperative agents, using a sparse representation of the joint state-action space of the agents. We first examine a compact representation in which the agents need to explicitly coordinate their actions only in a predefined set of states. Next, we use a coordination-graph approach in which we represent the Q-values by value rules that specify the coordination dependencies of the agents at particular states. We show how Q-learning can be efficiently applied to learn a coordinated policy for the agents in this framework. We demonstrate the proposed method on the predator-prey domain and compare it with other related multiagent Q-learning methods.
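
As a rough illustration of Q-learning with a decomposed value function, the tabular sketch below assumes the simplest agent-wise case: the global Q-value is a sum of per-agent local terms and the TD error is divided evenly over them. The paper's value-rule and coordination-graph machinery generalizes this special case; all names here are hypothetical.

    def td_update(Q, s, a, r, s2, actions, alpha=0.2, gamma=0.95):
        """Q: list of dicts, one per agent, mapping (state, action) -> value.
        a: joint action (one entry per agent); actions: the action set,
        assumed shared by all agents for simplicity."""
        n = len(Q)
        q_sa = sum(Q[i].get((s, a[i]), 0.0) for i in range(n))
        # with a purely agent-wise decomposition, maximizing the joint
        # Q-value splits into independent per-agent maximizations
        q_next = sum(max(Q[i].get((s2, ai), 0.0) for ai in actions)
                     for i in range(n))
        delta = r + gamma * q_next - q_sa
        for i in range(n):
            Q[i][(s, a[i])] = Q[i].get((s, a[i]), 0.0) + alpha * delta / n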

Peer Reviewed
Non-linear CCA and PCA by Alignment of Local Models
Verbeek, J. J.; Roweis, S. T.; Vlassis, Nikos UL

in Advances in Neural Information Processing Systems 16 (2004)

We propose a non-linear Canonical Correlation Analysis (CCA) method which works by coordinating or aligning mixtures of linear models. In the same way that CCA extends the idea of PCA, our work extends recent methods for non-linear dimensionality reduction to the case where multiple embeddings of the same underlying low-dimensional coordinates are observed, each lying on a different high-dimensional manifold.

Peer Reviewed
Household robots look and learn
Kröse, B.; Bunschoten, R.; ten Hagen, S. et al.

in IEEE Robotics & Automation Magazine (2004), 11(4), 45-52

Peer Reviewed
The global k-means clustering algorithm
Likas, Aristidis; Vlassis, Nikos UL; Verbeek, Jakob J.

in Pattern Recognition (2003), 36(2), 451-461

We present the global k-means algorithm, an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N (where N is the size of the data set) executions of the k-means algorithm from suitable initial positions. We also propose modifications of the method that reduce the computational load without significantly affecting solution quality. The proposed clustering methods are tested on well-known data sets and they compare favorably to the k-means algorithm with random restarts.
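
The procedure is concrete enough to sketch directly. The following uses scikit-learn's KMeans as the inner solver and tries every data point as the location of the new center, which yields the N executions of k-means the abstract mentions; this is a sketch of the exhaustive version, not the authors' code, and the paper's cheaper variants prune this search.

    import numpy as np
    from sklearn.cluster import KMeans

    def global_kmeans(X, k_max):
        """Incrementally build solutions with 1..k_max centers; X is (n, d)."""
        centers = X.mean(axis=0, keepdims=True)  # the optimal 1-means solution
        for k in range(2, k_max + 1):
            best = None
            for x in X:  # one k-means run per candidate new center
                init = np.vstack([centers, x])
                km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                if best is None or km.inertia_ < best.inertia_:
                    best = km
            centers = best.cluster_centers_
        return centers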

Peer Reviewed
Self-Organization by Optimizing Free-Energy
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B. J. A.

in Proc. of European Symposium on Artificial Neural Networks (2003)

We present a variational Expectation-Maximization algorithm to learn probabilistic mixture models. The algorithm is similar to Kohonen's Self-Organizing Map algorithm and is not limited to Gaussian mixtures. We maximize a variational free energy that sums the data log-likelihood and the Kullback-Leibler divergence between a normalized neighborhood function and the posterior distribution over the components, given the data. We illustrate the algorithm with an application to word clustering.
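
In symbols, the objective described here reads as follows (our transcription, with the KL term entering as a penalty: x_n are the data, s ranges over mixture components, and q_n is the normalized neighborhood function attached to x_n):

    \mathcal{F}(q, \theta) = \sum_{n} \Big[ \log p(x_n; \theta)
        - D_{\mathrm{KL}}\big( q_n(s) \,\big\|\, p(s \mid x_n; \theta) \big) \Big]

Constraining each q_n to be a normalized neighborhood function centered on a winning component is what produces the SOM-like self-organization.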

Peer Reviewed
Multi-Robot Decision Making Using Coordination Graphs
Kok, Jelle R.; Spaan, Matthijs T. J.; Vlassis, Nikos UL

in Proceedings of the International Conference on Advanced Robotics (ICAR) (2003)

Within a group of cooperating agents, the decision making of an individual agent depends on the actions of the other agents. In dynamic environments, these dependencies change rapidly as a result of the continuously changing state. Via a context-specific decomposition of the problem into smaller subproblems, coordination graphs offer scalable solutions to the problem of multiagent decision making. We apply coordination graphs to the continuous domain by assigning roles to the agents and then coordinating the different roles. Finally, we demonstrate this method in the RoboCup soccer simulation domain.

Peer Reviewed
Efficient Greedy Learning of Gaussian Mixture Models
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B.

in Neural Computation (2003), 15(2), 469-485

This article concerns the greedy learning of Gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
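
A hedged sketch of the greedy insertion loop, using scikit-learn's GaussianMixture as the inner EM; generating candidates by sampling data points as tentative means is our simplification, not the paper's exact search heuristic.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def greedy_gmm(X, k_max, n_cand=10, seed=0):
        rng = np.random.default_rng(seed)
        gmm = GaussianMixture(n_components=1).fit(X)
        for k in range(2, k_max + 1):
            best, best_ll = None, -np.inf
            for _ in range(n_cand):
                # candidate component centered on a random data point
                mu = X[rng.integers(len(X))]
                means = np.vstack([gmm.means_, mu])
                cand = GaussianMixture(n_components=k, means_init=means).fit(X)
                ll = cand.score(X)  # mean log-likelihood of the data
                if ll > best_ll:
                    best, best_ll = cand, ll
            gmm = best
        return gmm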

Peer Reviewed
A greedy EM algorithm for Gaussian mixture learning
Vlassis, Nikos UL; Likas, A.

in Neural Processing Letters (2002), 15(1), 77-87

Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture which tries to overcome these limitations. In particular, starting with a single component and adding components sequentially until a maximum number k is reached, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture.
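
The incremental step underlying this family of methods can be written in two-component form (our paraphrase): the current k-component mixture f_k is kept fixed and mixed with a single new Gaussian component \phi,

    f_{k+1}(x) = (1 - \alpha)\, f_k(x) + \alpha\, \phi(x; \mu, \Sigma),
    \qquad 0 < \alpha < 1,

where \alpha, \mu, and \Sigma are chosen to maximize the log-likelihood of f_{k+1}; the combination of global and local search mentioned above supplies the starting point for this partial optimization.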

Peer Reviewed
A k-segments algorithm for finding principal curves
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B.

in Pattern Recognition Letters (2002), 23(8), 1009-1017

We propose an incremental method to find principal curves. Line segments are fitted and connected to form polygonal lines. New segments are inserted until a performance criterion is met. Experimental results illustrate the performance of the method compared to other existing approaches.
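
The basic building block, fitting one line segment to a set of points, reduces to a local PCA; a minimal sketch (the insertion and connection steps of the full algorithm are omitted):

    import numpy as np

    def fit_segment(P):
        """Return the endpoints of a segment fitted to the points P, shape (n, d)."""
        mu = P.mean(axis=0)
        # first principal direction of the centered points
        _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
        v = Vt[0]
        t = (P - mu) @ v  # scalar projections onto that direction
        return mu + t.min() * v, mu + t.max() * v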

Peer Reviewed
Fast nonlinear dimensionality reduction with topology representing networks
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B.

in Proc. Europ. Symp. on Artificial Neural Networks (2002)

Peer Reviewed
Supervised dimension reduction of intrinsically low-dimensional data
Vlassis, Nikos UL; Motomura, Y.; Kröse, B.

in Neural Computation (2002), 14(1), 191-215

High-dimensional data generated by a system with limited degrees of freedom are often constrained to low-dimensional manifolds in the original space. In this article, we investigate dimension-reduction methods for such intrinsically low-dimensional data through linear projections that preserve the manifold structure of the data. For intrinsically one-dimensional data, this implies projecting to a curve on the plane with as few intersections as possible. We propose a supervised projection pursuit method that can be regarded as an extension of the single-index model for nonparametric regression. We show results from a toy example and two robotic applications.

Peer Reviewed
Coordinating Principal Component Analyzers
Verbeek, Jakob J.; Vlassis, Nikos UL; Kröse, Ben J. A.

in Proc. Int. Conf. on Artificial Neural Networks, Madrid, Spain (2002)

Mixtures of Principal Component Analyzers can be used to model high-dimensional data that lie on or near a low-dimensional manifold. By linearly mapping the PCA subspaces to one global low-dimensional space, we obtain a "global" low-dimensional coordinate system for the data. As shown by Roweis et al., ensuring consistent global low-dimensional coordinates for the data can be expressed as a penalized likelihood optimization problem. We show that a restricted form of the Mixtures of Probabilistic PCA model allows for a more efficient algorithm. Experimental results are provided to illustrate the viability of the method.

Peer Reviewed
Auxiliary particle filter robot localization from high-dimensional sensor observations
Vlassis, Nikos UL; Terwijn, B.; Kröse, B.

in Proc. IEEE International Conference on Robotics and Automation (2002)

We apply the auxiliary particle filter algorithm of Pitt and Shephard (1999) to the problem of robot localization. To deal with the high-dimensional sensor observations (images) and an unknown observation model, we propose the use of an inverted nonparametric observation model computed by nearest neighbor conditional density estimation. We show that the proposed model can lead to a fully adapted optimal filter, and is able to successfully handle image occlusion and robot kidnapping. The proposed algorithm is very simple to implement and exhibits a high degree of robustness in practice. We report experiments involving robot localization from omnidirectional vision in an indoor environment.
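
For orientation, a generic auxiliary-particle-filter step in the style of Pitt and Shephard looks roughly as follows; transition_mean, transition_sample, and likelihood are placeholder callables, and the paper's inverted nonparametric observation model is not reproduced here.

    import numpy as np

    def apf_step(particles, weights, transition_mean, transition_sample,
                 likelihood, obs, rng=np.random.default_rng(0)):
        """One auxiliary particle filter step (hedged sketch)."""
        n = len(particles)
        # stage 1: look-ahead weights from each particle's predicted mean
        mu = transition_mean(particles)
        lam = weights * likelihood(obs, mu)
        lam = lam / lam.sum()
        idx = rng.choice(n, size=n, p=lam)
        # stage 2: propagate the selected particles and correct the weights
        new = transition_sample(particles[idx])
        w = likelihood(obs, new) / likelihood(obs, mu[idx])
        return new, w / w.sum()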

Peer Reviewed
Fast Nonlinear Dimensionality Reduction With Topology Preserving Networks
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B.

in Proceedings of the Tenth European Symposium on Artificial Neural Networks (2002)

We present a fast alternative to the Isomap algorithm. A set of quantizers is fitted to the data and a neighborhood structure based on the competitive Hebbian rule is imposed on it. This structure is used to obtain a low-dimensional description of the data by computing geodesic distances and applying multidimensional scaling. The quantization allows for faster processing of the data. The speed-up as compared to Isomap is roughly quadratic in the ratio between the number of quantizers and the number of data points.
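
Read literally, the pipeline is: vector quantization, a competitive-Hebbian graph linking the two quantizers nearest to each data point, shortest-path geodesic distances, and scaling. A compact sketch under those assumptions (metric MDS stands in for classical scaling, and the graph is assumed connected):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import MDS
    from scipy.sparse.csgraph import shortest_path

    def fast_isomap(X, n_quant=50, dim=2):
        C = KMeans(n_clusters=n_quant, n_init=10).fit(X).cluster_centers_
        d = np.linalg.norm(X[:, None] - C[None], axis=2)
        A = np.full((n_quant, n_quant), np.inf)  # inf marks "no edge"
        for i, j in np.argsort(d, axis=1)[:, :2]:  # competitive Hebbian rule
            A[i, j] = A[j, i] = np.linalg.norm(C[i] - C[j])
        G = shortest_path(A, method="D")  # geodesic distances on the graph
        return MDS(n_components=dim, dissimilarity="precomputed").fit_transform(G)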

Fast score function estimation with application in ICA
Vlassis, Nikos UL

in Proc. Int. Conf. on Artificial Neural Networks (2001)

Peer Reviewed
Efficient source adaptivity in independent component analysis
Vlassis, Nikos UL; Motomura, Y.

in IEEE Transactions on Neural Networks (2001), 12(3), 559-566

A basic element in most independent component analysis (ICA) algorithms is the choice of a model for the score functions of the unknown sources. While this is usually based on approximations, for large data sets it is possible to achieve "source adaptivity" by directly estimating from the data the "true" score functions of the sources. In this paper we describe an efficient scheme for achieving this by extending the fast density estimation method of Silverman (1982). We show with a real and a synthetic experiment that our method can provide more accurate solutions than state-of-the-art methods when optimization is carried out in the vicinity of the global minimum of the contrast function.
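
The score function in question is, in the usual ICA sense, the negative derivative of the log-density of each source (our notation):

    \varphi(s) = -\frac{d}{ds} \log p(s) = -\frac{p'(s)}{p(s)}

so adapting it to the sources amounts to estimating each source density and its derivative from the current source estimates, which is where the fast density estimator enters.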

Peer Reviewed
Jijo-2: An office robot that communicates and learns
Asoh, H.; Vlassis, Nikos UL; Motomura, Y. et al.

in IEEE Intelligent Systems (2001), 16(5), 46-55

Peer Reviewed
A soft k-segments algorithm for principal curves
Verbeek, J. J.; Vlassis, Nikos UL; Kröse, B.

in Artificial Neural Networks - ICANN 2001, Proceedings (2001)

We propose a new method to find principal curves for data sets. The method repeats three steps until a stopping criterion is met. In the first step, k (unconnected) line segments are fitted to the data. The second step connects the segments to form a polygonal line and evaluates the quality of the resulting polygonal line. The third step inserts a new line segment. We compare the performance of our new method with other existing methods to find principal curves.

Peer Reviewed
Learning Task-Relevant Features From Robot Data
Vlassis, Nikos UL; Bunschoten, Roland; Kröse, Ben

in Proc. IEEE International Conference on Robotics and Automation (ICRA) (2001)
