References of "Verbeek, J. J."
Self-organizing mixture models
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Neurocomputing (2005), 63

We present an expectation-maximization (EM) algorithm that yields topology preserving maps of data based on probabilistic mixture models. Our approach is applicable to any mixture model for which we have a normal EM algorithm. Compared to other mixture model approaches to self-organizing maps (SOMs), the function our algorithm maximizes has a clear interpretation: it sums data log-likelihood and a penalty term that enforces self-organization. Our approach allows principled handling of missing data and learning of mixtures of SOMs. We present example applications illustrating our approach for continuous, discrete, and mixed discrete and continuous data.

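A minimal sketch of how such a penalized EM can be realized for an isotropic Gaussian mixture, assuming uniform mixing weights and a fixed one-dimensional latent grid (all names, defaults, and simplifications here are illustrative, not the paper's implementation):

```python
import numpy as np

def som_em(X, K=25, sigma=2.0, n_iter=50, seed=0):
    """Sketch of an EM-style algorithm for a self-organizing Gaussian
    mixture: ordinary EM, except that responsibilities are replaced by
    a normalized neighborhood function centred on the 'winning' unit of
    a 1-D latent grid, which is what enforces self-organization."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X[rng.choice(N, K, replace=False)]   # component means
    var = np.full(K, X.var())                 # isotropic variances
    grid = np.arange(K)                       # latent 1-D topology
    # fixed neighborhood function over grid positions, one row per winner
    H = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / sigma ** 2)
    H /= H.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: per-component log-densities, then pick each point's
        # winner as the grid unit with the best neighborhood-smoothed fit
        ll = -0.5 * (((X[:, None, :] - mu[None]) ** 2).sum(-1) / var
                     + D * np.log(2 * np.pi * var))
        winners = (H @ ll.T).argmax(axis=0)
        R = H[winners]                        # (N, K) pseudo-responsibilities
        # M-step: ordinary mixture updates under R
        Nk = R.sum(axis=0) + 1e-9
        mu = (R.T @ X) / Nk[:, None]
        var = (R * ((X[:, None, :] - mu[None]) ** 2).sum(-1)).sum(0) / (D * Nk)
    return mu, var
```

The only departure from ordinary EM is the E-step: instead of exact posteriors, each point receives the neighborhood function of its best-matching grid unit, which couples nearby components and produces the topographic ordering.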
Non-linear CCA and PCA by Alignment of Local Models
Verbeek, J. J.; Roweis, S. T.; Vlassis, Nikos

in Advances in Neural Information Processing Systems 16 (2004)

We propose a non-linear Canonical Correlation Analysis (CCA) method which works by coordinating or aligning mixtures of linear models. In the same way that CCA extends the idea of PCA, our work extends recent methods for non-linear dimensionality reduction to the case where multiple embeddings of the same underlying low dimensional coordinates are observed, each lying on a different high dimensional manifold.

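As context for the extension claimed above, here is a sketch of the classical linear CCA building block only; the paper's actual method, which coordinates mixtures of such linear models, is not reproduced here. The function name and the eps regularizer are assumptions:

```python
import numpy as np

def linear_cca(X, Y, d=2, eps=1e-6):
    """Classical linear CCA via SVD of the whitened cross-covariance.
    Returns d pairs of maximally correlated projections of X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = Wx @ U[:, :d]           # projection directions for view X
    B = Wy @ Vt[:d].T           # projection directions for view Y
    return X @ A, Y @ B, s[:d]  # canonical variates and correlations
```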
Efficient Greedy Learning of Gaussian Mixture Models
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Neural Computation (2003), 15(2), 469-485

This article concerns the greedy learning of Gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.

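A simplified sketch of this greedy scheme, assuming isotropic components and a full EM pass after each trial insertion (the paper uses a more efficient partial-EM search; all names and defaults are illustrative):

```python
import numpy as np

def gauss_pdf(X, mu, var):
    # isotropic Gaussian densities, one column per component
    D = X.shape[1]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / var) / (2 * np.pi * var) ** (D / 2)

def em(X, w, mu, var, n_iter=20):
    # standard EM for an isotropic Gaussian mixture
    for _ in range(n_iter):
        R = w * gauss_pdf(X, mu, var) + 1e-300
        R /= R.sum(1, keepdims=True)
        Nk = R.sum(0)
        w = Nk / len(X)
        mu = (R.T @ X) / Nk[:, None]
        var = (R * ((X[:, None, :] - mu[None]) ** 2).sum(-1)).sum(0) \
              / (X.shape[1] * Nk)
    return w, mu, var

def greedy_gmm(X, K=5, n_cand=10, seed=0):
    # insert components one by one: each insertion tries several randomly
    # generated candidates and keeps the one with the best log-likelihood
    rng = np.random.default_rng(seed)
    w = np.ones(1); mu = X.mean(0)[None]; var = np.array([X.var()])
    for k in range(1, K):
        best = None
        for _ in range(n_cand):
            c = X[rng.integers(len(X))]          # candidate new mean
            w2 = np.append(w * (1 - 1 / (k + 1)), 1 / (k + 1))
            m2 = np.vstack([mu, c])
            v2 = np.append(var, var.mean())
            w2, m2, v2 = em(X, w2, m2, v2)
            ll = np.log((w2 * gauss_pdf(X, m2, v2)).sum(1)).sum()
            if best is None or ll > best[0]:
                best = (ll, w2, m2, v2)
        _, w, mu, var = best
    return w, mu, var
```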
Self-Organization by Optimizing Free-Energy
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Proc. of European Symposium on Artificial Neural Networks (2003)

We present a variational Expectation-Maximization algorithm to learn probabilistic mixture models. The algorithm is similar to Kohonen's Self-Organizing Map algorithm and not limited to Gaussian mixtures. We maximize the variational free-energy that sums data log-likelihood and Kullback-Leibler divergence between a normalized neighborhood function and the posterior distribution on the components, given data. We illustrate the algorithm with an application to word clustering.

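Made explicit, the free energy described above plausibly reads as follows, with q_n the normalized neighborhood function attached to data point x_n, s ranging over mixture components, and the Kullback-Leibler term acting as the self-organization penalty:

```latex
F(q,\theta) = \sum_n \Big( \log p(x_n;\theta)
            - D_{\mathrm{KL}}\big( q_n \,\|\, p(s \mid x_n;\theta) \big) \Big)
            = \sum_n \sum_s q_n(s) \big[ \log p(s, x_n;\theta) - \log q_n(s) \big]
```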
Fast nonlinear dimensionality reduction with topology representing networks
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Proceedings of the Tenth European Symposium on Artificial Neural Networks (2002)

We present a fast alternative to the Isomap algorithm. A set of quantizers is fit to the data and a neighborhood structure based on the competitive Hebbian rule is imposed on it. This structure is used to obtain a low-dimensional description of the data by computing geodesic distances and applying multidimensional scaling. The quantization allows for faster processing of the data. The speed-up as compared to Isomap is roughly quadratic in the ratio between the number of quantizers and the number of data points.

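A rough sketch of this pipeline: crude k-means quantization, competitive Hebbian linking (an edge between the two quantizers nearest each data point), shortest-path geodesics, and classical MDS. Parameter choices and names are assumptions, not the paper's:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def fast_isomap(X, n_quantizers=100, d=2, seed=0):
    rng = np.random.default_rng(seed)
    # crude quantization: a few iterations of k-means
    Q = X[rng.choice(len(X), n_quantizers, replace=False)]
    for _ in range(10):
        assign = ((X[:, None] - Q[None]) ** 2).sum(-1).argmin(1)
        for k in range(n_quantizers):
            if (assign == k).any():
                Q[k] = X[assign == k].mean(0)
    # competitive Hebbian rule: edge between the two quantizers
    # closest to each data point (inf marks a non-edge)
    D2 = ((X[:, None] - Q[None]) ** 2).sum(-1)
    two = np.argsort(D2, axis=1)[:, :2]
    W = np.full((n_quantizers, n_quantizers), np.inf)
    for i, j in two:
        w = np.linalg.norm(Q[i] - Q[j])
        W[i, j] = W[j, i] = w
    np.fill_diagonal(W, 0)
    # geodesic distances on the small quantizer graph, then classical MDS
    G = shortest_path(W, method="D")
    J = np.eye(n_quantizers) - 1.0 / n_quantizers
    B = -0.5 * J @ (G ** 2) @ J              # double-centred Gram matrix
    w_eig, V = np.linalg.eigh(B)
    idx = np.argsort(w_eig)[::-1][:d]
    emb = V[:, idx] * np.sqrt(np.maximum(w_eig[idx], 0))
    return emb, Q                            # embedding of the quantizers
```

Because the shortest-path and MDS steps run on the quantizers rather than the full data set, their cost shrinks with the quantizer-to-data ratio, which is the source of the speed-up claimed above.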
A k-segments algorithm for finding principal curves
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Pattern Recognition Letters (2002), 23(8), 1009-1017

We propose an incremental method to find principal curves. Line segments are fitted and connected to form polygonal lines. New segments are inserted until a performance criterion is met. Experimental results illustrate the performance of the method compared to other existing approaches.

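A minimal sketch of the incremental idea, assuming segments are fit as first principal components clipped to the data and that new segments are inserted on the worst-fit points; the paper's fitting and insertion criteria differ in detail, and all names here are illustrative:

```python
import numpy as np

def fit_segment(P):
    # fit one segment to points P: first principal direction,
    # clipped to the extent of the projected points
    c = P.mean(0)
    v = np.linalg.svd(P - c)[2][0]
    t = (P - c) @ v
    return c + t.min() * v, c + t.max() * v

def dist_to_segment(X, seg):
    # Euclidean distance from each row of X to the segment seg = (a, b)
    a, b = seg
    ab = b - a
    t = np.clip((X - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(X - (a + t[:, None] * ab), axis=1)

def k_segments(X, max_k=5, tol=1e-3):
    # insert segments one by one; between insertions, alternate between
    # assigning points to their nearest segment and refitting
    segs = [fit_segment(X)]
    err = np.inf
    while True:
        for _ in range(10):
            D = np.stack([dist_to_segment(X, s) for s in segs])
            assign = D.argmin(0)
            segs = [fit_segment(X[assign == i]) if (assign == i).sum() > 1
                    else segs[i] for i in range(len(segs))]
        D = np.stack([dist_to_segment(X, s) for s in segs])
        new_err = D.min(0).mean()
        if err - new_err < tol or len(segs) >= max_k:  # performance criterion
            return segs
        err = new_err
        resid = D.min(0)
        worst = X[resid > np.quantile(resid, 0.8)]     # worst-fit points
        segs.append(fit_segment(worst))
```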
A soft k-segments algorithm for principal curves
Verbeek, J. J.; Vlassis, Nikos; Kröse, B. J. A.

in Artificial Neural Networks - ICANN 2001, Proceedings (2001)

We propose a new method to find principal curves for data sets. The method repeats three steps until a stopping criterion is met. In the first step, k (unconnected) line segments are fitted to the data. The second step connects the segments to form a polygonal line and evaluates the quality of the resulting polygonal line. The third step inserts a new line segment. We compare the performance of our new method with that of other existing methods for finding principal curves.

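The distinctive second step, forming a polygonal line from unconnected segments, might be sketched as greedy chaining by endpoint proximity; the paper evaluates the connection quality more carefully, so this simplification and its names are assumptions:

```python
import numpy as np

def connect_segments(segments):
    # greedily chain fitted segments into one polygonal line by always
    # appending the segment whose endpoint is closest to the current end
    # of the path; `segments` is a list of (a, b) endpoint pairs
    segs = [(np.asarray(a), np.asarray(b)) for a, b in segments]
    path = list(segs.pop(0))            # start with the first segment
    while segs:
        end = path[-1]
        # remaining segment with an endpoint nearest to the current end
        dists = [min(np.linalg.norm(end - a), np.linalg.norm(end - b))
                 for a, b in segs]
        i = int(np.argmin(dists))
        a, b = segs.pop(i)
        # orient it so its nearer endpoint attaches to the path
        if np.linalg.norm(end - b) < np.linalg.norm(end - a):
            a, b = b, a
        path.extend([a, b])
    return np.array(path)               # ordered vertices of the polygonal line
```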