PANG, Jun ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS) ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
Xu, Guandong
External co-authors :
Yes
Language :
English
Title :
Hilbert Sinkhorn Divergence for Optimal Transport
Publication date :
2021
Event name :
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition - CVPR'21
Event date :
2021
Audience :
International
Main work title :
Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition - CVPR'21