A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep variational information bottleneck. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
F. Alet, E. Weng, T. Lozano-Pérez, and L. P. Kaelbling. Neural relational inference with fast modular meta-learning. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS), pages 11804-11815, 2019.
M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Proceedings of the 15th International Conference on Neural Information Processing Systems (NIPS), pages 585-591, 2001.
F. H. Biase, X. Cao, and S. Zhong. Cell fate inclination within 2-cell and 4-cell mouse embryos revealed by single-cell RNA sequencing. Genome Research, 24(11):1787-1796, 2014.
G. Brasó and L. Leal-Taixé. Learning a neural solver for multiple object tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6247-6257, 2020.
J. Casadiego, M. Nitzan, S. Hallerberg, and M. Timme. Model-free inference of direct network interactions from nonlinear collective dynamics. Nature Communications, 8, 2017.
T. E. Chan, M. P. Stumpf, and A. C. Babtie. Gene regulatory network inference from single-cell data using multivariate information measures. Cell Systems, 5(3):251-267.e3, 2017.
P. Chaudhari, A. Oberman, S. Osher, S. Soatto, and G. Carlier. Deep relaxation: partial differential equations for optimizing deep neural networks. Research in the Mathematical Sciences, 5(3):1-30, 2018.
S. Chen, J. Wang, and G. Li. Neural relational inference with efficient message passing mechanisms. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), pages 7055-7063, 2021.
Y. Chen, L. Wu, and M. Zaki. Iterative deep graph learning for graph neural networks: Better and robust node embeddings. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
B. Fatemi, L. E. Asri, and S. M. Kazemi. SLAPS: self-supervision improves structure learning for graph neural networks. In Proceedings of the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), pages 22667-22681, 2021.
C. Graber and A. G. Schwing. Dynamic neural relational inference for forecasting trajectories. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 4383-4392, 2020.
S. Ha and H. Jeong. Unraveling hidden interactions in complex systems with deep learning. Scientific Reports, 11(1):1-13, 2021.
W. L. Hamilton. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2020.
B. Ivanovic and M. Pavone. The Trajectron: Probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 2375-2384, 2019.
W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages 66-74. ACM, 2020.
D. D. Johnson. Learning graphical state transitions. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
V. Kalofolias. How to learn a graph from smooth signals. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 920-929. PMLR, 2016.
S. Khanna and V. Y. F. Tan. Economy statistical recurrent units for inferring nonlinear granger causality. In Proceedings of the 8th International Conference on Learning Representations (ICLR), 2020.
S. Kim. ppcor: An R package for a fast calculation to semi-partial correlation coefficients. Communications for Statistical Applications and Methods, 22:665-674, 2015.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel. Neural relational inference for interacting systems. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 2688-2697. PMLR, 2018.
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
J. Kwapień and S. Drożdż. Physical approach to complex systems. Physics Reports, 515(3):115-226, 2012.
A. Lamere and J. Li. LEAP: Constructing gene co-expression networks for single-cell RNA-sequencing data using pseudotime ordering. Bioinformatics, 33(5):764-766, 2016.
J. Li, F. Yang, M. Tomizuka, and C. Choi. EvolveGraph: Multi-agent trajectory prediction with dynamic relational reasoning. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
J. Li, H. Ma, Z. Zhang, J. Li, and M. Tomizuka. Spatio-temporal graph dual-attention network for multi-agent prediction and tracking. arXiv preprint arXiv:2102.09117, 2021.
Y. Li, O. Vinyals, C. Dyer, R. Pascanu, and P. W. Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
Y. Li, C. Meng, C. Shahabi, and Y. Liu. Structure-informed graph auto-encoder for relational inference and simulation. In Proceedings of ICML Workshop on Learning and Reasoning with Graph-Structured Data, page 2, 2019.
S. Löwe, D. Madras, R. Zemel, and M. Welling. Amortized causal discovery: Learning to infer causal graphs from time-series data. arXiv preprint arXiv:2006.10833, 2020.
J. J. McAuley and J. Leskovec. Learning to discover social circles in ego networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), pages 548-556, 2012.
V. Moignard, S. Woodhouse, L. Haghverdi, A. J. Lilly, Y. Tanaka, A. C. Wilkinson, F. Buettner, I. C. Macaulay, W. Jawaid, E. Diamanti, et al. Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nature Biotechnology, 33(3):269-276, 2015.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS), pages 8024-8035, 2019.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
A. Pratapa, A. P. Jalihal, J. N. Law, A. Bharadwaj, and T. Murali. Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data. Nature Methods, 17(2):147-154, 2020.
B. C. Ross. Mutual information between discrete and continuous data sets. PLOS ONE, 9:1-5, 2014.
L. Ruthotto and E. Haber. An introduction to deep generative modeling. GAMM-Mitteilungen, page e202100008, 2021.
B. Saeed, S. Panigrahi, and C. Uhler. Causal structure discovery from distributions arising from mixtures of DAGs. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 8336-8345. PMLR, 2020.
W. Saelens, R. Cannoodt, H. Todorov, and Y. Saeys. A comparison of single-cell trajectory inference methods. Nature Biotechnology, 37(5):547-554, 2019.
T. Schreiber. Measuring information transfer. Physical Review Letters, 85(2):461, 2000.
R. Selvan, T. N. Kipf, M. Welling, J. H. Pedersen, J. Petersen, and M. de Bruijne. Graph refinement based tree extraction using mean-field networks and graph neural networks. arXiv preprint arXiv:1811.08674, 2018.
R. Shwartz-Ziv and N. Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich. Network modelling methods for fMRI. NeuroImage, 54(2):875-891, 2011.
J. Tang, J. Zhang, L. Yao, J. Li, L. Zhang, and Z. Su. ArnetMiner: Extraction and mining of academic social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 990-998. ACM, 2008.
A. Tank, I. Covert, N. Foti, A. Shojaie, and E. Fox. Neural Granger causality. arXiv preprint arXiv:1802.05842, 2018.
N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. In Proceedings of 2015 IEEE Information Theory Workshop (ITW), pages 1-5. IEEE, 2015.
N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 368-377. IEEE, 1999.
M. Tsubaki, K. Tomii, and J. Sese. Compound-protein interaction prediction with end-to-end learning of neural networks for graphs and sequences. Bioinformatics, 35(2):309-318, 2019.
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.
M. J. Vowels, N. C. Camgoz, and R. Bowden. D'ya like DAGs? A survey on structure learning and causal discovery. arXiv preprint arXiv:2103.02582, 2021.
E. Webb, B. Day, H. Andres-Terre, and P. Lió. Factorised neural relational inference for multi-interaction systems. arXiv preprint arXiv:1905.08721, 2019.
T. Wu, T. Breuel, M. Skuhersky, and J. Kautz. Discovering nonlinear relations with minimum predictive information regularization. arXiv preprint arXiv:2001.01885, 2020.
K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019.
Y. Xue and P. Bogdan. Reconstructing missing complex networks against adversarial interventions. Nature Communications, 10(1):1-12, 2019.
P. Yin, S. Zhang, J. Lyu, S. Osher, Y. Qi, and J. Xin. BinaryRelax: A relaxation approach for training deep neural networks with quantized weights. SIAM Journal on Imaging Sciences, 11(4):2205-2223, 2018.
S.-S. Yu and W.-H. Tsai. Relaxation by the Hopfield neural network. Pattern Recognition, 25(2):197-209, 1992.
Y. Yu, J. Chen, T. Gao, and M. Yu. DAG-GNN: DAG structure learning with graph neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 7154-7163. PMLR, 2019.
Y. Yu, T. Gao, N. Yin, and Q. Ji. DAGs with No Curl: An efficient DAG structure learning approach. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 12156-12166. PMLR, 2021.
X. Zheng, B. Aragam, P. Ravikumar, and E. P. Xing. DAGs with NO TEARS: continuous optimization for structure learning. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.
Y. Zhu, W. Xu, J. Zhang, Q. Liu, S. Wu, and L. Wang. Deep graph structure learning for robust representations: A survey. arXiv preprint arXiv:2103.03036, 2021.