Open Access
Survey on Lie Group Machine Learning
Big Data Mining and Analytics 2020, 3 (4): 235-258
Published: 16 November 2020

Lie group machine learning is recognized as a theoretical basis of brain intelligence, brain learning, higher machine learning, and higher artificial intelligence. Sample sets of Lie group matrices are widely available in practical applications, and Lie group learning is a vibrant field of increasing importance and extraordinary potential that warrants further development. This study provides a comprehensive survey of recent advances in Lie group machine learning. We introduce Lie group machine learning techniques in three major categories: supervised Lie group machine learning, semisupervised Lie group machine learning, and unsupervised Lie group machine learning. In addition, we introduce the special application of Lie group machine learning to image processing. This work covers the following techniques: Lie group machine learning models, Lie group subspace orbit generation learning, symplectic group learning, quantum group learning, Lie group fiber bundle learning, Lie group cover learning, Lie group deep structure learning, Lie group semisupervised learning, Lie group kernel learning, tensor learning, frame bundle connection learning, spectral estimation learning, Finsler geometric learning, homology boundary learning, category representation learning, and neuromorphic synergy learning. Overall, this survey offers an insightful overview of state-of-the-art developments in the field of Lie group machine learning. It will enable researchers to understand the state of the field comprehensively, select the most appropriate tools for particular applications, and identify directions for future research.
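To make concrete what a "sample set of Lie group matrices" looks like, the following sketch (illustrative only, not taken from the survey) generates samples from the rotation group SO(3) by mapping random axis-angle vectors in the Lie algebra so(3) to rotation matrices via Rodrigues' formula, which computes the matrix exponential in closed form:

```python
import numpy as np

def so3_sample(axis_angle):
    """Map an axis-angle vector in the Lie algebra so(3) to a rotation
    matrix in the Lie group SO(3) via Rodrigues' formula (the closed-form
    matrix exponential of a 3x3 skew-symmetric matrix)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric generator
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A small "sample set of Lie group matrices": random rotations in SO(3)
rng = np.random.default_rng(0)
samples = [so3_sample(rng.standard_normal(3)) for _ in range(5)]
```

Every element produced is orthogonal with determinant 1, which is exactly the group-membership constraint that Lie group learning methods exploit instead of treating samples as unstructured vectors.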

Open Access
Unsupervised Nonlinear Adaptive Manifold Learning for Global and Local Information
Tsinghua Science and Technology 2021, 26 (2): 163-171
Published: 24 July 2020

In this paper, we propose an Unsupervised Nonlinear Adaptive Manifold Learning method (UNAML) that considers both global and local information. The approach uses unlabeled training samples to learn nonlinear manifold features, preserving global pairwise distances while maintaining local topological structure; the objective minimizes both global pairwise-distance errors and local structural errors. To make UNAML more efficient and able to extract manifold features from new, out-of-sample data, we add a feature-approximation error term that is used to learn a linear extractor. In addition, we use adaptive neighbor selection to compute the local structural errors, and we optimize the original algorithm with the kernel matrix method. Experiments on real face and object data sets show that our algorithm is more effective than other feature extraction methods.
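The global-plus-local objective described in the abstract can be illustrated with a toy gradient-descent embedding that weights a global pairwise-distance (stress) term against a local k-nearest-neighbor term. This is a minimal sketch of the general idea, assuming fixed (not adaptive) neighbors and omitting the linear extractor and kernel steps; the function name and parameters are hypothetical, not the authors' UNAML algorithm:

```python
import numpy as np

def global_local_embed(X, n_components=2, n_neighbors=5,
                       alpha=0.5, n_iter=200, lr=0.01, seed=0):
    """Toy embedding that trades off a global pairwise-distance error
    against a local neighborhood-structure error (alpha balances the two)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Global target: pairwise distances in the input space.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Local target: symmetric k-nearest-neighbor adjacency (fixed here;
    # the paper uses adaptive neighbor selection instead).
    knn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, knn[i]] = 1.0
    W = np.maximum(W, W.T)
    Y = rng.standard_normal((n, n_components)) * 0.01
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]          # (n, n, d) displacements
        d_emb = np.linalg.norm(diff, axis=-1) + 1e-9  # embedded distances
        # Gradient of the global stress sum_ij (d_emb - D)^2
        g_global = ((d_emb - D) / d_emb)[:, :, None] * diff
        # Gradient of the local term sum_ij W_ij * d_emb^2 (pulls neighbors together)
        g_local = W[:, :, None] * diff
        grad = 4.0 * ((1.0 - alpha) * g_global + alpha * g_local).sum(axis=1)
        Y -= lr * grad / n
    return Y
```

Setting `alpha` near 0 recovers a purely global (MDS-like) embedding, while `alpha` near 1 emphasizes keeping declared neighbors close, mirroring the global/local trade-off the method is built around.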
