Open Access

LSTM-KAN: Revolutionizing Indoor Visible Light Localization with Robust Sequence Learning

Faculty of Data Science, City University of Macao, Macao 999078, China
Key Laboratory of Computing Power Network and Information Security of Ministry of Education, Shandong Computer Science Center, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
Computer Center, Peking University, Beijing 100871, China
School of Computer Science, University of Technology Sydney, Ultimo 2007, Australia
College of Engineering and Science, Victoria University, Melbourne 3011, Australia

Abstract

Indoor visible light positioning is attracting growing attention because it resists electromagnetic interference, offers abundant spectrum resources, and is energy efficient. Recent research has focused on deep learning to improve positioning accuracy, but challenges remain in training cost, model efficiency, and performance under low Signal-to-Noise Ratio (SNR) conditions. To address these issues, we propose a novel Long Short-Term Memory network-Convolutional Residual Network (LSTM-CRN) algorithm together with a dataset construction method based on pilot extraction. Additionally, we introduce the Kolmogorov-Arnold Network (KAN) to improve accuracy under low SNR conditions. Extensive simulation results show that a network model trained on the dataset constructed by the pilot extraction method achieves higher localization efficiency and accuracy than a model trained directly on the raw received data. The LSTM-KAN algorithm, trained on the dataset constructed by our method, achieves a verified average localization accuracy of 3.8 cm (SNR = 30 dB). It also outperforms existing mainstream methods in localization accuracy, efficiency, and real-time performance across different SNR conditions, establishing state-of-the-art results for the system described in this article.
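The abstract combines an LSTM with a Kolmogorov-Arnold Network (KAN), in which each connection carries its own learnable univariate function rather than a fixed activation. The following is a minimal, illustrative sketch of that idea in plain Python; it is not the authors' implementation, the class names and the Gaussian-plus-SiLU basis are hypothetical choices made only to show the structure of a KAN layer:

```python
import math

class KANEdge:
    """One KAN edge: a learnable univariate function phi(x),
    modeled here as a SiLU base term plus a weighted sum of
    Gaussian radial basis functions (an illustrative choice)."""
    def __init__(self, centers=(-1.0, 0.0, 1.0), width=1.0):
        self.centers = centers
        self.width = width
        self.coeffs = [0.0] * len(centers)  # RBF weights (trainable)
        self.base = 1.0                     # SiLU weight (trainable)

    def __call__(self, x):
        silu = x / (1.0 + math.exp(-x))  # smooth base function
        rbfs = sum(c * math.exp(-((x - m) / self.width) ** 2)
                   for c, m in zip(self.coeffs, self.centers))
        return self.base * silu + rbfs

class KANLayer:
    """Maps n_in inputs to n_out outputs; output j is the sum of
    phi_ij(x_i) over i, so every input-output connection owns its
    own learnable 1-D function."""
    def __init__(self, n_in, n_out):
        self.edges = [[KANEdge() for _ in range(n_in)]
                      for _ in range(n_out)]

    def __call__(self, xs):
        return [sum(edge(x) for edge, x in zip(row, xs))
                for row in self.edges]

# Forward pass through one KAN layer (weights at their initial values)
layer = KANLayer(n_in=3, n_out=2)
out = layer([0.5, -0.2, 1.0])
print(len(out))  # 2
```

In the paper's pipeline, a layer of this kind would follow the LSTM's sequence features; training the per-edge functions is what distinguishes a KAN from a standard fully connected head.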

Big Data Mining and Analytics
Pages 1245-1260

Cite this article:
Yu Y, Zhao D, Chen J, et al. LSTM-KAN: Revolutionizing Indoor Visible Light Localization with Robust Sequence Learning. Big Data Mining and Analytics, 2025, 8(6): 1245-1260. https://doi.org/10.26599/BDMA.2025.9020021

1462 Views · 85 Downloads · Citations: 3 (Crossref), 3 (Web of Science), 3 (Scopus), 0 (CSCD)

Received: 23 September 2024
Revised: 19 November 2024
Accepted: 10 February 2025
Published: 19 September 2025
© The author(s) 2025.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).