Research Article | Open Access | Just Accepted

Application of an Improved RepVGG Algorithm to DAS-Based Train Operation Detection

Zhongsheng Jiang1,2, Zhouchang Hu1,2, Shuang Yang1,2, Xingrong Jiang2,3, Yang Zhang1,2, Yuquan Tang1,2 (corresponding author), Saifullah Jamali2,4, Zhirong Zhang1,2,3, Nek Muhammad Shaikh4, Baddar ul ddin Jamali5

1 University of Science and Technology of China, Hefei 230026, China

2 Anhui Provincial Key Laboratory of Photonic Devices and Materials, Anhui Institute of Optics and Fine Mechanics, HFIPS, Chinese Academy of Sciences, Hefei 230031, China

3 School of Advanced Manufacturing Engineering, Hefei University, Hefei 230601, Anhui, China

4 Institute of Physics, University of Sindh, Jamshoro 71000, Pakistan

5 Dr. M.A. Kazi Institute of Chemistry, University of Sindh, Jamshoro 71000, Pakistan


Abstract

Train operation monitoring based on distributed acoustic sensing (DAS) has attracted increasing attention for its potential to ensure railway safety and support intelligent transportation systems. However, accurately recognizing train vibration signals and distinguishing them from noise remains a significant challenge. This study proposes a classification-model-based framework to identify both the train's direction of travel and interfering noise signals. The approach transforms orthogonally demodulated vibration power spectrum features into image representations, where conventional image-processing techniques are first applied to extract candidate vibration regions. A deep-learning-based classifier is then employed to perform the final recognition. To improve discriminative feature modeling, we adopt the RepVGG architecture with structural re-parameterization and further integrate a lightweight Normalization-based Attention Module (NAM). This design enhances the model's ability to capture direction-of-travel cues while suppressing redundant information, thereby reducing false positives and false negatives. The dataset is randomly divided into training and testing subsets with an 80/20 ratio. Four models, namely VGG16, ResNet18, RepVGG, and the proposed RepVGG-NAM, are trained and evaluated on this dataset. Experimental results demonstrate that RepVGG-NAM achieves superior classification performance, attaining an accuracy of 97.45% on the test set, which corresponds to absolute improvements of 0.2%, 0.3%, and 0.2% over VGG16, ResNet18, and RepVGG, respectively. These consistent improvements verify the effectiveness of NAM in dynamically calibrating channel weights to highlight salient features. Overall, the proposed RepVGG-NAM framework provides a high-accuracy and computationally efficient solution for DAS-based train vibration signal classification, contributing to the advancement of intelligent railway monitoring systems.
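The abstract's channel-attention idea can be illustrated with a minimal sketch. The published NAM formulation reweights channels by using batch-normalization scale factors as a salience measure (weights W_i = γ_i / Σγ, applied through a sigmoid gate); the NumPy implementation below is a hedged illustration of that general idea, not the authors' code, and the function and argument names (`nam_channel_attention`, `gamma`, `beta`) are assumptions for this sketch.

```python
import numpy as np

def nam_channel_attention(x, gamma, beta, eps=1e-5):
    """Sketch of NAM-style channel attention.

    x:     feature map of shape (N, C, H, W)
    gamma: per-channel batch-norm scale factors, shape (C,)
    beta:  per-channel batch-norm shift factors, shape (C,)
    """
    # Batch-normalize each channel over batch and spatial dimensions
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    g = gamma.reshape(1, -1, 1, 1)
    b = beta.reshape(1, -1, 1, 1)
    x_bn = g * (x - mean) / np.sqrt(var + eps) + b

    # Channel weights: normalized BN scale factors indicate channel salience
    w = (gamma / gamma.sum()).reshape(1, -1, 1, 1)

    # Sigmoid gate, applied multiplicatively to the input features
    attn = 1.0 / (1.0 + np.exp(-(w * x_bn)))
    return x * attn
```

Because the gate lies in (0, 1), the module can only attenuate channels, never amplify them; channels whose BN scale factor γ is small contribute weak gates and are effectively suppressed, which matches the abstract's description of NAM suppressing redundant information.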


Cite this article:
Jiang Z, Hu Z, Yang S, et al. Application of an Improved RepVGG Algorithm to DAS-Based Train Operation Detection. Lifeline Emergency and Safety, 2026, https://doi.org/10.26599/LLES.2025.9660014


Received: 29 September 2025
Revised: 18 November 2025
Accepted: 28 November 2025
Available online: 27 January 2026

© The author(s) 2026

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).