Volume 6, Issue 1




Ultra-Short Wave Communication Squelch Algorithm Based on Deep Neural Network

Yuanxin Xiang1, Yi Lv2, Wenqiang Lei1, Jiancheng Lv1
1 College of Computer Science, Sichuan University, Chengdu 610000, China
2 Sichuan Research Institute, Shanghai Jiao Tong University, Chengdu 610000, China

Abstract

Squelch for ultra-short wave communication under non-stationary noise and low Signal-to-Noise Ratio (SNR) in complex electromagnetic environments remains challenging. To alleviate this problem, we propose a squelch algorithm for ultra-short wave communication that combines a deep neural network with the traditional energy decision method. The proposed algorithm first predicts the speech existence probability using a three-layer Gated Recurrent Unit (GRU) with the speech band spectrum as the input feature. It then makes the final squelch decision by combining the signal energy with the predicted speech existence probability. Multiple simulations and experiments verify the robustness and effectiveness of the proposed algorithm. We evaluate the algorithm in three situations: typical Amplitude Modulation (AM) and Frequency Modulation (FM) ultra-short wave communication under different SNR conditions, non-stationary burst-like noise environments, and real signals received by an ultra-short wave radio. The experimental results show that the proposed algorithm outperforms traditional squelch methods in all simulations and experiments. In particular, its false alarm rate under non-stationary burst-like noise is significantly lower than that of traditional squelch methods.
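The two-stage decision described in the abstract, a model-predicted speech existence probability fused with a classical frame-energy test, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the band-spectrum averaging, the thresholds, and the fusion rule are assumptions, and the trained three-layer GRU is replaced by a placeholder function since the model itself is not given here.

```python
import numpy as np

def band_spectrum(frame, n_bands=22, n_fft=512):
    """Average the magnitude spectrum into coarse bands (assumed feature)."""
    mag = np.abs(np.fft.rfft(frame, n_fft))
    edges = np.linspace(0, len(mag), n_bands + 1, dtype=int)
    return np.array([mag[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

def speech_probability(features):
    """Placeholder for the trained three-layer GRU; returns a value in [0, 1].

    A real system would run the band features through the network frame by
    frame; this stand-in just squashes the mean band energy to a probability.
    """
    return 1.0 / (1.0 + np.exp(-(features.mean() - 0.1)))

def squelch_open(frame, energy_threshold=0.01, prob_threshold=0.5):
    """Open the squelch only if energy and speech probability both agree."""
    energy = float(np.mean(frame ** 2))
    if energy <= energy_threshold:       # classical energy decision
        return False
    prob = speech_probability(band_spectrum(frame))
    return prob > prob_threshold         # deep-network decision

rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(512)                       # weak noise frame
tone = 0.5 * np.sin(2 * np.pi * 5 * np.arange(512) / 512)     # strong signal frame
print(squelch_open(noise), squelch_open(tone))                # prints: False True
```

Gating on energy first mirrors the traditional method's role in the paper: cheap frames of obvious silence never reach the network, and the probability term suppresses false alarms on energetic but speech-free bursts.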

Keywords:

squelch, Gated Recurrent Unit (GRU), ultra-short wave communication
Received: 13 July 2022; Accepted: 28 July 2022; Published: 24 November 2022; Issue date: March 2023

Copyright

© The author(s) 2023.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
