WTASR: Wavelet Transformer for Automatic Speech Recognition of Indian Languages

Tripti Choudhary1, Vishal Goyal1, Atul Bansal2
1 Department of Electronics and Communication, GLA University, Mathura 281406, India
2 Chandigarh University, Mohali 140413, India

Abstract

Automatic speech recognition (ASR) systems translate speech signals into their corresponding text representation. This translation is used in a variety of applications such as voice-enabled commands, assistive devices, and bots. However, efficient ASR technology for Indian languages is still lacking. In this paper, a wavelet transformer for automatic speech recognition (WTASR) of Indian languages is proposed. Speech signals contain both high- and low-frequency components whose distribution varies over time with the speaker, so wavelets enable the network to analyze the signal at multiple scales. The wavelet decomposition of the signal is fed into the network to generate the text. The transformer network comprises an encoder-decoder system for speech translation. The model is trained on an Indian language dataset to translate speech into the corresponding text. The proposed method is compared with other state-of-the-art methods. The results show that the proposed WTASR achieves a low word error rate and can be used for effective speech recognition of Indian languages.
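
The abstract describes the pipeline only at a high level. Below is a minimal, illustrative sketch of the idea in Python, assuming PyWavelets for the multiscale wavelet decomposition and PyTorch for the transformer encoder-decoder. The wavelet family ("db4"), decomposition level, frame size, model dimensions, and vocabulary are assumptions made for illustration, not the authors' published configuration.

# Sketch of the WTASR idea: decompose the speech signal with a discrete
# wavelet transform, frame the multiscale coefficients, and feed them to a
# transformer encoder-decoder that emits text tokens. All hyperparameters
# below are illustrative assumptions.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(signal: np.ndarray, wavelet: str = "db4",
                     level: int = 3, frame: int = 64) -> torch.Tensor:
    """Multilevel DWT of a 1-D speech signal, framed into a (time, feat) matrix."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    flat = np.concatenate(coeffs)                         # stack all scales
    usable = len(flat) - len(flat) % frame                # trim to a whole number of frames
    return torch.tensor(flat[:usable], dtype=torch.float32).reshape(-1, frame)

class WaveletTransformerASR(nn.Module):
    """Encoder-decoder transformer over wavelet feature frames."""
    def __init__(self, vocab_size: int, frame: int = 64, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(frame, d_model)             # wavelet frame -> model dim
        self.embed = nn.Embedding(vocab_size, d_model)    # target token embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)         # logits over output tokens

    def forward(self, feats: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        src = self.proj(feats)                            # (batch, time, d_model)
        tgt = self.embed(tokens)                          # (batch, tgt_len, d_model)
        mask = self.transformer.generate_square_subsequent_mask(tokens.size(1))
        dec = self.transformer(src, tgt, tgt_mask=mask)   # causal decoding
        return self.out(dec)                              # (batch, tgt_len, vocab)

# Toy usage: 1 s of fake 16 kHz audio, decoded against a 40-symbol vocabulary.
audio = np.random.randn(16000)
feats = wavelet_features(audio).unsqueeze(0)              # (1, time, frame)
tokens = torch.randint(0, 40, (1, 12))                    # (1, tgt_len)
logits = WaveletTransformerASR(vocab_size=40)(feats, tokens)
print(logits.shape)                                       # torch.Size([1, 12, 40])

In this sketch the wavelet coefficients from all scales are flattened and framed into a feature matrix, and a learned linear projection maps each frame into the transformer's model dimension, mirroring how spectrogram frames are fed to conventional speech transformers.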

Keywords:

transformer, wavelet, automatic speech recognition (ASR), Indian language
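
The abstract reports performance as word error rate (WER). For reference, a minimal sketch of the standard WER computation (word-level Levenshtein edit distance normalized by reference length) follows; the sample sentence pair is made up for illustration.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,                  # substitution (or match)
                           dp[i - 1][j] + 1,     # deletion
                           dp[i][j - 1] + 1)     # insertion
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("speech recognition for indian languages",
                      "speech recognitions for languages"))  # 0.4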

Publication history

Received: 31 May 2022
Revised: 06 June 2022
Accepted: 21 June 2022
Published: 24 November 2022
Issue date: March 2023

Copyright

© The author(s) 2023.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
