Attention-Based CNN Fusion Model for Emotion Recognition During Walking Using Discrete Wavelet Transform on EEG and Inertial Signals

Yan Zhao1, Ming Guo2 (✉), Xiangyong Chen2, Jianqiang Sun2, Jianlong Qiu2
1 School of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China, and also with the School of Automation and Electrical Engineering, Linyi University, Linyi 276000, China
2 School of Automation and Electrical Engineering, Linyi University, Linyi 276000, China

Abstract

Walking, as a unique biometric, conveys important information for emotion recognition: individuals in different emotional states exhibit distinct walking patterns. Motivated by this, this paper proposes a novel approach to recognizing emotion during walking from electroencephalogram (EEG) and inertial signals. Accurate recognition is achieved by training in an end-to-end deep learning fashion and taking multi-modal fusion into account. Subjects wear a virtual reality head-mounted display (VR-HMD) to become immersed in strong emotions while walking; the VR environment offers a highly realistic, immersive experience and therefore plays an important role in evoking and modulating emotions. The multi-modal signals acquired from the EEG and inertial sensors are separately represented as virtual emotion images by the discrete wavelet transform (DWT), and these images serve as input to an attention-based convolutional neural network (CNN) fusion model. The designed network structure is simple and lightweight, and it integrates a channel attention mechanism to extract and enhance features. To further improve recognition performance, the proposed decision fusion algorithm combines the Critic method and a majority voting strategy to determine the weights that affect the final decision. An investigation of the effect of different mother wavelet types and decomposition levels on model performance indicates that the 2.2-order reverse biorthogonal (rbio2.2) wavelet with two-level decomposition yields the best recognition performance. Comparative experiments show that the proposed method outperforms existing state-of-the-art works, achieving an accuracy of 98.73%.
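The abstract describes the pipeline only at a high level. The following minimal Python sketch (not the authors' code) illustrates the general idea under stated assumptions: a 1-D signal segment is decomposed with the rbio2.2 wavelet at two levels and its coefficients are tiled into a 2-D "virtual emotion image"; a squeeze-and-excitation style block stands in for the channel attention mechanism; and a small Critic weighting function shows one common way per-branch decision weights can be derived. Names such as make_emotion_image, ChannelAttention, and critic_weights are hypothetical, and the image size, network width, and fusion details are illustrative rather than taken from the paper.

import numpy as np
import pywt
import torch
import torch.nn as nn

def make_emotion_image(segment, wavelet="rbio2.2", level=2, size=32):
    # Hypothetical helper: two-level DWT of a 1-D segment, coefficients
    # concatenated and tiled into a fixed-size 2-D array for a CNN.
    coeffs = pywt.wavedec(segment, wavelet, level=level)   # [cA2, cD2, cD1]
    flat = np.concatenate(coeffs)
    flat = np.resize(flat, size * size)                    # pad/truncate to fit
    return flat.reshape(size, size).astype(np.float32)

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention (an assumed stand-in
    # for the paper's channel attention mechanism).
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                        # re-weight channels

def critic_weights(scores):
    # Critic objective weighting over classifier branches: each column holds
    # one branch's confidence scores; weight = contrast (std) * conflict.
    std = scores.std(axis=0)
    corr = np.corrcoef(scores, rowvar=False)
    conflict = (1.0 - corr).sum(axis=0)
    info = std * conflict
    return info / info.sum()

# Toy usage with synthetic data
segment = np.random.randn(256)                              # e.g., a 2 s EEG segment at 128 Hz
image = torch.from_numpy(make_emotion_image(segment))[None, None]   # (1, 1, 32, 32)
feat = nn.Conv2d(1, 16, 3, padding=1)(image)                # toy convolutional stage
out = ChannelAttention(16)(feat)                            # (1, 16, 32, 32)
scores = np.random.rand(40, 2)                              # per-branch confidences (EEG, inertial)
print(out.shape, critic_weights(scores))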

Keywords: emotion recognition, virtual reality, walking, attention mechanism, discrete wavelet transform, multi-modal fusion


Publication history

Received: 03 November 2022
Revised: 09 June 2023
Accepted: 25 June 2023
Published: 25 December 2023
Issue date: March 2024

Copyright

© The author(s) 2023.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61903170, 62173175, and 61877033), the Natural Science Foundation of Shandong Province (Nos. ZR2019BF045 and ZR2019MF021), and the Key Research and Development Project of Shandong Province of China (No. 2019GGX101003).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
