Electroencephalogram (EEG)-based emotion recognition is a key intelligent technique for health assessment and clinical intervention. However, EEG signals exhibit complex, complementary, and non-linear correlations across the spatial, temporal, and frequency domains, posing significant challenges for effective feature modeling and downstream emotion recognition. To address these challenges, an Emotional Spatio-Temporal-Spectral Cross-attention Network (ESTSCA-Net) is proposed. The model adopts a dual-branch feature fusion architecture: in the spatio-temporal branch, a multi-scale 2D convolutional network sequentially processes spatio-temporal information and adaptively captures the contextual dependencies of neural activities; in the spatio-spectral branch, a 3D bottleneck residual network with channel-wise and cross-frequency attention mechanisms selectively encodes critical spatio-spectral neural oscillations. Furthermore, a bidirectional multi-head cross-attention interaction strategy is introduced to achieve deep fusion of spatio-temporal-spectral features, yielding an effective emotion representation for classification. Experimental results on the public DEAP and MEEG datasets demonstrate that ESTSCA-Net comprehensively extracts spatio-temporal-spectral EEG features across different emotional states and consistently outperforms state-of-the-art baseline models on both the arousal and valence dimensions.
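The dual-branch design and the bidirectional cross-attention fusion can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module name DualBranchCrossAttentionFusion, the layer sizes, kernel scales, and tensor shapes are illustrative assumptions, and the channel-wise and cross-frequency attention blocks of the spatio-spectral branch are omitted for brevity.

```python
# Minimal sketch (assumed configuration, not the paper's released code) of a
# dual-branch network fused by bidirectional multi-head cross-attention.
import torch
import torch.nn as nn

class DualBranchCrossAttentionFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        # Spatio-temporal branch: multi-scale 2D convolutions over (channels, time)
        self.temporal_branch = nn.ModuleList([
            nn.Conv2d(1, d_model // 2, kernel_size=(1, k), padding=(0, k // 2))
            for k in (3, 7)
        ])
        # Spatio-spectral branch: simplified 3D bottleneck over (bands, H, W)
        self.spectral_branch = nn.Sequential(
            nn.Conv3d(1, d_model, kernel_size=1),
            nn.Conv3d(d_model, d_model, kernel_size=3, padding=1),
            nn.Conv3d(d_model, d_model, kernel_size=1),
            nn.ReLU(),
        )
        # Bidirectional multi-head cross-attention between the two branches
        self.attn_t2s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_s2t = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, x_time, x_spec):
        # x_time: (B, 1, n_channels, n_samples); x_spec: (B, 1, n_bands, H, W)
        t = torch.cat([conv(x_time) for conv in self.temporal_branch], dim=1)
        t = t.flatten(2).transpose(1, 2)      # (B, temporal tokens, d_model)
        s = self.spectral_branch(x_spec)
        s = s.flatten(2).transpose(1, 2)      # (B, spectral tokens, d_model)
        t_fused, _ = self.attn_t2s(t, s, s)   # temporal queries attend to spectral keys
        s_fused, _ = self.attn_s2t(s, t, t)   # spectral queries attend to temporal keys
        feat = torch.cat([t_fused.mean(1), s_fused.mean(1)], dim=-1)
        return self.classifier(feat)
```

The key point of the fusion step is the symmetric pair of attention calls: each branch's tokens act once as queries and once as keys/values, so information flows in both directions before the fused features are pooled for classification.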
Multipath signal recognition is crucial for the BeiDou Navigation Satellite System (BDS) to provide high-precision absolute positioning services. However, most existing approaches to this problem rely on supervised machine learning (ML) methods, and limitations in signal labeling make it difficult to move to unsupervised multipath signal recognition. Inspired by the powerful unsupervised feature extraction of autoencoders, we propose a new deep learning (DL) model for BDS signal recognition that places a long short-term memory (LSTM) module in series with a convolutional sparse autoencoder to create a new autoencoder structure. First, we use the LSTM module to capture the temporal correlations in long-duration BeiDou satellite time-series signals by mining the temporal change patterns in the series. Second, we develop a convolutional sparse autoencoder that learns a compressed representation of the input data, enabling dimensionality-reduced, unsupervised feature extraction from long-duration BeiDou satellite time-series signals. Finally, we add an
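A minimal sketch of the described series arrangement, assuming a PyTorch implementation, is given below. The class name LSTMConvSparseAE, the layer widths, and the L1 sparsity weight are illustrative placeholders rather than the paper's actual configuration.

```python
# Minimal sketch (assumed configuration, not the paper's code) of an LSTM
# placed in series with a convolutional sparse autoencoder for unsupervised
# feature extraction from long time-series signals.
import torch
import torch.nn as nn

class LSTMConvSparseAE(nn.Module):
    def __init__(self, n_features=1, hidden=32, latent=16):
        super().__init__()
        # LSTM first: mine temporal change patterns in the long-duration series
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Convolutional encoder: compress the LSTM outputs into a low-dimensional code
        self.encoder = nn.Sequential(
            nn.Conv1d(hidden, latent, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Convolutional decoder: reconstruct the LSTM outputs from the code
        self.decoder = nn.ConvTranspose1d(
            latent, hidden, kernel_size=5, stride=2, padding=2, output_padding=1
        )

    def forward(self, x):
        # x: (B, T, n_features)
        h, _ = self.lstm(x)        # (B, T, hidden): temporal correlations
        h = h.transpose(1, 2)      # (B, hidden, T)
        z = self.encoder(h)        # (B, latent, T/2): compressed representation
        recon = self.decoder(z)    # (B, hidden, T): reconstruction target is h
        return recon, h, z

def sparse_ae_loss(recon, target, code, l1_weight=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse codes,
    # so training needs no signal labels.
    return nn.functional.mse_loss(recon, target) + l1_weight * code.abs().mean()
```

In this arrangement the latent code z serves as the unsupervised feature representation of each long-duration signal and can be passed to a downstream recognizer.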