The fusion technique is key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We observe that it is not necessary to use all of one modality's information to reinforce the other during cross-modal interaction: the features that reinforce one modality may be only a subset of the other. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to suppress redundant features, one modality performs intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with the other modality. To better capture the complementary information between the modalities, we obtain a fused weight vector by splicing (concatenating) the reinforced representations and use this weight vector to reinforce the features of each modality. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement over other methods and achieves state-of-the-art performance. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
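To make the fusion scheme concrete, below is a minimal PyTorch sketch of the idea described in the abstract: self-attention performs intra-modal feature selection, the selected features drive the cross-modal interaction, and the spliced streams yield a weight vector that gates the fused representation. The class name AdaptiveCrossModalFusion, the audio/visual pairing, the mean pooling, and the sigmoid gate are illustrative assumptions, not the authors' exact design; the reference implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class AdaptiveCrossModalFusion(nn.Module):
    """Hedged sketch of the adaptive cross-modal fusion idea.

    One modality's features are first selected via intra-modal
    self-attention; the other modality is then reinforced by attending
    to these *selected* features only. The two reinforced streams are
    spliced (concatenated) into a weight vector that gates the fused
    representation. Dimensions and the gating form are assumptions.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Intra-modal feature selection via self-attention (per modality).
        self.self_attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-modal attention: each modality queries the other's selected features.
        self.cross_attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Fused weight vector derived from the spliced streams.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.Sigmoid())

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio: (batch, seq_a, dim); visual: (batch, seq_v, dim)
        a_sel, _ = self.self_attn_a(audio, audio, audio)     # select within audio
        v_sel, _ = self.self_attn_v(visual, visual, visual)  # select within visual
        # Each modality is reinforced by the other's selected features only.
        a_reinf, _ = self.cross_attn_a(audio, v_sel, v_sel)
        v_reinf, _ = self.cross_attn_v(visual, a_sel, a_sel)
        # Splice the pooled streams and derive the fusion weight vector.
        fused = torch.cat([a_reinf.mean(dim=1), v_reinf.mean(dim=1)], dim=-1)
        weights = self.gate(fused)
        return weights * fused  # gated multimodal representation, (batch, 2*dim)

# Example usage with hypothetical 128-dim audio/visual token sequences:
# fusion = AdaptiveCrossModalFusion(dim=128)
# out = fusion(torch.randn(8, 50, 128), torch.randn(8, 30, 128))  # (8, 256)
```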
This study was supported by Beijing Key Laboratory of Behavior and Mental Health, Peking University.
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).