Sensitivity of N400 Effect During Speech Comprehension Under the Uni- and Bi-Modality Conditions

Yanfei Lin, Zhiwen Liu, and Xiaorong Gao
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China

Abstract

The N400 is an objective electrophysiological index of semantic processing in the brain. This study focuses on the sensitivity of the N400 effect during speech comprehension under uni- and bi-modal conditions. Varying the Signal-to-Noise Ratio (SNR) of the speech signal under the Audio-only (A), Visual-only (V, i.e., lip-reading), and Audio-Visual (AV) conditions, a semantic priming paradigm was used to evoke the N400 effect and to measure the speech recognition rate. Under the A and high-SNR AV conditions, the N400 amplitudes were larger in the central region; under the V and low-SNR AV conditions, the N400 amplitudes were larger in the left-frontal region. Across the A, AV, and V conditions, the N400 amplitudes in the frontal and central regions were consistent with the behavioral speech recognition rates. These results indicate that auditory cognition outperforms visual cognition at high SNR, whereas visual cognition outperforms auditory cognition at low SNR.

Keywords: audio-visual speech, auditory noise, audio-visual integration, Signal-to-Noise Ratio (SNR)
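The two quantitative steps in this paradigm, scaling noise to a target SNR and quantifying the N400 effect as the mean amplitude of a difference wave, can be sketched briefly. The Python below is purely illustrative: the paper does not publish code, and the function names, the 300-500 ms window, and the sampling parameters are assumptions rather than the authors' pipeline.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix speech with noise scaled so that 10*log10(P_speech/P_noise) = snr_db."""
    noise = noise[: len(speech)]                  # trim noise to the speech length
    p_speech = np.mean(speech ** 2)               # average speech power
    p_noise = np.mean(noise ** 2)                 # average noise power
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

def n400_effect(erp_incongruent: np.ndarray, erp_congruent: np.ndarray,
                fs: float, t0: float = 0.3, t1: float = 0.5) -> float:
    """Mean amplitude of the incongruent-minus-congruent difference wave.

    Both inputs are averaged ERPs from one electrode, time-locked to target
    onset; t0/t1 bound the N400 window in seconds (300-500 ms is a
    conventional choice, assumed here).
    """
    diff = erp_incongruent - erp_congruent        # semantic-priming difference wave
    i0, i1 = int(t0 * fs), int(t1 * fs)           # window bounds in samples
    return float(np.mean(diff[i0:i1]))
```

Scaling the noise rather than the speech keeps the target words at a constant level across SNR conditions, so the behavioral recognition rates remain directly comparable.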


Publication history

Received: 07 August 2020
Revised: 26 January 2021
Accepted: 28 January 2021
Published: 17 August 2021
Issue date: February 2022

Copyright

© The author(s) 2022

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61601028 and 61431007), the Key R&D Program of Guangdong Province of China (No. 2018B030339001), and the National Key R&D Program of China (No. 2017YFB1002505).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
