Spatiotemporal contribution to semantic decoding before speech onset: Intracranial EEG study

Poster No:

1014 

Submission Type:

Abstract Submission 

Authors:

Ye Jin Park1, Jii Kwon1, Chun Kee Chung1

Institutions:

1Seoul National University, Seoul, Seoul

First Author:

Ye Jin Park  
Seoul National University
Seoul, Seoul

Co-Author(s):

Jii Kwon  
Seoul National University
Seoul, Seoul
Chun Kee Chung  
Seoul National University
Seoul, Seoul

Introduction:

Speech processing involves auditory, semantic, and articulatory dimensions. Recent advances in Brain-Computer Interface (BCI) systems have largely focused on auditory and articulatory information. However, this approach faces challenges when applied to individuals who lack articulatory capabilities. To address this gap, BCI systems that rely on semantic processing are needed. Since semantic processing unfolds over time across different brain areas, our study aims to elucidate these spatiotemporal contributions using intracranial EEG.

Methods:

Four epilepsy patients with intracranial electrodes implanted in speech-relevant areas participated in this study. Subjects performed a Korean word-reading task while intracranial EEG was recorded. The spoken words were categorized into two semantic groups: body vs. non-body parts, or subject vs. predicate. Preprocessing included detrending and applying a Common Average Reference (CAR), followed by bandpass filtering in two frequency ranges: 70-110 Hz (HG1) and 130-170 Hz (HG2). First, we selected features with significant differences between the two categories. We then employed several classification algorithms, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Random Forest classifiers, to evaluate the decoding of semantic processing in successive 150 ms epochs, from word presentation to speech onset.
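
For illustration, a minimal Python sketch of such a pipeline follows. The sampling rate, Hilbert-envelope features, p < 0.05 selection threshold, epoch length handling, and the use of LDA as the representative classifier are assumptions for this sketch; the abstract does not specify these details.

# Minimal sketch of the decoding pipeline (hypothetical parameters: the
# sampling rate, envelope features, p-value threshold, and CV scheme are
# assumptions, not the authors' exact settings).
import numpy as np
from scipy.signal import detrend, butter, sosfiltfilt, hilbert
from scipy.stats import ttest_ind
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 1000  # assumed sampling rate (Hz)

def preprocess(ieeg):
    """ieeg: (n_trials, n_channels, n_samples) raw intracranial EEG."""
    x = detrend(ieeg, axis=-1)                 # remove linear trends
    x = x - x.mean(axis=1, keepdims=True)      # common average reference (CAR)
    bands = []
    for lo, hi in [(70, 110), (130, 170)]:     # HG1 and HG2 bands
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        filtered = sosfiltfilt(sos, x, axis=-1)
        bands.append(np.abs(hilbert(filtered, axis=-1)))  # analytic amplitude
    return np.stack(bands, axis=1)  # (n_trials, 2, n_channels, n_samples)

def decode_epochs(feats, labels, win_ms=150):
    """Decode the two semantic categories in successive 150 ms epochs."""
    win = int(win_ms * FS / 1000)
    scores = []
    for start in range(0, feats.shape[-1] - win + 1, win):
        # mean high-gamma amplitude per band x channel within the epoch
        X = feats[..., start:start + win].mean(axis=-1).reshape(len(labels), -1)
        # keep features differing significantly between the two categories
        # (in practice this selection should be nested inside the CV folds)
        _, p = ttest_ind(X[labels == 0], X[labels == 1], axis=0)
        if (p < 0.05).any():
            X = X[:, p < 0.05]
        clf = LinearDiscriminantAnalysis()  # one of several classifiers tried
        scores.append(cross_val_score(clf, X, labels, cv=5).mean())
    return scores

# Toy run with random data standing in for recorded trials.
rng = np.random.default_rng(0)
ieeg = rng.standard_normal((40, 16, 600))  # 40 trials, 16 contacts, 600 ms
labels = np.repeat([0, 1], 20)             # e.g., body vs. non-body words
print(decode_epochs(preprocess(ieeg), labels))

Under this scheme, the sequence of per-epoch accuracies traces how decodable the semantic contrast is as the trial progresses from word presentation toward speech onset.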

Results:

There were notable differences in brain activity between the two semantic categories. Spatially, frontal areas, including the left inferior frontal gyrus (IFG), were activated first, followed by temporal areas, such as the left primary auditory cortex and the superior temporal gyrus (STG), as speech onset approached. The highest accuracy for distinguishing body from non-body parts was 88.9% (±9% SE), achieved in the left IFG 150-300 ms after word presentation. For subjects vs. predicates, the highest accuracy was 76.7% (±13% SE), achieved in the left pars triangularis, pars orbitalis, and primary auditory cortex 450-600 ms after word presentation.

Conclusions:

Semantic processing showed distinct temporal and spatial contributions. Our results are in line with previous evoked-response studies of the N400 component in semantic processing and with previously identified speech-related areas, including the STG and the IFG. By exploiting these spatiotemporal contributions, we could decode semantic processing before speech onset, potentially extending the current limits of speech BCIs.

Language:

Language Comprehension and Semantics 1
Speech Production

Modeling and Analysis Methods:

Classification and Predictive Modeling 2

Novel Imaging Acquisition Methods:

Imaging Methods Other

Keywords:

Cortex
Data analysis
Language
Machine Learning

1|2 indicates the priority used for review

References:

Bhaya-Grossman, I., & Chang, E. F. (2022). Speech computations of the human superior temporal gyrus. Annual Review of Psychology, 73(1), 79–102. https://doi.org/10.1146/annurev-psych-022321-035256
Brouwer, H., Crocker, M. W., Venhuizen, N. J., & Hoeks, J. C. (2016). A neurocomputational model of the N400 and the P600 in language processing. Cognitive Science, 41, 1318–1352. https://doi.org/10.1111/cogs.12461
Brown, C., & Hagoort, P. (1993). The processing nature of the N400: Evidence from masked priming. Journal of Cognitive Neuroscience, 5(1), 34–44. https://doi.org/10.1162/jocn.1993.5.1.34
Rabbani, Q., Milsap, G., & Crone, N. E. (2019). The potential for a speech brain–computer interface using chronic electrocorticography. Neurotherapeutics, 16(1), 144–165. https://doi.org/10.1007/s13311-018-00692-2