Hybrid EEG-fNIRS phoneme classification based on imagined and perceived speech


Abstract

INTRODUCTION: Individuals affected by severe motor impairments often have no means of communicating with others. To build an intuitive speech prosthesis, imagined speech brain-computer interface research began to prosper, with numerous studies attempting to classify imagined speech from brain signals. While unimodal neuroimaging techniques such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have been widely used, multimodal approaches combining two or more of them remain scarce.

METHODS: In this study, offline phoneme decoding was performed on hybrid EEG-fNIRS data. Twenty-two right-handed participants performed imagined and perceived speech trials encompassing four phonemes: /a/, /i/, /b/ and /k/. Features in the form of power spectral densities and mean hemoglobin concentration changes were extracted from the EEG and fNIRS data, respectively. Features were ranked by mutual information with the target vector, and the optimal number of features to include was determined through 10-fold cross-validation.

RESULTS: Hybrid classification yielded accuracy scores of 77.29% and 76.05% for imagined and perceived speech, respectively. In both conditions, hybrid and EEG-based classification performances did not differ significantly, while fNIRS-based phoneme discrimination produced lower accuracies.

DISCUSSION: This study represents an innovative phoneme decoding attempt based on multimodal EEG-fNIRS data, covering both imagined and perceived speech. Four-class imagined speech classification was primarily driven by EEG features, yet outperformed comparable previous studies.
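The feature-selection pipeline described in METHODS (rank features by mutual information with the class labels, then pick the feature count via 10-fold cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the classifier (a support-vector machine here), the data dimensions, and the step size over candidate feature counts are all assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC  # classifier choice is an assumption for illustration

# Synthetic stand-in for concatenated EEG (PSD) and fNIRS (mean HbO/HbR) features
rng = np.random.default_rng(0)
n_trials, n_features = 200, 40
y = rng.integers(0, 4, size=n_trials)        # four phoneme classes
X = rng.normal(size=(n_trials, n_features))
X[:, :5] += y[:, None] * 0.8                 # make the first 5 features informative

# Rank features by mutual information with the target vector
mi = mutual_info_classif(X, y, random_state=0)
order = np.argsort(mi)[::-1]

# Choose the number of top-ranked features via 10-fold cross-validation
best_k, best_acc = 1, 0.0
for k in range(1, n_features + 1, 5):
    acc = cross_val_score(SVC(), X[:, order[:k]], y, cv=10).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best number of features: {best_k}, CV accuracy: {best_acc:.3f}")
```

On real data, the same loop would be run separately per condition (imagined vs. perceived speech) and per feature set (EEG-only, fNIRS-only, hybrid) to produce the compared accuracy scores.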
