Predicting fixations and gaze location from EEG


Abstract

Brain signals carry cognitive information that can be relevant to downstream tasks, but what about eye gaze? Although gaze can be estimated with eye-trackers, it is often far more convenient in practice to do so without extra equipment. We consider the challenging tasks of fixation prediction and gaze estimation from electroencephalography (EEG) using deep learning models. We argue that there are three critical criteria when designing neural architectures for EEG: (1) the spatial and temporal dimensions of the data, (2) the local vs. global nature of the data processing, and (3) the overall structure and order with which steps (1) and (2) are orchestrated. We propose two model architectures, based on Transformers and LSTMs, with different variants in this large design space, and compare them with recent state-of-the-art (SOTA) approaches under two constraints: reduced EEG signal length and a reduced set of EEG channels. Our Transformer-based model outperforms the LSTM-only model, but it turns out to be more sensitive to short signal lengths and to fewer channels. Interestingly, our results are similar to or slightly better than SOTA, and the models are trained from scratch (i.e., without pre-training or fine-tuning). Our findings provide useful insights for advancing eye-from-EEG tasks.
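As a rough illustration of the gaze-estimation task described above (and not of the paper's Transformer or LSTM models), the sketch below maps simple per-channel EEG features to 2D gaze coordinates with closed-form ridge regression. All shapes, the feature choice, and the synthetic data are assumptions made for the example.

```python
import numpy as np

# Hypothetical shapes: trials x channels x time samples; the paper's
# actual dataset dimensions are not stated in the abstract.
n_trials, n_channels, n_samples = 200, 128, 500

rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
gaze = rng.uniform(0.0, 1.0, size=(n_trials, 2))  # (x, y) screen coordinates

# Trivial per-channel features (mean and std over time) in place of a
# learned deep representation.
feats = np.concatenate([eeg.mean(axis=2), eeg.std(axis=2)], axis=1)

# Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y
X = np.hstack([feats, np.ones((n_trials, 1))])  # append a bias column
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ gaze)

pred = X @ W                       # predicted (x, y) per trial
rmse = np.sqrt(np.mean((pred - gaze) ** 2))
print(f"train RMSE: {rmse:.3f}")
```

This baseline only fixes the input/output shapes of the task; replacing the hand-crafted features and linear map with spatial-temporal Transformer or LSTM blocks is where the paper's design criteria (1)-(3) come into play.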
