Abstract
Semantic neural decoding aims to identify, from recordings of brain activity, which semantic concept an individual is focusing on at a given moment. We investigated the feasibility of semantic neural decoding for a new type of brain-computer interface (BCI) that communicates semantic concepts directly, bypassing the character-by-character spelling used in current BCI systems. We provide data from our study on differentiating between two semantic categories (animals and tools) during a silent naming task and three intuitive sensory-based imagery tasks drawing on visual, auditory, and tactile perception. Participants were instructed to visualize an object (an animal or a tool) in their minds, to imagine the sounds the object produces, and to imagine the feeling of touching it. Simultaneous electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals were recorded from 12 participants. Additionally, EEG signals were recorded from 7 further participants in a follow-up experiment focusing solely on the auditory imagery task. These datasets can serve as a valuable resource for researchers investigating semantic neural decoding, brain-computer interfaces, and mental imagery.