Abstract
Brain-computer interfaces (BCIs) enable interaction with devices, enhancing the quality of life for individuals with disabilities and offering a more direct means of controlling smart devices. Auditory BCIs commonly rely on event-related potentials (ERPs), which require choices to be presented sequentially as auditory stimuli. The extended stimulus presentation times of such methods limit the achievable information transfer rate (ITR) relative to visual BCIs. Here, we introduce an auditory BCI approach in which the selective representation of attended speech in the listener's brain enables decoding of a single target sound source from the background. Because our method delivers all options simultaneously, presentation durations are reduced by a factor of 2.5 compared to previous auditory BCI paradigms. This approach yields an average ITR exceeding 17 bits/min, with the best subject surpassing 33 bits/min. By outperforming current state-of-the-art auditory BCI paradigms, our work represents a significant step toward practical auditory BCI technologies.
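ITR figures such as those above are conventionally computed with the Wolpaw formula, which converts classification accuracy and the number of choices into bits per selection. A minimal sketch follows; the function name and the example parameter values are illustrative, not taken from this study:

```python
import math

def wolpaw_itr(n_choices, accuracy, selections_per_min):
    """Wolpaw ITR: bits per selection multiplied by selection rate (bits/min)."""
    if accuracy >= 1.0:
        # Perfect accuracy: each selection carries the full log2(N) bits.
        bits = math.log2(n_choices)
    else:
        bits = (math.log2(n_choices)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_choices - 1)))
    return bits * selections_per_min

# Hypothetical example: a 4-class task at 90% accuracy, 10 selections/min.
print(round(wolpaw_itr(4, 0.9, 10), 2))  # -> 13.73
```

The formula assumes equiprobable classes and errors uniformly distributed over the non-target choices; under those assumptions, shortening stimulus presentation raises `selections_per_min` and thus the ITR directly.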