Abstract
This study investigated the effect of temporal misalignment between acoustic and simulated electric signals on the ability of normal-hearing listeners to process fast speech. The within-ear integration of acoustic and electric hearing was simulated, mimicking the electric-acoustic stimulation (EAS) condition, in which cochlear implant users receive acoustic input at low frequencies and electric stimulation at high frequencies in the same ear. Time-compression thresholds (TCTs), defined as the amount of time compression at which 50% of time-compressed sentences were recognized correctly, were measured adaptively in quiet and in speech-spectrum noise (SSN) as well as amplitude-modulated noise (AMN) at signal-to-noise ratios (SNRs) of 4 dB and 10 dB. Temporal misalignment was introduced by delaying either the acoustic or the simulated electric signal, which were generated using a low-pass filter (cutoff frequency: 600 Hz) and a five-channel noise vocoder, respectively. Listeners showed significant benefits in TCTs from the addition of low-frequency acoustic signals, regardless of temporal misalignment. Within the range of 0 to ±30 ms, temporal misalignment decreased listeners' TCTs, and its effect interacted with SNR such that the adverse impact of misalignment was more pronounced at higher SNRs. When misalignment was limited to within ±7 ms, which is closer to the clinically relevant range, its effect disappeared. In conclusion, while temporal misalignment negatively affects the ability of listeners with simulated EAS hearing to process fast Mandarin sentences, its effect is negligible when misalignment is close to the clinically relevant range. Future research should validate these findings in real EAS users.