Abstract
INTRODUCTION: Cochlear implant (CI) users with single-sided deafness can match the sound quality of speech presented to the CI ear against speech presented to the normal-hearing ear. Previous work with this population generated acoustic approximations of CI sound quality for speech, achieving high similarity ratings through interactive manipulation of sound parameters such as filtering, pitch shifting, and spectral smearing. The present study aimed to extend this approach to music.

METHODS: A digital audio workstation (DAW) methodology was developed for generating sound-quality matches to both speech and music in 11 adults with unilateral MED-EL CIs and contralateral acoustic hearing. Participants compared the sound quality of acoustically manipulated signals presented to the better-hearing ear with that of unprocessed signals presented to the CI ear. The similarity of the two signals was rated on a scale of 1 to 10, with 10 indicating a perfect match.

RESULTS: On average, speech matches achieved higher similarity ratings (9.3) than music matches (6.7). Speech matches were typically achieved using bandpass filtering, pitch shifts, and distortion. Similarity ratings for speech obtained with the DAW (9.3) did not differ from those (8.7) obtained with the custom, speech-specific software of previous studies. Music matches frequently required additional manipulations, including frequency equalization and modulation. The specific manipulations required varied across participants, and several individuals could not complete music matches despite extensive attempts.

DISCUSSION: These findings suggest that music introduces perceptual dimensions not fully addressed by speech-based matching procedures. The DAW methodology provides an accessible framework for investigating CI sound quality and may guide future efforts to characterize and optimize sound quality for signals beyond speech.
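Two of the manipulations named above, bandpass filtering and pitch shifting, can be illustrated in a few lines. The sketch below is not the study's DAW processing chain; it is a minimal stand-in, assuming NumPy, that uses a brick-wall FFT filter and resampling-based pitch shifting to show what these operations do to a signal.

```python
import numpy as np

def bandpass(signal, sr, lo, hi):
    """Crude brick-wall bandpass: zero FFT bins outside [lo, hi] Hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

def pitch_shift(signal, semitones):
    """Shift pitch by resampling (also changes duration; a rough
    stand-in for a DAW pitch-shift plugin)."""
    factor = 2.0 ** (semitones / 12.0)
    idx = np.arange(0, len(signal), factor)
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)        # 1 s, 440 Hz test tone
filtered = bandpass(tone, sr, 300.0, 3400.0)  # speech-band approximation
shifted = pitch_shift(tone, -4.0)             # four semitones down
```

A real matching procedure would chain several such stages (plus spectral smearing, equalization, and modulation) and let the listener adjust their parameters interactively; the point here is only the kind of signal operation involved.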