Contribution of Verbal Learning & Memory and Spectro-Temporal Discrimination to Speech Recognition in Cochlear Implant Users


Abstract

OBJECTIVES: Existing cochlear implant (CI) outcomes research demonstrates a high degree of variability in device effectiveness among experienced CI users. Increasing evidence suggests that verbal learning and memory (VL&M) may have an influence on speech recognition with CIs. This study examined the relations in CI users between visual measures of VL&M and speech recognition in a series of models that also incorporated spectro-temporal discrimination. Predictions were that (1) speech recognition would be associated with VL&M abilities and (2) VL&M would contribute to speech recognition outcomes above and beyond spectro-temporal discrimination in multivariable models of speech recognition.

METHODS: This cross-sectional study included 30 adult postlingually deaf experienced CI users who completed a nonauditory visual version of the California Verbal Learning Test-Second Edition (v-CVLT-II) to assess VL&M, and the Spectral-Temporally Modulated Ripple Test (SMRT), an auditory measure of spectro-temporal processing. Participants also completed a battery of word and sentence recognition tasks.

RESULTS: CI users showed significant correlations between some v-CVLT-II measures (short-delay free and cued recall, retroactive interference, and "subjective" organizational recall strategies) and speech recognition measures. Performance on the SMRT was correlated with all speech recognition measures. Hierarchical multivariable linear regression analyses showed that SMRT performance accounted for a significant degree of speech recognition outcome variance. Moreover, for all speech recognition measures, VL&M scores contributed independently beyond SMRT performance.

CONCLUSION: Measures of spectro-temporal discrimination and VL&M were associated with speech recognition in CI users. After accounting for spectro-temporal discrimination, VL&M contributed independently to performance on measures of speech recognition for words and sentences produced by single and multiple talkers.
LEVEL OF EVIDENCE: 3. Laryngoscope, 133:661-669, 2023.
