Modeling auditory coding: from sound to spikes

Abstract

Models are valuable tools for assessing how deeply we understand complex systems: only if we can replicate the output of a system from the function of its subcomponents can we assume that we have grasped its principles of operation. Conversely, discrepancies between model results and measurements reveal gaps in our current knowledge, which can in turn be targeted by matched experiments. Models of the auditory periphery have improved greatly over the last decades and now account for many phenomena observed in experiments. While the cochlea is only partly accessible to experiments, models can extrapolate its behavior without gaps from base to apex and with arbitrary input signals. With models we can, for example, evaluate speech coding with large speech databases, which is not possible experimentally, and models have been tuned to replicate features of the human hearing organ, for which practically no invasive electrophysiological measurements are available. Auditory models have become instrumental in evaluating models of neuronal sound processing in the auditory brainstem and even at higher levels, where they provide realistic input. Finally, models can illustrate how a system as complicated as the inner ear works by visualizing its responses; the great advantage is that intermediate steps in several domains (mechanical, electrical, and chemical) are available, so that a consistent picture of how the output evolves can be drawn. It must be kept in mind, however, that no model yet replicates all physiological characteristics, and it is therefore critical to choose the most appropriate model (or models) for each research question. To facilitate this task, this paper not only reviews three recent auditory models but also introduces a framework that allows researchers to switch between models easily, together with uniform evaluation and visualization scripts that enable direct comparisons between models.
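The abstract does not specify the framework's interface, but the idea of uniform evaluation scripts that work across interchangeable periphery models can be illustrated with a minimal sketch. Everything below is hypothetical: the class names (`PeripheryModel`, `ModelA`), the `spikes` method, and the toy half-wave-rectification stage are illustrative stand-ins, not the paper's actual API.

```python
import numpy as np
from abc import ABC, abstractmethod


class PeripheryModel(ABC):
    """Common interface so evaluation scripts stay model-agnostic (hypothetical)."""

    @abstractmethod
    def spikes(self, sound: np.ndarray, fs: int) -> np.ndarray:
        """Return auditory-nerve activity for a sound sampled at fs Hz."""


class ModelA(PeripheryModel):
    """Hypothetical stand-in for one published periphery model."""

    def spikes(self, sound, fs):
        # Placeholder transduction stage: half-wave rectification,
        # mimicking the one-sided response of inner hair cells.
        return np.maximum(sound, 0.0)


def evaluate(model: PeripheryModel, sound, fs):
    # The same evaluation code runs unchanged for any model that
    # implements the shared interface.
    return model.spikes(sound, fs).mean()


# Example input: one second of a 1 kHz pure tone at 16 kHz sampling rate.
fs = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
print(evaluate(ModelA(), tone, fs))
```

Swapping in another model then only requires implementing the same interface; the evaluation and visualization code need not change, which is what enables direct comparisons between models.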
