Quantifying Population-level Neural Tuning Functions Using Ricker Wavelets and the Bayesian Bootstrap


Abstract

Experience changes the tuning of sensory neurons, including neurons in retinotopic visual cortex, as evident from work in humans and non-human animals. In human observers, visuo-cortical re-tuning has been studied during aversive generalization learning paradigms, in which the similarity of generalization stimuli (GSs) with a conditioned threat cue (CS+) is used to quantify tuning functions. This work utilized pre-defined tuning shapes reflecting prototypical generalization (Gaussian) and sharpening (Difference-of-Gaussians) patterns. This approach may constrain the ways in which re-tuning can be characterized, for example if tuning patterns do not match the prototypical functions or represent a mixture of functions. The present study proposes a flexible and data-driven method for precisely quantifying changes in neural tuning, based on the Ricker wavelet function and the Bayesian bootstrap. The method is illustrated using data from a study in which university students (n = 31) performed an aversive generalization learning task. Oriented gray-scale gratings served as CS+ and GSs, and white noise served as the unconditioned stimulus (US). Acquisition and extinction of the aversive contingencies were examined while steady-state visual evoked potentials (ssVEPs) and alpha-band (8-13 Hz) power were measured from scalp EEG. Results showed that the Ricker wavelet model fitted the ssVEP and alpha-band data well. The pattern of re-tuning in ssVEP amplitude across the stimulus gradient resembled a generalization (Gaussian) shape in the acquisition phase and a sharpening (Difference-of-Gaussians) shape in the extinction phase. As expected, the pattern of re-tuning in alpha power took the form of a generalization shape in both phases. The Ricker-based approach led to greater Bayes factors and more interpretable results compared to prototypical tuning models.
The results highlight the promise of the current method for capturing the precise nature of visuo-cortical tuning functions, unconstrained by the exact implementation of prototypical a-priori models.
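To make the core idea concrete, the sketch below shows how a two-parameter Ricker ("Mexican hat") wavelet can be fit to a group-level tuning gradient under the Bayesian bootstrap, in which participant weights are drawn from a flat Dirichlet distribution rather than resampled with replacement. This is a minimal illustration on synthetic data, not the authors' implementation: the gradient values, parameterization, and all variable names are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker(x, a, sigma):
    """Two-parameter Ricker wavelet centered on the CS+ (x = 0).

    Depending on where the gradient samples fall relative to sigma,
    the curve can look Gaussian-like (generalization) or show
    inhibitory side lobes (sharpening), which is what makes it a
    flexible alternative to fixed prototype shapes.
    """
    return a * (1 - (x / sigma) ** 2) * np.exp(-(x ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)

# Hypothetical stimulus gradient: CS+ at 0, GSs at increasing
# orientation distance (arbitrary similarity steps).
x = np.array([0.0, 1.0, 2.0, 3.0])
true_curve = ricker(x, a=1.0, sigma=1.5)

# Simulated single-participant tuning curves (n = 31, as in the study).
data = true_curve + rng.normal(0.0, 0.1, size=(31, x.size))

# Bayesian bootstrap: each draw reweights participants with
# Dirichlet(1, ..., 1) weights and refits the Ricker model.
n_draws, n_sub = 500, data.shape[0]
sigma_draws = np.empty(n_draws)
for i in range(n_draws):
    w = rng.dirichlet(np.ones(n_sub))
    group_curve = w @ data                      # weighted group tuning curve
    (a_hat, s_hat), _ = curve_fit(ricker, x, group_curve, p0=[1.0, 1.0])
    sigma_draws[i] = abs(s_hat)                 # ricker() is even in sigma

# Posterior summary for the tuning-width parameter.
lo, hi = np.percentile(sigma_draws, [2.5, 97.5])
```

The resulting distribution of `sigma_draws` plays the role of a posterior over tuning width, so credible intervals fall out directly from percentiles instead of requiring a parametric sampling model.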
