Selective peripheral nerve recording using simulated human median nerve activity and convolutional neural networks


Abstract

BACKGROUND: Intuitive control of prosthetic limbs is difficult to achieve, and poor control is a common reason for device abandonment. Peripheral nerve interfaces can convert motor intent into commands for a prosthesis. The Extraneural Spatiotemporal Compound Action Potentials Extraction Network (ESCAPE-NET) is a convolutional neural network (CNN) previously demonstrated to discriminate neural sources in rat sciatic nerves. ESCAPE-NET was designed to operate on data from multi-channel nerve cuff arrays, using the resulting spatiotemporal signatures to classify individual naturally evoked compound action potentials (nCAPs) according to their source fascicles. The applicability of this approach to larger and more complex nerves is not well understood. To support future translation to humans, the objective of this study was to characterize the performance of this approach in a computational model of the human median nerve.

METHODS: Using a cross-sectional immunohistochemistry image of a human median nerve, a finite-element model was generated and used to simulate extraneural recordings. ESCAPE-NET was used to classify nCAPs by source location for varying numbers of sources and noise levels. Its performance was also compared to that of ResNet-50 and MobileNet-V2 in classifying the human nerve cuff data.

RESULTS: Classification accuracy was inversely related to the number of nCAP sources. ESCAPE-NET achieved 97.8% ± 0.1% (3-class) and 89.3% ± 5.4% (10-class) in low-noise conditions, and 70.3% ± 0.1% (3-class) and 52.5% ± 0.3% (10-class) in high-noise conditions. ESCAPE-NET overall outperformed both MobileNet-V2 (low noise: 96.5% ± 1.1% 3-class, 84.9% ± 1.7% 10-class; high noise: 86.0% ± 0.6% 3-class, 41.4% ± 0.9% 10-class) and ResNet-50 (low noise: 71.2% ± 18.6% 3-class, 40.1% ± 22.5% 10-class; high noise: 81.3% ± 4.4% 3-class, 31.9% ± 4.4% 10-class).

CONCLUSION: All three networks learned to differentiate nCAPs from different sources, as evidenced by above-chance performance in all cases. ESCAPE-NET had the most robust performance, although accuracy decreased as the number of classes and the noise level increased. These results provide valuable translational guidelines for designing neural interfaces for human use.
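The core input representation described above, arranging samples from a multi-channel nerve cuff into a channels-by-time "signature image" whose spatiotemporal pattern reflects the source location, can be illustrated with a toy simulation. The sketch below is purely hypothetical: the contact count, waveform shape, spatial profiles, and noise level are all assumptions, and a simple nearest-template classifier stands in for the actual CNN, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CH, N_T = 8, 40   # hypothetical cuff contacts and samples per window
CLASSES = 3         # hypothetical number of nCAP source locations

def ncap_window(src, noise=0.05):
    """Simulate one nCAP as a channels x time 'signature image'.

    A biphasic waveform is scaled by a spatial profile peaking near the
    contacts closest to the (hypothetical) source fascicle, plus noise.
    """
    t = np.arange(N_T)
    wave = (np.exp(-0.5 * ((t - 15) / 3.0) ** 2)
            - 0.6 * np.exp(-0.5 * ((t - 22) / 4.0) ** 2))
    profile = np.exp(-0.5 * ((np.arange(N_CH) - src * 3) / 1.5) ** 2)
    return profile[:, None] * wave[None, :] + noise * rng.standard_normal((N_CH, N_T))

# Labelled dataset of signature images: 100 windows per source class
X = np.stack([ncap_window(c) for c in range(CLASSES) for _ in range(100)])
y = np.repeat(np.arange(CLASSES), 100)

# Nearest-template baseline: mean signature image per class
templates = np.stack([X[y == c].mean(axis=0) for c in range(CLASSES)])

def classify(img):
    """Assign the class whose mean signature image is closest (L2)."""
    return int(np.argmin(((templates - img) ** 2).sum(axis=(1, 2))))

acc = np.mean([classify(x) == c for x, c in zip(X, y)])
```

Because each simulated source has a distinct spatial profile across the contacts, even this template-matching baseline separates the classes well; the study's point is that a CNN can exploit the same spatiotemporal structure under harder, more realistic conditions.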
