Synaptic plasticity-based regularizer for artificial neural networks


Abstract

Regularization is an important tool for improving the generalization of artificial neural network (ANN) models. However, because it imposes no constraints at deployment time, it cannot guarantee that a model will keep working in a real environment where the input data distribution changes. Inspired by neuroplasticity, this paper introduces a bounded regularization method that can be safely applied during the deployment phase. First, the reliability of neuron outputs is improved by extending our recent neuronal masking method to generate new supporting neurons. The model is then regularized by incorporating a synaptic connection module containing the connections of the generated neurons to their previous layer. These connections are optimized online through a synaptic rewiring process triggered by information about the input distribution. This process is formulated as a bilevel mixed-integer nonlinear program (MINLP) whose objective is to minimize the outer risk of the model output by identifying the connections that minimize the inner risk of the neuron output. To address this optimization problem, a single-wave scheme is introduced that decomposes the problem into smaller, parallel sub-problems, each minimizing the inner cost function while ensuring that the aggregated solution minimizes the outer one. In addition, a storage/recovery memory module is proposed to memorize these connections and their corresponding risks, enabling the model to retrieve previous knowledge when encountering similar situations. Experimental results on classification and regression tasks show an improvement in accuracy of around 8% over state-of-the-art techniques. As a result, the proposed regularization method enhances the adaptability and robustness of ANN models in variable environments.
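The bilevel structure described in the abstract can be illustrated with a deliberately small toy sketch. This is not the paper's algorithm: the layer shapes, the `inner_risk`/`outer_risk` definitions, and the brute-force enumeration of binary connection masks are all illustrative assumptions, standing in for the actual MINLP formulation and single-wave decomposition. The sketch only shows the bilevel logic: among connection patterns that (approximately) minimize the inner risk of a neuron's output, pick the one minimizing the outer risk of the model output.

```python
import itertools
import numpy as np

def inner_risk(mask, W, x, target_h):
    """Inner risk (toy): error of the supporting neurons' output
    when only the connections selected by `mask` are kept."""
    h = np.tanh((W * mask) @ x)          # mask zeroes out pruned input connections
    return float(np.mean((h - target_h) ** 2))

def outer_risk(mask, W, V, x, y):
    """Outer risk (toy): error of the model output built on the masked layer."""
    h = np.tanh((W * mask) @ x)
    out = V @ h                          # readout layer on top of the masked neurons
    return float(np.mean((out - y) ** 2))

def rewire(W, V, x, y, target_h, tol=0.1):
    """Brute-force the bilevel problem for a tiny layer:
    enumerate all binary connection masks, keep those whose inner risk is
    within `tol` (relative) of the inner optimum, and among that feasible
    set return the mask with the lowest outer risk."""
    n = W.shape[1]
    masks = [np.array(m, dtype=float)
             for m in itertools.product([0.0, 1.0], repeat=n)]
    inner = [inner_risk(m, W, x, target_h) for m in masks]
    best_inner = min(inner)
    feasible = [m for m, r in zip(masks, inner)
                if r <= best_inner * (1.0 + tol)]
    return min(feasible, key=lambda m: outer_risk(m, W, V, x, y))
```

Exhaustive enumeration is only viable here because the toy layer has a handful of candidate connections (2^n masks); the paper's single-wave scheme instead decomposes the search into parallel sub-problems so that realistically sized layers remain tractable.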
