Learning in Probabilistic Boolean Networks via Structural Policy Gradients


Abstract

We revisit Probabilistic Boolean Networks (PBNs) as trainable function approximators. The key obstacle, the non-differentiability of structural choices (which predictors to read and which Boolean operators to apply), is addressed by casting the PBN's structure as a stochastic policy whose parameters are optimized with score-function (REINFORCE) gradients. Continuous output heads (logistic, linear, softmax, or policy logits) are trained with ordinary gradients. We call the resulting model a Learning PBN. We formalize the Learning PBN, derive unbiased structural gradients with variance reduction, and prove a universal approximation property over discretized inputs. Empirically, Learning PBNs approach ANN performance across classification (accuracy ↑), regression (RMSE ↓), representation quality via clustering (ARI ↑), and reinforcement learning (return ↑), while yielding interpretable, rule-like internal units. We analyze the effect of binning resolution, operator sets, and unit counts, and show how the learned logic stabilizes as training progresses. Our results indicate that PBNs can serve as general-purpose learners, competitive with ANNs in tabular and noisy regimes, without sacrificing interpretability.
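The core trick the abstract names, treating a discrete structural choice as a stochastic policy and training it with score-function (REINFORCE) gradients plus a baseline for variance reduction, can be illustrated in a few lines. The sketch below is not the paper's implementation; it is a minimal hypothetical example in which a single unit learns, by REINFORCE alone, to pick the Boolean operator (AND vs. OR) that matches a target truth table, with accuracy as the reward and an exponential-moving-average baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate Boolean operators; the structural "policy" is a
# categorical distribution over them, parameterized by logits.
OPS = [np.logical_and, np.logical_or]
logits = np.zeros(2)   # structural parameters (one categorical choice)
baseline = 0.0         # running reward baseline for variance reduction
lr, beta = 0.5, 0.9

# Target behaviour: the OR truth table on two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=bool)
y = np.array([0, 1, 1, 1], dtype=bool)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(200):
    p = softmax(logits)
    k = rng.choice(2, p=p)                 # sample a structural choice
    pred = OPS[k](X[:, 0], X[:, 1])
    reward = float((pred == y).mean())     # accuracy as reward
    baseline = beta * baseline + (1 - beta) * reward
    # Score-function gradient of log p(k) w.r.t. the logits is
    # one_hot(k) - p; scale it by the advantage (reward - baseline).
    grad_logp = -p
    grad_logp[k] += 1.0
    logits += lr * (reward - baseline) * grad_logp

print(np.argmax(logits))  # index of the operator the policy settles on
```

The same estimator extends to the full structural policy (predictor subsets and operators per unit); the baseline here plays the role of the variance reduction the abstract mentions, keeping the gradient unbiased while damping the noise of sampled rewards.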
