Decentralized Reinforcement Learning for Asymmetric Gene Network Interventions


Abstract

Gene regulatory networks (GRNs) regulate essential cellular functions, and their dysregulation contributes to diseases such as cancer and autoimmune disorders. Designing effective interventions is challenging due to (i) the adaptive resistance of cells to therapies and (ii) limited knowledge of gene states during the intervention process, which are observed only indirectly through gene expression data. To address these challenges, this paper develops a decentralized deep reinforcement learning framework for intervention in GRNs. The intervention process is formulated as an asymmetric two-player zero-sum game, in which a history-dependent intervention policy is derived against a cell that has complete knowledge of gene states. The optimal intervention policy is expressed as a Nash equilibrium policy, and a deep policy gradient approach is developed to approximate it. The analytical results demonstrate that under non-aggressive cell responses, the proposed intervention policy achieves higher-than-expected gains, ensuring robustness even against the most complex adaptive cellular responses. Furthermore, if the true system state becomes fully observable, the proposed method converges to the full-state Nash equilibrium. Numerical experiments on two benchmark GRN models, the p53-MDM2 and melanoma regulatory networks, validate the proposed method, demonstrating its superior adaptability under uncertainty compared to state-of-the-art intervention strategies.
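The core game-theoretic idea, namely approximating a Nash equilibrium of a two-player zero-sum game through iterative self-play, can be illustrated on a toy matrix game. The sketch below is not the paper's deep policy gradient method for GRNs; it is a minimal, hypothetical example using exponentiated-gradient (multiplicative-weights) self-play on a 2x2 zero-sum game, where the time-averaged strategies of both players approach the mixed Nash equilibrium. The function name, payoff matrix, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy zero-sum payoff matrix (matching pennies): the row player
# (intervener) receives A[i, j]; the column player (cell) receives -A[i, j].
# This stands in for the intervener-vs-cell payoff structure; it is NOT
# the GRN payoff from the paper.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def self_play_nash(A, steps=5000, lr=0.1):
    """Approximate a mixed Nash equilibrium of the zero-sum game x^T A y
    via exponentiated-gradient self-play. The last iterates may cycle,
    but the time-averaged strategies converge toward equilibrium."""
    th_x = np.array([1.0, 0.0])   # intervener logits (asymmetric start)
    th_y = np.array([0.0, 1.0])   # cell logits
    avg_x = np.zeros(A.shape[0])
    avg_y = np.zeros(A.shape[1])
    for _ in range(steps):
        x, y = softmax(th_x), softmax(th_y)
        th_x += lr * (A @ y)      # row player ascends its expected payoff
        th_y -= lr * (A.T @ x)    # cell minimizes, so it descends
        avg_x += x
        avg_y += y
    return avg_x / steps, avg_y / steps

x_bar, y_bar = self_play_nash(A)
# For matching pennies the unique Nash equilibrium is uniform play;
# both averaged strategies end up close to [0.5, 0.5].
print(x_bar, y_bar)
```

In the paper's setting the action spaces are gene perturbations rather than two abstract moves, the intervener's policy is history-dependent because gene states are only partially observed, and the policies are neural networks trained by deep policy gradient rather than tabular logits; this toy only conveys the equilibrium-seeking self-play structure.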
