Adaptive gradient scaling: integrating Adam and landscape modification for protein structure prediction



Abstract

BACKGROUND: Protein structure prediction is a central scientific problem: it is NP-hard, yet it has wide-ranging applications in drug discovery and biotechnology. Because experimental structure determination remains expensive and time-consuming, computational prediction offers a scalable, cost-effective alternative, and machine learning has revolutionized the field. Despite this success, machine learning methods face fundamental limitations when optimizing complex, high-dimensional energy landscapes, motivating research into more robust and efficient optimization algorithms. RESULTS: This study presents a novel approach to protein structure prediction that integrates the Landscape Modification (LM) method with the Adam optimizer in OpenFold. The central idea is to change the optimization dynamics through a gradient scaling mechanism based on energy landscape transformations. LM dynamically rescales gradients using a threshold parameter and a transformation function, improving the optimizer's ability to escape local minima, traverse flat or rough regions of the landscape more efficiently, and potentially converge faster to global or high-quality local optima. By incorporating simulated annealing into the LM approach, we propose LM SA, a variant designed to improve convergence stability while enabling more efficient exploration of complex landscapes. CONCLUSION: We compare standard Adam, LM, and LM SA across different datasets and computational conditions, evaluating performance with loss function values, the predicted Local Distance Difference Test (pLDDT), distance-based Root Mean Square Deviation (dRMSD), and Template Modeling (TM) scores.
Our results show that LM and LM SA outperform standard Adam across all metrics, with faster convergence and better generalization, particularly on proteins not included in the training set. These results demonstrate that integrating landscape-aware gradient scaling into first-order optimizers advances computational optimization and improves prediction performance on complex problems such as protein folding.
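The abstract does not give the exact transformation function or threshold schedule, so the following is a minimal, scalar sketch of the gradient-scaling idea, assuming the linear transformation f(u) = u that is common in the landscape-modification literature. The names `lm_scale` and `LMAdam`, the running-minimum threshold update for the SA-style variant, and all hyperparameter defaults are illustrative assumptions, not the paper's implementation:

```python
import math

def lm_scale(loss, c, alpha=1.0, f=lambda u: u):
    """Gradient scale factor from landscape modification.

    One common LM form transforms the loss H into
    H_f(x) = integral_0^H(x) du / (1 + alpha * f(max(u - c, 0))),
    so by the chain rule grad H_f = grad H * lm_scale(H, c):
    above the threshold c the gradient is damped, flattening
    high-loss regions of the landscape.
    """
    return 1.0 / (1.0 + alpha * f(max(loss - c, 0.0)))

class LMAdam:
    """Scalar Adam with LM gradient rescaling (illustrative sketch)."""

    def __init__(self, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8,
                 c=0.0, alpha=1.0, anneal_c=False):
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.c, self.alpha = c, alpha
        self.anneal_c = anneal_c  # "LM SA"-flavoured adaptive threshold
        self.m = self.v = 0.0
        self.t = 0

    def step(self, x, loss, grad):
        self.t += 1
        if self.anneal_c:
            # SA-style variant (assumed form): the threshold tracks the
            # best loss seen so far, so damping is measured relative to
            # the current record rather than a fixed level.
            self.c = min(self.c, loss)
        # Rescale the raw gradient by the LM factor, then apply the
        # standard Adam moment updates and bias correction.
        g = grad * lm_scale(loss, self.c, self.alpha)
        self.m = self.b1 * self.m + (1.0 - self.b1) * g
        self.v = self.b2 * self.v + (1.0 - self.b2) * g * g
        m_hat = self.m / (1.0 - self.b1 ** self.t)
        v_hat = self.v / (1.0 - self.b2 ** self.t)
        return x - self.lr * m_hat / (math.sqrt(v_hat) + self.eps)

# Toy usage: minimise f(x) = x^2 (gradient 2x) starting from x = 3.
opt = LMAdam(lr=0.1, c=0.5)
x = 3.0
for _ in range(300):
    x = opt.step(x, x * x, 2.0 * x)
```

Because Adam normalizes by the second-moment estimate, a constant rescaling would largely cancel; the effect here comes from the scale factor changing with the current loss, shrinking effective steps high above the threshold and leaving them untouched near it.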
