Pretrained protein language models choose between sequence novelty and structural completeness

Abstract

Protein language models (PLMs) have gained increasing acceptance in tasks ranging from variant effect prediction in disease to the optimization and de novo design of proteins with improved stability, target-binding affinity, and catalytic performance. Despite encouraging performance in such applications, little is understood about the degree to which PLM-generated sequences (putative novel protein outputs) recapitulate the broad biophysical rules and the diversity of sequence, structure, and function that define natural protein space, knowledge that is vital for extending the design capacity of PLMs to ever more complex systems. Toward this end, we computationally profile and characterize the sequence and structure statistics and properties of hundreds of thousands of candidate small proteins produced through free, unconstrained generation from architecturally distinct PLMs. We show that although these models exhibit a prodigious latent capacity to access novel amino-acid sequences, they struggle to approach the structural variation on plain display in nature. Moreover, we uncover a stark tradeoff between prioritizing sequence novelty and structural breadth, exemplified by a "helical bundle trap" that dominates model output when generation aims outside the comfortable bounds and evolutionary organization of natural sequences. These findings underscore a critical need for strategies that can guide PLMs toward generating the full richness of protein sequence, structure, and function that is consistent with governing biophysics but remains untapped in design contexts.
