How well do contextual protein encodings learn structure, function, and evolutionary context?


Abstract

In proteins, the optimal residue at any position is determined by its structural, evolutionary, and functional contexts, much as a word can be inferred from its context in language. We trained masked label prediction models to learn representations of amino acid residues in different contexts. We focus on questions of evolution and structural flexibility, and on whether and how contextual encodings derived through pretraining and fine-tuning can improve representations for specialized contexts. Sequences sampled from our learned representations fold into the template structure and reflect sequence variation seen in related proteins. For flexible proteins, sampled sequences traverse the full conformational space of the native sequence, suggesting that plasticity is encoded in the template structure. For protein-protein interfaces, generated sequences replicate wild-type binding energies in silico across diverse interfaces and binding strengths. For the antibody-antigen interface, fine-tuning recapitulates conserved sequence patterns, while pretraining on general contexts improves sequence recovery for the hypervariable H3 loop. A record of this paper's transparent peer review process is included in the supplemental information.
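To illustrate the masked-prediction idea underlying the abstract, here is a minimal, hypothetical sketch: mask one position in a protein sequence and predict the hidden residue from the residues observed at that position across related (aligned) sequences. This toy column-frequency model only conveys the intuition of inferring a residue from context; the paper's actual models are learned neural encoders, not this heuristic.

```python
from collections import Counter

def predict_masked(alignment, mask_pos):
    """Predict the residue at mask_pos from an alignment column.

    A toy stand-in for masked label prediction: the most frequent
    residue observed at that position among related sequences.
    """
    column = [s[mask_pos] for s in alignment if s[mask_pos] != "-"]
    return Counter(column).most_common(1)[0][0]

# Hypothetical toy alignment of related sequences.
alignment = ["MKVLA", "MKILA", "MKVLG", "MRVLA"]
# Query "MK?LA" with position 2 masked; column 2 holds V, I, V, V.
print(predict_masked(alignment, 2))  # -> V
```

In the trained models, this frequency count is replaced by a learned distribution conditioned on structural and evolutionary context, from which new sequences can be sampled.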
