ALLM-Ab: Active Learning-Driven Antibody Optimization Using Fine-Tuned Protein Language Models

Abstract

Antibody engineering requires a delicate balance between enhancing binding affinity and maintaining developability properties. In this study, we present ALLM-Ab (Active Learning with Language Models for Antibodies), a novel active learning framework that leverages fine-tuned protein language models to accelerate antibody sequence optimization. By coupling parameter-efficient fine-tuning via low-rank adaptation with a learning-to-rank strategy, ALLM-Ab accurately assesses mutant fitness while efficiently generating candidate sequences through direct sampling from the model's probability distribution. Furthermore, by integrating a multi-objective optimization scheme that incorporates antibody developability metrics, the framework ensures that optimized sequences retain therapeutic-antibody-like properties alongside improved binding affinity. We validate ALLM-Ab in both offline experiments using deep mutational scanning (DMS) data from the BindingGYM dataset and online active learning trials targeting Flex ddG energy minimization across 15 antigens. Results demonstrate that ALLM-Ab not only expedites the discovery of high-affinity variants compared to baseline Gaussian process regression and genetic algorithm-based approaches, but also preserves critical antibody developability metrics. This work lays the foundation for more efficient and reliable antibody design strategies, with the potential to significantly reduce therapeutic development costs.
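
The abstract names three concrete mechanisms: parameter-efficient fine-tuning via low-rank adaptation (LoRA), a learning-to-rank objective for assessing mutant fitness, and candidate generation by sampling directly from the model's probability distribution. The PyTorch sketch below is a minimal illustration of how these pieces can fit together; the TinyPLM stand-in model, the logistic pairwise ranking loss, the single-position sampling scheme, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    # Minimal, self-contained PyTorch sketch of the three mechanisms named
    # in the abstract: (1) LoRA fine-tuning of a frozen weight matrix,
    # (2) a pairwise learning-to-rank loss over measured mutant fitness,
    # (3) proposing candidates by sampling from the model's distribution.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    AA = "ACDEFGHIKLMNPQRSTVWY"        # 20 canonical amino acids
    VOCAB = len(AA)

    class LoRALinear(nn.Module):
        """A frozen base linear layer plus a trainable low-rank update."""
        def __init__(self, d_in, d_out, rank=8, alpha=16.0):
            super().__init__()
            self.base = nn.Linear(d_in, d_out)    # stands in for a pretrained weight
            self.base.weight.requires_grad_(False)
            self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
            self.B = nn.Parameter(torch.zeros(d_out, rank))  # update starts at zero
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * F.linear(F.linear(x, self.A), self.B)

    class TinyPLM(nn.Module):
        """Toy stand-in for a pretrained protein language model."""
        def __init__(self, d_model=64):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, d_model)
            self.embed.weight.requires_grad_(False)  # only LoRA params are tuned
            self.head = LoRALinear(d_model, VOCAB)   # per-position amino-acid logits

        def forward(self, tokens):                   # tokens: (batch, length)
            return self.head(self.embed(tokens))     # (batch, length, VOCAB)

    def sequence_score(model, tokens):
        """Sum of per-position log-probabilities: a scalar fitness surrogate."""
        logp = model(tokens).log_softmax(-1)
        return logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1).sum(-1)

    def pairwise_rank_loss(scores, fitness):
        """Learning-to-rank: for every pair, the sequence with higher measured
        fitness should receive the higher model score (logistic pairwise loss)."""
        diff = scores.unsqueeze(0) - scores.unsqueeze(1)   # diff[i, j] = s_j - s_i
        label = (fitness.unsqueeze(0) > fitness.unsqueeze(1)).float()
        mask = (fitness.unsqueeze(0) != fitness.unsqueeze(1)).float()
        loss = F.binary_cross_entropy_with_logits(diff, label, reduction="none")
        return (loss * mask).sum() / mask.sum().clamp(min=1.0)

    def sample_mutants(model, wildtype, n=4):
        """Propose single-point mutants by drawing an amino acid from the
        model's distribution at a random position (direct sampling)."""
        with torch.no_grad():
            probs = model(wildtype.unsqueeze(0))[0].softmax(-1)  # (length, VOCAB)
        mutants = []
        for _ in range(n):
            pos = torch.randint(len(wildtype), (1,)).item()
            mutant = wildtype.clone()
            mutant[pos] = torch.multinomial(probs[pos], 1).item()
            mutants.append(mutant)
        return torch.stack(mutants)

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = TinyPLM()
        seqs = torch.randint(VOCAB, (6, 20))   # one labelled batch of "mutants"
        fitness = torch.randn(6)               # their (toy) measured affinities
        trainable = [p for p in model.parameters() if p.requires_grad]
        opt = torch.optim.Adam(trainable, lr=1e-2)
        for _ in range(50):                    # one round of rank fine-tuning
            loss = pairwise_rank_loss(sequence_score(model, seqs), fitness)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print("rank loss:", loss.item())
        print("next candidates:", sample_mutants(model, seqs[0]).shape)

In a full active learning round, the sampled candidates would then be scored (here, with Flex ddG), weighed against the developability objectives described above, and the new measurements fed back into the next round of rank fine-tuning.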
