Transfer learning on protein language models improves antimicrobial peptide classification


Abstract

Antimicrobial peptides (AMPs) are essential components of the innate immune system in humans and other organisms, exhibiting potent activity against a broad spectrum of pathogens. Their potential therapeutic applications, particularly in combating antibiotic resistance, have made AMP classification a vital task in computational biology. However, the scarcity of labeled AMP sequences, coupled with the diversity and complexity of AMPs, poses significant challenges for training standalone AMP classifiers. Self-supervised learning has emerged as a powerful paradigm for addressing such challenges across various fields, leading to the development of Protein Language Models (PLMs). These models leverage vast amounts of unlabeled protein sequences to learn biologically relevant features, providing transferable protein sequence representations (embeddings) that can be fine-tuned for downstream tasks even with limited labeled data. This study evaluates the performance of several publicly available PLMs in AMP classification using transfer learning techniques and benchmarks them against state-of-the-art neural-based classifiers. Our key findings include: (a) model scale is crucial, with classification performance consistently improving with increasing model size; (b) state-of-the-art results are achieved with minimal effort by combining PLM embedding representations with shallow classifiers; and (c) classification performance is further enhanced through efficient fine-tuning of PLM parameters. Code showcasing our pipelines is available at https://github.com/EliasGeorg/PLM_AMP_Classification.
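The "embeddings plus shallow classifier" approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the synthetic vectors below stand in for per-sequence embeddings that would, in practice, be obtained from a pretrained PLM (e.g. mean-pooled per-residue representations from a model such as ESM-2), and the shallow classifier is a logistic regression trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for PLM embeddings. In a real pipeline, each row
# would be a fixed-size embedding of a peptide sequence produced by a
# pretrained protein language model; here the two classes (AMP vs. non-AMP)
# are simulated as Gaussian clusters with different means.
dim, n = 32, 200
X_pos = rng.normal(loc=0.5, scale=1.0, size=(n, dim))   # "AMP" embeddings
X_neg = rng.normal(loc=-0.5, scale=1.0, size=(n, dim))  # "non-AMP" embeddings
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Shallow classifier on top of frozen embeddings: logistic regression
# fit with full-batch gradient descent on the cross-entropy loss.
w, b, lr = np.zeros(dim), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient w.r.t. weights
    b -= lr * float(np.mean(p - y))          # gradient w.r.t. bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.3f}")
```

Because the PLM's parameters stay frozen, only the small weight vector of the shallow classifier is trained, which is why this route works even with the limited labeled AMP data the abstract describes; the paper's fine-tuning results go one step further by also updating (a subset of) the PLM's own parameters.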
