Intrinsic dataset features drive mutational effect prediction by protein language models



Abstract

Protein language models (pLMs) are commonly used for predicting protein fitness landscapes, but their wide range of performance across datasets remains poorly understood. We evaluated supervised transfer learning on 41 viral and 33 cellular deep-mutational-scanning (DMS) datasets using embeddings from multiple pLMs. We observed consistently lower predictive performance on viral datasets compared to cellular datasets, independent of model architecture or transfer learning strategy. Surprisingly, a simple baseline model that predicted site mean fitness matched or outperformed supervised models on many datasets, highlighting the dominant role of site effects. Analysis of site variability using two metrics, relative variability of site means (RVSM) and fraction of highly variable sites (FHVS), revealed that patterns of fitness variation within and among sites constrain model performance and largely explain the observed differences between viral and cellular datasets. Moreover, splitting training and test data by site, rather than pooling, revealed that supervised models often rely on site effects rather than capturing broader mutational patterns. These findings highlight limitations of current pLMs for mutational effect prediction and suggest that dataset composition, rather than model architecture or training, is the primary driver of predictive success.

Significance Statement

Mutational effect prediction with protein language models varies widely in accuracy depending on the dataset considered. While poor performance is commonly equated with poor model quality, we show here that intrinsic dataset features, such as the variability of fitness values within and among sites, are critical predictors of model performance. Moreover, we show that many existing benchmarks overestimate model performance by allowing training data to leak into the test set. In fact, in many cases, protein language models barely outperform a naive predictor relying entirely on mean fitness values at individual sites. In aggregate, our study reveals that protein language models fail to capture the site-level mutational constraints critical for fitness prediction, despite claims of learning local sequence context.
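To make the ideas in the abstract concrete, the sketch below illustrates the site-mean baseline, plausible forms of the two site-variability metrics, and a site-wise train/test split on a toy DMS table. The column names, the specific formulas for RVSM and FHVS, and the variability threshold are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np
import pandas as pd

# Hypothetical DMS table: one row per mutation, with the protein site it
# occurs at and its measured fitness (toy values, not real data).
dms = pd.DataFrame({
    "site":    [1, 1, 1, 2, 2, 3, 3, 3],
    "fitness": [0.9, 0.8, 1.0, 0.1, 0.2, 0.5, 0.9, 0.1],
})

# Naive baseline: predict each mutation's fitness as the mean fitness of
# all mutations observed at the same site.
site_mean_prediction = dms.groupby("site")["fitness"].transform("mean")

# Relative variability of site means (RVSM): spread of per-site mean
# fitness relative to the overall spread of fitness values. A high value
# means site identity alone explains much of the fitness variation.
per_site_mean = dms.groupby("site")["fitness"].mean()
rvsm = per_site_mean.std() / dms["fitness"].std()

# Fraction of highly variable sites (FHVS): share of sites whose
# within-site fitness spread exceeds a threshold (0.25 is arbitrary here).
within_site_sd = dms.groupby("site")["fitness"].std()
fhvs = (within_site_sd > 0.25).mean()

# Site-wise split: hold out entire sites so that per-site information
# cannot leak from training data into the test set, unlike pooled splits.
rng = np.random.default_rng(0)
held_out_sites = set(rng.choice(dms["site"].unique(), size=1, replace=False))
test_mask = dms["site"].isin(held_out_sites)
train, test = dms[~test_mask], dms[test_mask]
```

Under a pooled split, other mutations at a test site typically remain in the training set, so a model can score well by memorizing site means; holding out whole sites removes that shortcut and tests whether broader mutational patterns were learned.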
