Gene-LLMs: a comprehensive survey of transformer-based genomic language models for regulatory and clinical genomics


Abstract

The convergence of natural language processing (NLP) and genomics has given rise to a new class of transformer-based models, genome large language models (Gene-LLMs), capable of interpreting the language of life at unprecedented scale and resolution. These models are transforming bioinformatics: trained on raw nucleotide sequences, gene expression data, and multi-omic annotations, they leverage self-supervised pretraining to decipher the complex regulatory grammar hidden within the genome. This survey presents a comprehensive overview of the Gene-LLM lifecycle, covering raw data ingestion, k-mer or gene-level tokenization, and pretext learning tasks such as masked nucleotide prediction and sequence alignment. We catalog their wide range of applications, spanning crucial downstream tasks such as enhancer and promoter identification, chromatin-state modeling, RNA-protein interaction prediction, and synthetic sequence generation. We further examine the impact of Gene-LLMs on functional genomics, clinical diagnostics, and evolutionary inference by analyzing recent benchmarks, including CAGI5, GenBench, NT-Bench, and BEACON. We also highlight recent architectural advances, including encoder-decoder modifications and biologically informed positional embeddings, which may enhance both interpretability and translational potential. Finally, this study outlines a pathway toward federated genomic learning, multimodal sequence modeling, and low-resource adaptation for rare-variant discovery, establishing Gene-LLMs as a cornerstone technology for the responsible and proactive future of biomedicine.
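The k-mer tokenization and masked nucleotide prediction steps mentioned above can be illustrated with a minimal sketch. This is not code from any surveyed model; the function names, the mask rate, and the `[MASK]` token convention are illustrative assumptions modeled on common masked-language-modeling practice.

```python
import random

def kmer_tokenize(seq: str, k: int = 6, stride: int = 1) -> list[str]:
    """Split a nucleotide sequence into overlapping k-mer tokens."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

def mask_tokens(tokens: list[str], mask_rate: float = 0.15, seed: int = 0):
    """Replace a fraction of tokens with [MASK]; return model inputs and
    the prediction targets for the masked positions."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append("[MASK]")
            targets.append(tok)   # model must recover the original k-mer
        else:
            inputs.append(tok)
            targets.append(None)  # unmasked positions are not scored
    return inputs, targets

seq = "ATGCGTACGTTAGC"
tokens = kmer_tokenize(seq, k=6)   # 9 overlapping 6-mers
inputs, targets = mask_tokens(tokens)
```

In practice, Gene-LLM tokenizers vary (overlapping vs. non-overlapping k-mers, byte-pair encoding, or single-nucleotide tokens), but the self-supervised objective is the same: predict the hidden tokens from their sequence context.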
