Abstract
Foundation models (FMs) have been built to analyze single-cell data with varying degrees of success. Here, we present scELMo (single-cell embedding from language models), a method for analyzing single-cell data with the help of large language models (LLMs). LLMs can generate both descriptions of metadata and embeddings of those descriptions. We combine the embeddings from LLMs with the raw data under a zero-shot learning framework, and further extend this approach with a fine-tuning framework to handle different tasks. We demonstrate that scELMo is capable of cell clustering, batch effect correction, and cell-type annotation without training a new model. Moreover, the fine-tuning framework of scELMo can help with more challenging tasks, including in silico treatment analysis and modeling perturbation effects. Compared with existing FMs, scELMo has a lighter structure and lower resource requirements, suggesting a more promising path for single-cell data analysis.
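
The sketch below illustrates one way the zero-shot combination described above could work; it is a minimal example under our own assumptions (hypothetical function and variable names, gene embeddings assumed to be precomputed from an LLM embedding endpoint), not the paper's exact implementation.

    # Minimal sketch (hypothetical names): form zero-shot cell embeddings by
    # combining LLM-derived gene embeddings with raw expression values.
    import numpy as np

    def cell_embeddings(expr: np.ndarray, gene_emb: np.ndarray) -> np.ndarray:
        """expr: cells x genes expression matrix (raw or normalized counts).
        gene_emb: genes x d matrix of LLM embeddings of gene descriptions.
        Returns a cells x d matrix where each cell is an expression-weighted
        average of gene embeddings (one possible zero-shot strategy)."""
        weights = expr / np.clip(expr.sum(axis=1, keepdims=True), 1e-8, None)
        return weights @ gene_emb

The resulting cell embeddings could then be fed to standard downstream tools (e.g., clustering or nearest-neighbor annotation) without training a new model, which is the sense in which the abstract uses "zero-shot".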