Abstract
MOTIVATION: Protein function prediction remains one of the recalcitrant challenges in the life sciences and biomedicine: it must contend with a deluge of AI-designed proteins and, in the era of big data, with increasingly multi-modal information. By replacing low-throughput biological experiments with a high-throughput neural-network-based prediction framework, a universal multi-modal method offers a direct way to address the growing gap between the number of known sequences and the number of characterized functions.

RESULTS: To bridge this gap, we propose ProtGO, a three-step framework for predicting protein function that leverages the credible Gene Ontology (GO) knowledge base and integrates four common modalities. Specifically, we first introduce frontier pre-trained protein language models (PLMs) for representation learning on protein sequences, the mainstay functional modality. For the remaining multi-modal data, we design a text alignment module for explainable text descriptions, a taxonomy encoding module for species-specific taxonomy, and a GO graph embedding module for biological GO relations. Each module is independent and adapts to its respective modality. By harnessing these four knowledge representations, ProtGO maximizes the potential of GO resources, enhancing the performance of vanilla PLMs and biological language models (LMs) on downstream GO prediction tasks. Extensive experiments demonstrate that ProtGO significantly advances the ability of state-of-the-art PLMs to predict protein functions, with an increase of approximately 8% to 27% in the maximum F1 measure (Fmax) over the base models. These comprehensive studies confirm ProtGO's capability to deliver outstanding performance in protein function prediction by utilizing a rich blend of functional and evolutionary knowledge.

AVAILABILITY AND IMPLEMENTATION: Our source code and all the data are available at https://github.com/sunyatawang/ProtGO.
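The four-modality design described above can be illustrated with a minimal sketch: one embedding per modality (sequence PLM, text, taxonomy, GO graph) is concatenated and fed to a multi-label sigmoid head that scores each GO term. This is purely illustrative, not the authors' implementation; all dimensions, names, and weights are hypothetical toy values.

```python
import math
import random

random.seed(0)

# Hypothetical embedding sizes for the four modalities
# (sequence PLM, text, taxonomy, GO graph); all values are illustrative.
dims = {"sequence": 16, "text": 8, "taxonomy": 4, "go_graph": 8}
n_go_terms = 5  # toy number of GO terms to score

def fuse_and_predict(embeddings, weights, bias):
    """Concatenate per-modality embeddings, then apply a linear
    multi-label head with a sigmoid: one independent score per GO term."""
    fused = [x for m in sorted(embeddings) for x in embeddings[m]]
    scores = []
    for w_row, b in zip(weights, bias):
        logit = sum(w * x for w, x in zip(w_row, fused)) + b
        scores.append(1.0 / (1.0 + math.exp(-logit)))
    return scores

# Toy stand-ins for the outputs of each encoder module.
embeddings = {m: [random.gauss(0, 1) for _ in range(d)] for m, d in dims.items()}
total_dim = sum(dims.values())
weights = [[random.gauss(0, 0.1) for _ in range(total_dim)] for _ in range(n_go_terms)]
bias = [0.0] * n_go_terms

probs = fuse_and_predict(embeddings, weights, bias)
```

Because GO terms are not mutually exclusive, the head uses per-term sigmoids rather than a softmax, matching the multi-label nature of GO annotation.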