Abstract
MOTIVATION: Protein-protein interactions (PPIs) are central to many biological processes, influencing cellular function and offering potential therapeutic insights. The increasing availability of genomic and proteomic data has propelled computational approaches, particularly machine learning (ML), to the forefront of PPI prediction, addressing the limitations of traditional experimental methods, such as poor scalability and high cost. This paper presents an in-depth review of modern ensemble learning models for PPI prediction, focusing on XGBoost, Gradient Boosting, LightGBM, and Random Forest. Ensemble models are noteworthy for their ability to surpass traditional approaches by leveraging the combined strengths of multiple base learners. Unlike previous reviews, which often concentrate on theoretical comparisons or a limited selection of methods, this study offers a balanced analysis that combines empirical findings with experimental evaluations. The review examines recent advances in the field and assesses these models on critical factors such as scalability, interpretability, accuracy, and efficiency, providing a structured framework that highlights nuanced differences among the techniques.
SCIENTIFIC CONTRIBUTION: The article evaluates several ML techniques for PPI detection, with LightGBM and XGBoost emerging as the most effective methods owing to their high accuracy and computational efficiency, particularly on large and complex datasets. Gradient Boosting and Random Forest also perform strongly: Gradient Boosting excels at capturing non-linear relationships, while Random Forest offers robust interpretability. LightGBM stands out for its scalability and efficiency, whereas XGBoost benefits from built-in regularization that reduces overfitting.
RESULTS: Experimental evaluations across three benchmark datasets (DIP, HPRD, and STRING) showed that LightGBM consistently achieved the highest performance among all models, with average accuracy of up to 86% and the top sensitivity, specificity, and precision scores. XGBoost followed closely, showing strong generalization and robustness owing to its regularization capabilities. Gradient Boosting provided competitive accuracy but lagged slightly in computational efficiency. Random Forest, while the most interpretable, showed comparatively lower sensitivity but maintained solid precision, highlighting its strength in minimizing false positives. Overall, LightGBM and XGBoost outperformed the other methods in handling large, complex, and imbalanced PPI datasets. These results affirm the suitability of advanced ensemble learning techniques for PPI prediction and offer practical guidance on selecting models according to performance priorities and application constraints.