Abstract
Background/Objectives: The automatic identification of coronary artery dominance holds critical importance for clinical decision-making in cardiovascular medicine, influencing diagnosis, treatment planning, and risk stratification. Traditional classification methods rely on the manual visual interpretation of coronary angiograms, and current deep learning approaches typically classify right coronary artery (RCA) and left coronary artery (LCA) angiograms separately. This study aims to develop and evaluate an integrated video-based deep learning framework for classifying coronary dominance without distinguishing between RCA and LCA angiograms. Methods: Three advanced video-based deep learning models were implemented using the MMAction2 framework: Temporal Segment Networks (TSNs), the Video Swin Transformer (VST), and VideoMAEv2. These models were trained and evaluated on a large dataset derived from a publicly available source. The integrated approach processes entire angiographic video sequences, eliminating the need for separate RCA and LCA identification during preprocessing. Results: The proposed framework demonstrated strong performance in classifying coronary dominance. The best test accuracies achieved with TSNs, VST, and VideoMAEv2 were 87.86%, 92.12%, and 92.89%, respectively. Transformer-based models showed superior accuracy compared to convolution-based methods, highlighting their effectiveness in capturing spatiotemporal patterns in angiographic videos. Conclusions: This study introduces a unified video-based deep learning approach for coronary dominance classification, eliminating manual arterial branch separation and reducing preprocessing complexity. The results indicate that transformer-based models, particularly VideoMAEv2, offer highly accurate and clinically feasible solutions, contributing to the development of objective and automated diagnostic tools in cardiovascular imaging.
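As context for the Methods, the sketch below shows how a binary dominance classifier of this kind might be declared in an MMAction2-style Python config. It is a minimal illustration only: the backbone choice, channel widths, clip length, and head type are assumptions for the sake of the example, not the settings used in the study.

```python
# Hypothetical MMAction2-style config fragment (plain Python dicts).
# All component names and hyperparameters are illustrative assumptions,
# not the configuration reported in the paper.

# Two output classes: right-dominant vs. left-dominant circulation.
num_classes = 2

model = dict(
    type='Recognizer3D',            # spatiotemporal video recognizer
    backbone=dict(
        type='VisionTransformer',   # VideoMAE-style ViT backbone (assumed)
        img_size=224,
        patch_size=16,
        embed_dims=768,
    ),
    cls_head=dict(
        type='TimeSformerHead',     # simple linear classification head (assumed)
        num_classes=num_classes,
        in_channels=768,
    ),
)

# Sample one 16-frame clip from each angiographic video (assumed setting);
# no RCA/LCA separation step appears anywhere in the pipeline.
train_pipeline = [
    dict(type='DecordInit'),
    dict(type='SampleFrames', clip_len=16, frame_interval=4, num_clips=1),
    dict(type='DecordDecode'),
    dict(type='Resize', scale=(224, 224)),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs'),
]
```

In MMAction2, such a config is consumed by the training and inference tooling, so swapping the backbone dict (e.g. to a TSN or Swin variant) changes the model without altering the data pipeline, which is what allows the three architectures to be compared on identical inputs.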