Abstract
BACKGROUND: A model using stacked hourglass neural network modules was developed for automated lower limb alignment measurement on full-leg frontal radiographs. The goal was to assess whether high accuracy could be achieved when training on relatively small datasets, a common constraint in clinical research.

METHODS: The model first identifies joint regions of interest (ROIs), then detects anatomical landmarks within each ROI for alignment measurement. 112 bilateral radiographs (216 annotated single-leg images) were used for training and validation, 99 (186 legs) served as the internal test set, and 25 (49 legs) as the external test set. Performance metrics included intraclass correlation coefficients (ICCs) and mean absolute error (MAE).

RESULTS: The model achieved excellent agreement (ICCs > 0.9) with manual annotations for all alignment parameters except the joint line convergence angle (ICC < 0.5), which also showed comparatively poor agreement between human readers (ICC = 0.73).

CONCLUSION: The proposed approach demonstrates high accuracy and reliability for automated lower limb alignment assessment, offering a robust solution for clinical research with limited annotated datasets.
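The abstract gives no implementation detail, so the following is only a minimal sketch of the measurement step it describes: reducing per-ROI landmark heatmaps (as produced by stacked hourglass modules) to coordinates, deriving a frontal-plane alignment angle such as the hip-knee-ankle angle from three landmarks, and comparing automated against manual measurements with MAE. Function names, the heatmap format, and the specific landmarks are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def heatmap_to_landmark(heatmap):
    """Reduce a single landmark heatmap (H, W) to (x, y) pixel coordinates
    by taking the location of the maximum activation."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array([x, y], dtype=float)

def angle_at_vertex(a, vertex, b):
    """Angle in degrees formed at `vertex` by points a and b, e.g. the
    hip-knee-ankle angle with the knee centre as the vertex."""
    u, v = a - vertex, b - vertex
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def mean_absolute_error(predicted, manual):
    """MAE between automated and manually measured alignment parameters."""
    predicted = np.asarray(predicted, dtype=float)
    manual = np.asarray(manual, dtype=float)
    return float(np.mean(np.abs(predicted - manual)))

# Hypothetical usage: in the real pipeline the heatmaps would come from the
# stacked hourglass stages applied to each joint ROI; random placeholders
# are used here so the sketch runs on its own.
rng = np.random.default_rng(0)
hip_hm, knee_hm, ankle_hm = (rng.random((64, 64)) for _ in range(3))
hip, knee, ankle = map(heatmap_to_landmark, (hip_hm, knee_hm, ankle_hm))
hka = angle_at_vertex(hip, knee, ankle)

print(f"HKA angle: {hka:.1f} deg")
print("MAE:", mean_absolute_error([178.6, 181.2], [179.0, 180.8]))
```

Agreement statistics such as the reported ICCs would be computed per alignment parameter over the paired automated and manual readings of the test sets, in the same way the MAE is computed above.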