Abstract
PURPOSE: The purpose of this study was to develop and validate an artificial intelligence (AI) model for detecting choroidal masses and measuring their dimensions on B-scan ophthalmic ultrasound images.

METHODS: The study included 1822 still images and 130 cine loops of choroidal masses. For external validation, 180 additional still images were included, along with 374 control images to assess specificity. A two-stage U-Net-based architecture was trained to detect masses and measure their dimensions. For cine loops, the algorithm automatically selected the frame with the largest mass cross-sectional area.

RESULTS: In the internal subset, detection accuracy was 94.5% with a false-positive rate of 11.7%. For apical height, the mean absolute error (MAE) was 0.42 ± 0.58 mm (R² = 0.87), with 94.2% of cases within 1 mm of expert annotations. For basal diameter, the MAE was 1.02 ± 0.99 mm (R² = 0.74). In the external validation subset, detection accuracy was 83.9%, the false-positive rate was 4.2%, and millimeter-level precision was maintained for both apical height and basal diameter. Among cine loops, masses were detected in 99.2% of cases, with correct spatial localization in 93.1%. Best-frame analysis yielded apical height within 1 mm of the reference in 68.2% of cases (MAE 1.10 ± 1.36 mm) and basal diameter with an MAE of 1.65 ± 1.84 mm.

CONCLUSIONS: Deep learning provides reproducible, millimeter-level measurements of choroidal mass dimensions from still images and cine loops, supporting its potential use for monitoring choroidal tumors.

TRANSLATIONAL RELEVANCE: AI can generate precise measurements of choroidal tumors, enabling clinically actionable monitoring from ophthalmic ultrasound.
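The best-frame rule for cine loops described above (selecting the frame whose segmented mass has the largest cross-sectional area) can be sketched as follows. This is a minimal NumPy illustration, not the study's actual implementation: the function name, mask format, and toy data are assumptions.

```python
import numpy as np

def select_best_frame(masks: np.ndarray) -> tuple[int, int]:
    """Return (frame_index, area) of the frame with the largest mass area.

    `masks` is a hypothetical stack of binary segmentation masks with shape
    (n_frames, H, W), where 1 marks mass pixels. Area is the pixel count,
    which a real pipeline would convert to mm² via the probe's calibration.
    """
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1)
    best = int(np.argmax(areas))
    return best, int(areas[best])

# Toy cine loop: three 4x4 frames with mass areas 2, 5, and 3 pixels.
loop = np.zeros((3, 4, 4), dtype=np.uint8)
loop[0, 1, 1:3] = 1          # frame 0: area 2
loop[1, :, 1] = 1            # frame 1: area 4 ...
loop[1, 1, 2] = 1            # ... plus 1 -> area 5
loop[2, 2, 0:3] = 1          # frame 2: area 3

print(select_best_frame(loop))  # → (1, 5)
```

In practice the masks would come from the second-stage segmentation network, and the selected frame would then be passed to the measurement step for apical height and basal diameter.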