Abstract
PURPOSE: Glaucoma is a leading cause of irreversible blindness worldwide, necessitating precise visual field (VF) assessments for effective diagnosis and management. Accurate digitization of VF reports is critical for maximizing the utility of the data gathered during clinical evaluations.

METHODS: To address the data-accessibility challenges in digitizing VF reports, we developed a lightweight convolutional neural network (CNN) framework. Using a decade-long dataset of 15,000 reports, we preprocessed portable document format (PDF) files and standardized the extracted textual data into 48 × 48 pixel images. To enhance the model's generalization, we incorporated a variety of font types into the dataset.

RESULTS: The proposed CNN model achieved 100% accuracy in extracting numerical values and over 98.6% accuracy in metadata recognition. Post-processing correction using keyword mapping further improved metadata reliability, effectively addressing errors caused by visually similar characters. The model was markedly more efficient than manual data entry, significantly reducing processing time while maintaining near-perfect accuracy.

CONCLUSIONS: These findings highlight the effectiveness of our AI-driven digitization method in accurately interpreting Humphrey VF images. The framework provides a reliable solution for digitizing complex VF reports, thereby facilitating enhanced clinical workflows.

TRANSLATIONAL RELEVANCE: The implications of this study extend to streamlined clinical workflows and AI-based report interpretation. By enabling comprehensive trend analysis of VF changes, our model represents a significant advancement in glaucoma care, demonstrating the potential of AI-driven technologies to enhance precision medicine and improve patient outcomes.
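The keyword-mapping correction mentioned in the RESULTS can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the keyword list and similarity cutoff are assumptions, and the fuzzy-matching approach (stdlib `difflib`) stands in for whatever mapping the authors used.

```python
import difflib

# Illustrative metadata keywords that might appear on a Humphrey VF
# report; this list is an assumption, not taken from the study.
KEYWORDS = ["Fixation Losses", "False POS Errors", "False NEG Errors",
            "Fovea", "MD", "PSD", "VFI", "GHT"]

def correct_token(token, keywords=KEYWORDS, cutoff=0.6):
    """Map an OCR-recognized token to the closest known keyword.

    Visually similar characters (e.g. 'O' read as '0', 'l' as '1')
    produce near-miss strings; fuzzy matching snaps them back to the
    intended keyword when similarity exceeds `cutoff`, otherwise the
    token is returned unchanged.
    """
    matches = difflib.get_close_matches(token, keywords, n=1, cutoff=cutoff)
    return matches[0] if matches else token

# Example: '0' misread in place of 'O'/'o'
print(correct_token("Fixati0n L0sses"))  # -> Fixation Losses
```

Because the set of valid metadata labels on a VF report is small and fixed, even a simple closed-vocabulary correction like this reliably resolves character-level confusions that a pure image classifier cannot.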