Abstract
INTRODUCTION: Despite digital advances in healthcare, clinical neuropsychology has been slow to adopt automated assessment tools. Automated scoring of the Rey-Osterrieth Complex Figure Test (ROCFT) could enhance efficiency and consistency in evaluating quantitative and qualitative aspects of the figure. However, its clinical utility and accuracy compared with traditional scoring methods remain unclear. OBJECTIVE: To evaluate whether digital automated scoring systems provide accuracy, reliability, and clinical utility equal to or superior to traditional clinician-driven scoring of the ROCFT. METHODS: A rapid review following the PRISMA guidelines was conducted. PubMed and Web of Science were searched from January 1, 2015, to October 12, 2025, for recent studies benchmarking automated scoring against human raters. RESULTS: The review identified five articles: three using deep-learning approaches and two using rule-based algorithms. Together, they analysed more than 41,000 ROCFT drawings with diverse capture methods. Overall, well-designed automated systems can achieve, and in some cases surpass, expert-level performance. The algorithmic approaches demonstrated close agreement with trained raters and reproducible outputs, with discrepancies primarily emerging in atypical drawings. Deep-learning models achieved high concordance with expert scoring when image quality was adequate and training data were well-labelled. However, performance varied with data quality and the distribution of scores. DISCUSSION AND CONCLUSION: This review demonstrates that digital automated ROCFT scoring achieves accuracy and reliability comparable to traditional clinician ratings, with well-designed systems occasionally surpassing human performance. Expected advances in artificial intelligence and automation could further enhance clinical neuropsychology into the 21st century.
However, clinical implementation faces several constraints: heterogeneous training datasets, limited evidence of usefulness across disorders, and a lack of independent validation. Automated scoring should thus augment, not replace, clinical judgement. To address these limitations, future research should strive to establish disorder-specific norms, conduct independent validation in real-world clinical settings, and develop human-in-the-loop pipelines that combine automated efficiency with clinical oversight. Responsible implementation will require explicit governance frameworks that regulate data use and sharing and address privacy and 'mental data' concerns. These advances would strengthen the evidence base and utility of automated ROCFT scoring and support its responsible integration into neuropsychological practice.