Abstract
AI is reshaping medical research and healthcare delivery, yet the translation of AI innovations into clinically approved medical devices remains limited. This article explores the critical role of regulatory frameworks in bridging this translational gap, with a focus on the full-lifecycle supervision model proposed by China's National Medical Products Administration (NMPA). We first outline the inherent characteristics and risks of AI that challenge conventional evaluation approaches. By examining a patient-centered AI ecosystem encompassing academia, industry, and regulatory bodies, we highlight the misalignment between preclinical AI research output and the relatively small number of approved AI medical devices (AIMDs). In response, we provide a systematic mapping between AI characteristics and corresponding regulatory control measures, offering a point-to-point interpretation of the NMPA's approach. We argue that effective evaluation must extend beyond performance metrics to include development processes and non-functional attributes such as safety, usability, and explainability. A structured, actionable checklist is proposed to guide the comprehensive assessment of AIMDs throughout their lifecycle. This framework aims to enhance regulatory clarity, promote safe deployment, and ultimately improve public trust and patient outcomes in the era of AI-powered medicine.

Critical Relevance Statement
This framework aims to improve regulatory clarity, supporting safe deployment of AI medical devices, enhancing public trust, and ultimately optimizing patient outcomes in AI-powered healthcare.

Key Points
- Despite rapid advances in AI, the number of approved AI medical devices remains disproportionately small, revealing a translational gap.
- The study identifies several intrinsic characteristics of AI that contribute to regulatory complexity and potential safety risks in clinical practice.
- A point-to-point mapping is established between AI characteristics and regulatory control measures, providing an interpretation of the NMPA's full-lifecycle supervision model.
- A detailed, actionable checklist is proposed that extends beyond algorithmic performance, promoting transparent and reproducible AIMD development.
- The framework provides a policy-relevant pathway for harmonizing AI innovation with regulatory oversight, fostering patient-centered integration of AI into healthcare.