Abstract
Digital therapeutics (DTx) and other forms of therapeutic artificial intelligence (AI) are becoming increasingly embedded in healthcare systems and everyday health practices. Unlike earlier forms of medical AI, which mainly support diagnostic decision-making, therapeutic AI systems interact continuously with patients and may influence health-related behaviour over time. These characteristics raise ethical, legal, and social implications (ELSI) that extend beyond conventional concerns about safety, efficacy, and algorithmic performance. This article examines how such concerns are translated into institutional practice by reconceptualizing ELSI as governance infrastructure. Drawing on a narrative review combined with comparative institutional analysis, it compares regulatory frameworks, policy documents, and governance arrangements in two advanced digital health jurisdictions: the European Union and Japan. The analysis identifies two distinct governance models. The European Union has developed a layered regulatory framework in which the Artificial Intelligence Act operates alongside the Medical Device Regulation, the General Data Protection Regulation, and emerging health-data governance initiatives. Japan, by contrast, governs therapeutic AI primarily through sectoral legislation centred on the Pharmaceuticals and Medical Devices Act, complemented by administrative guidance and professional mediation. These approaches illustrate different ways of embedding ELSI within digital health governance. The EU model emphasizes codified ex ante obligations and legally binding compliance mechanisms, whereas the Japanese model places greater weight on adaptive oversight and post-market learning. Effective governance of therapeutic AI, the article argues, requires institutional infrastructures capable of addressing behavioural influence, lifecycle change, and accountability across AI-enabled therapeutic systems.