Abstract
Violence risk assessment is a critical component of psychiatric practice, with significant clinical, ethical, and legal implications. Psychiatric patients deemed at high risk of violence often face interventions including restraints, intramuscular injections, and involuntary hospitalization. Agitated and aggressive behaviours from patients have been linked to high hospital costs through increased length of stay, readmissions, increased medication use, staff injury, and the need for high-acuity monitoring. Traditional risk assessment tools can be time-intensive and generalize poorly to civil populations. Recent advances in artificial intelligence (AI) have the potential to enhance the precision of violence risk assessments. Although AI can address the technical limitations of risk assessment, its implementation will raise new ethical and legal challenges. In psychiatry, AI-assisted violence risk assessment intersects with mental health law, particularly the criteria for preventive detention and the ethical boundaries of AI-driven decisions. Early concerns have been raised about racial bias, lack of transparency, accountability, and disruption to current practices in psychiatric care. To our knowledge, there have been no efforts to synthesize the ethical and legal implications of this particular use case. To address these gaps, we conducted a scoping review to map the literature on the ethical and legal considerations of AI in violence risk assessment in acute psychiatry.