Abstract
In this account of healthcare AI (HAI), we discuss barriers and solutions to model implementation, using model-driven suicide prevention as an example. HAI models hold promise for preventing adverse outcomes by enabling early identification of, and intervention for, high-risk patients. Implementing such models poses challenges, and proven solutions are few; these challenges are magnified in settings where problems are stigmatized, ethically fraught, and liability prone. Clinical domains such as HIV, suicide, sexually transmitted infections, and substance use disorders are prominent examples. The goal of this case study is to share key barriers and suggested mitigation strategies for successfully translating HAI into practice across four domains: data, algorithmic performance, implementation, and evaluation. We illustrate choices that might aid translation using a real-world example implemented at a major academic medical center.