Abstract
PURPOSE: Advances in artificial intelligence (AI), including natural language processing (NLP), neural networks (NN), and machine learning (ML), are increasingly applied across scientific fields, including the prediction of suicidal acts. This, however, raises several ethical and practical concerns. In this study, we explore the moral and technological challenges involved, as well as the potential applications of AI in suicide prevention.

VIEWS: According to the literature, AI can assist clinicians in identifying and addressing mental health issues by incorporating data from social media platforms, health records, and conversations with chatbots or between users. This information can be integrated into algorithms to develop preventive solutions. Our analysis of the reviewed articles suggests that, given sufficiently large datasets, AI systems might predict suicidal tendencies, provide faster diagnoses, and improve healthcare by giving clinicians an additional tool for identifying patients in need of assistance. However, ethical dilemmas must be addressed, including concerns over invasion of privacy, the risk of data leaks due to insufficient security, and potential algorithmic biases deriving from the datasets on which these systems are trained.

CONCLUSIONS: AI algorithms can help predict and prevent suicide by analyzing data from medical records, social media, and clinical databases. Challenges such as securing personal data and avoiding discrimination must nonetheless be addressed, and proper programming and access control are crucial for ethical use. Despite these limitations, which appear resolvable, AI remains a promising tool in efforts to reduce suicide rates.