Legal, ethical, and wider implications of suicide risk detection systems in social media platforms



Abstract

Suicide remains a problem of public health importance worldwide. Cognizant of the emerging links between social media use and suicide, social media platforms such as Facebook have developed automated algorithms to detect suicidal behavior. While seemingly a well-intentioned adjunct to public health, this approach raises several ethical and legal concerns. For example, the role of consent in using individual data in this manner has received only cursory attention. Social media users may not even be aware that their posts, movements, and Internet searches are being analyzed by non-health professionals, who have the decision-making authority to involve law enforcement upon suspicion of potential self-harm. Failure to obtain such consent presents privacy risks and can lead to exposure and wider potential harms. We argue that Facebook's practices in this area should be subject to well-established protocols. These should resemble those used in human subjects research, which upholds standardized, agreed-upon, and well-recognized ethical practices built on generations of precedent. Before sensitive data are collected from social media users, an ethical review process should be carried out. A fiduciary framework resonates with the emergent roles and obligations of social media platforms to accept greater responsibility for the content shared on their services.
