Facebook has announced that it is rolling out a “proactive detection” artificial intelligence (AI) tool that looks for signs that Facebook users might be suicidal. When the AI detects that someone might be at risk of suicide, it will surface mental health resources or alert first responders. Facebook had previously been testing the feature and says it has already led to more than 100 “wellness checks” for suicidal individuals.
The move has renewed discussion among developers and software users about what constitutes appropriate use of AI. The suicide prevention capabilities will not be rolled out in the European Union, where this use of the technology would violate privacy law. Elsewhere in the world, Facebook users cannot opt out of the feature, which has prompted some concerns.
However, Facebook CEO Mark Zuckerberg has expressed enthusiasm for expanding the technology’s use, writing, “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”
Facebook’s chief security officer Alex Stamos responded to concerns about the AI use by tweeting, “The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in.”