27 November 2017
Facebook said Monday it would use artificial intelligence (AI) to identify posts and live videos in which users are expressing suicidal thoughts.
Why it matters
The company has been under pressure for live-streaming violent and graphic incidents, and has been accused of lacking the human resources to effectively moderate live content on its platform.
Facebook’s Guy Rosen said the company would use “pattern recognition technology to help identify posts and live streams as likely to include thoughts of suicide.”
That technology will launch outside the U.S. and, according to Facebook, will ultimately be used around the world, with the exception of the European Union.
The company will also assign more human employees to vet reports of possible self-harm on the platform.
Here’s how it works
Facebook’s new “proactive detection” AI will scan all posts for patterns associated with suicidal thoughts and, when necessary, send mental health resources to the at-risk user or their friends, or contact local first responders.
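Facebook has not published details of its model, but the simplest form of pattern recognition over text can be sketched as a phrase matcher that flags posts for human review. The patterns, function names, and threshold logic below are purely hypothetical illustrations, not Facebook's actual system:

```python
# Toy sketch of pattern-based flagging for human review.
# NOT Facebook's system; the phrases below are hypothetical examples.
import re

RISK_PATTERNS = [
    r"\bwant to (die|end it)\b",
    r"\bkill myself\b",
    r"\bno reason to (live|go on)\b",
]

def flag_post(text: str) -> bool:
    """Return True if the text matches any hypothetical risk pattern,
    meaning it would be routed to a human reviewer."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RISK_PATTERNS)

print(flag_post("Some days I feel like I want to end it"))  # True
print(flag_post("Great game last night!"))                  # False
```

In practice, a production system would use a trained statistical classifier rather than a fixed phrase list, and would escalate flagged content to trained reviewers rather than act automatically.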