The Forefront team at the Facebook headquarters in March 2017.

Forefront Suicide Prevention among partners shaping Facebook’s global suicide prevention AI upgrade

Nearly three years after Forefront Suicide Prevention began its historic collaboration with Facebook, a new era of suicide prevention tools is emerging. Yesterday, Facebook founder Mark Zuckerberg announced a more proactive upgrade to the company's artificial intelligence (AI), mentioning Forefront Suicide Prevention among the suicide prevention partners and first responders that have helped shape different iterations of the social networking giant's life-saving tools and protocols.

The upgraded technology is designed to scan posts for patterns of suicidal thoughts and, depending on the situation, send mental health resources to the user at risk or their friends, or contact local first responders.

“Suicide is one of the leading causes of death for young people, and this is a new approach to prevention,” Zuckerberg announced. “We’re going to keep working closely with our partners at SAVE, National Suicide Prevention Lifeline, Forefront Suicide Prevention, and first responders to keep improving.”

In March of this year, several Forefront staffers visited Facebook's Menlo Park, CA headquarters to provide another round of consultation, shortly after the company announced new suicide prevention tools and resources and its intent to test AI-based pattern recognition for detecting worrisome posts.

“I think a lot of people would be surprised to learn just how sincere Facebook is about suicide prevention. I know I was,” Phoebe Terhaar, a program manager for Forefront Suicide Prevention in the Schools, said of her experience working with Facebook’s suicide prevention project team on the tools. “Any time we can improve the odds of letting a person who may be in crisis know that help is available and that they really do matter is a move in the right direction.”

Previously, the tools relied on user reports to detect posts that may indicate suicidal thoughts. The new AI upgrade proactively flags posts to human moderators around the globe and around the clock, which is believed to speed up help response times. Facebook will now also analyze all types of content globally, except in the European Union, where stringent privacy laws apply.

“For better and for worse, social media has become a place where people reach out for help and sometimes share their suicidal thoughts and plans. Social media companies like Facebook have a responsibility to be proactive in this space,” said Forefront faculty director Jennifer Stuber. “There will always be concern about the misuse of AI technology in this realm, but with over 800,000 lives lost per year to suicide around the world, careful application of AI can’t come quickly enough.”

When Forefront first provided consultation to Facebook in early 2015, the social network had 1.39 billion global users. Now reaching 2.1 billion users, Facebook’s impact on social issues like suicide and mental health will continue to surge.

“This innovation will not only save lives, but will teach us more about help-seeking behavior and constructive responses to psychological pain,” said Stuber.

For more information on how Facebook is expanding its use of proactive detection, improving how it identifies first responders, dedicating more reviewers, and solidifying its decade-long commitment to suicide prevention, read “Getting Our Community Help In Real-Time”.

Worried about someone you know on Facebook? Visit Facebook’s suicide prevention FAQ page or Family Safety Information for more information on what to do to keep loved ones safe. Information, support, and referrals are also available by calling the National Suicide Prevention Lifeline at 800-273-8255 (TALK) or by texting the Crisis Text Line at 741741. In an emergency, call 911 or go immediately to the nearest hospital emergency room.