Facebook Using AI to Detect Potential Suicide

Suicide: just the word tugs at the heartstrings; it's such a tragic topic. In the past few months, three separate suicides were broadcast on Facebook Live, including that of a young teen whose tragic video went viral. As technology advances, serious events like these are becoming increasingly public.

On December 30th, a 12-year-old girl, Katelyn Nicole Davis, live-streamed her suicide by hanging in the front yard of her Cedartown, Georgia, home. It came after Davis said she had been physically and sexually abused by a family member.

The 40-minute live stream was shared all over the internet, and it unfortunately took Facebook two hours to remove, sparking controversy. Now, after multiple live-streamed suicides, Facebook is changing its policy.

It’s so important that we as a society reach out to those around us who show suicidal signs, but when we delegate that to technology, as Facebook is doing, things get sticky.

In a recent 5,500-word report, Mark Zuckerberg said:

“Based on feedback from experts, we are testing a streamlined reporting process using pattern recognition in posts previously reported for suicide. This artificial intelligence approach will make the option to report a post about “suicide or self-injury” more prominent for potentially concerning posts like these.

We’re also testing pattern recognition to identify posts as very likely to include thoughts of suicide. Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet.”
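To picture what that kind of pattern recognition might look like, here is a minimal sketch. It is not Facebook's actual system; the phrase list, weights, threshold, and function names are all invented for illustration. It simply scores a post against phrases like those found in previously reported posts and decides whether to surface the "report suicide or self-injury" option more prominently:

```python
# Toy illustration only -- NOT Facebook's system. The phrase list, weights,
# and threshold below are hypothetical, made up for the sake of the example.
import re

# Hypothetical phrases resembling language from posts previously reported for suicide.
CONCERNING_PATTERNS = {
    r"\bkill myself\b": 3,
    r"\bend it all\b": 3,
    r"\bno reason to live\b": 2,
    r"\bwant to die\b": 2,
    r"\bgoodbye forever\b": 1,
}

REPORT_PROMPT_THRESHOLD = 3  # score at which the report option would be surfaced


def score_post(text: str) -> int:
    """Sum the weights of every concerning phrase found in the post."""
    text = text.lower()
    return sum(weight for pattern, weight in CONCERNING_PATTERNS.items()
               if re.search(pattern, text))


def should_surface_report_option(text: str) -> bool:
    """Decide whether to make 'report suicide or self-injury' more prominent."""
    return score_post(text) >= REPORT_PROMPT_THRESHOLD


if __name__ == "__main__":
    posts = [
        "Ugh, Monday again. I want to die, lol.",
        "I have no reason to live and I just want to end it all.",
        "Great game last night!",
    ]
    for post in posts:
        flagged = should_surface_report_option(post)
        print(f"{'FLAG' if flagged else 'ok  '} | score={score_post(post)} | {post}")
```

A real system would rely on a trained model rather than a hand-written phrase list, but even this toy version makes the problem obvious: a sarcastic "I want to die, lol" earns points exactly the same way a genuinely alarming post does.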

Maybe Facebook has people’s best interests at heart, but looking at its track record, I would say that’s pretty unlikely. Trust in Facebook has been eroding for years: in 2014, the company was exposed for running a study in which it manipulated users’ moods, and things have only gone downhill from there.

Facebook is testing AI in the same way to identify terrorists. The company says it will take years to fully train the algorithms, and the tools are currently being tested only in the United States. Letting an AI that isn’t fully developed decide who is and who isn’t vulnerable to suicide, or who might be a terrorist, is dangerous in and of itself.

For an example of how badly an AI can go off the rails, look no further than Microsoft’s Twitter chatbot experiment Tay, which became a Nazi supporter in less than 24 hours.

Until now, Facebook relied on a user’s friends to click the report button when they saw a suicidal post. Now an AI will be doing that job.

What happens if the AI flags a terrorist threat that doesn’t exist? What if a person with a dry sense of humor jokingly posts that they want to ‘kill themselves’? Will these people be put on a list and investigated? What are the ramifications? So many unanswered questions come to mind.

What do you think about giving an AI the power to comb through Facebook comments and posts? Do you think it will stay on its assigned task or will it end up being another disaster like Tay? Do you think Facebook has good intentions and will use the data they collect for the right purposes? Let me know in the comments.

Works Cited

RT. “Facebook Testing AI to Spot Potentially Suicidal Members.” RT, 2017. http://bit.ly/2m05UDq

RT. “12yo Livestreams Her Suicide, Georgia Cops Struggling to Suppress Tragic Video.” RT, 2017. http://bit.ly/2mjX6JM

Facebook. “Building a Safer Community With New Suicide Prevention Tools.” Facebook Newsroom, 2017. http://bit.ly/2mjVZtD

Mark Zuckerberg. “Building Global Community.” Facebook, 2017. http://bit.ly/2lzqZkF

Robert Booth. “Facebook Reveals News Feed Experiment to Control Emotions.” The Guardian, 2014. http://bit.ly/2myuj4I