Although many of us may consider such a way out of life's problems at least once, we can't deny the horror of this act, especially when it is committed by a youngster. It doesn't seem to be our religious convictions or cultural background that give us those shivers when we read news about a suicide. I think it is simply our nature that protests against the very idea of self-destruction.
However, the statistics continue to shock. Suicide is reported to be the second leading cause of death among adolescents in the United States, South-East Asia, Europe, and Africa; the first is road injury. In May 2017 the World Health Organization published figures that are hard to take on trust at first: more than 1.2 million teenagers die every year around the globe, and about half of these deaths are self-inflicted. Yet all of them are preventable.
While policy-makers implement new road rules and driving regulations, and doctors search for a cure for HIV, experts in Artificial Intelligence have joined the ranks of specialists developing effective measures to prevent suicide.
What's more, Facebook has recently responded to livestreams of its users' suicide attempts by applying AI. Believe it or not, Mark Zuckerberg commented that a special algorithm had already saved more than 100 lives.
So, will high technology be able to shake the WHO’s statistics and help us fight depression and suicidal ideation? Facebook and Google believe it will!
Artificial Intelligence Algorithm to Monitor Users’ Suicidal Thoughts
Facebook is now using such an algorithm to check the whole network for hints that a user intends to end their life. The AI scans social media posts, the comments on them, and even Messenger chats to discover patterns that may be evidence of suicidal or other harmful thoughts.
Although it's been noted that people at real risk of committing suicide tend to use words like "pills" or "bridge" rather than the term itself, it shouldn't be a problem for the AI to understand the general context and "read between the lines".
When the algorithm flags anything worrying, it notifies community managers, who continue the investigation and identify users who may be at risk of suicide. Such users are then offered to contact a helpline, talk to a friend, or look through specially selected suicide-prevention online resources. The icing on the cake is that users can't simply dismiss the information window: they have to choose one of the options.
At the same time, if the AI classifies a case as "less urgent", it recommends that friends contact the person in need and discuss the issue with them. Also, with the help of a specially launched tool, friends are now able to report posts that suggest suicide attempts. In turn, Facebook provides prevention resources in the news feed. As the company partners with counseling services and self-help organizations, the support is truly professional. Thus, both the potential victim and the reporter can instantly message one of the counseling lines to get necessary and timely assistance or advice.
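Facebook hasn't published the details of its algorithm, but the triage flow described above (score a post for risk, escalate urgent cases to human reviewers, route less urgent ones to peer support) can be sketched in a few lines of Python. Every phrase, weight, and threshold in this toy example is invented purely for illustration.

```python
# Toy sketch of the triage flow described above. Facebook has not published
# its actual model; all phrases, weights, and thresholds here are invented
# purely for illustration.

RISK_PHRASES = {
    # Indirect wording (like "pills" or "bridge") can signal risk even
    # when the word "suicide" never appears in the post.
    "end it all": 3,
    "pills": 2,
    "bridge": 2,
    "can't go on": 2,
    "hopeless": 1,
}

def risk_score(text: str) -> int:
    """Sum the weights of any risk phrases found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)

def triage(text: str) -> str:
    """Route a post to one of the response tiers the article describes."""
    score = risk_score(text)
    if score >= 3:
        return "notify_community_managers"   # urgent: escalate to human review
    if score >= 1:
        return "suggest_friends_reach_out"   # less urgent: peer support
    return "no_action"

print(triage("I can't go on, took all my pills"))  # notify_community_managers
```

A real system would of course use a trained language model rather than a keyword list, precisely so it can "read between the lines" as the article suggests; the sketch only shows the routing logic.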
Plus, Immediate Support from Google and Siri
Google is also concerned about the problem. The company has designed a knowledge panel along with a private depression screening test that launches when a user searches for the symptoms of clinical depression or any information on suicide. However, the system is still imperfect: along with the contacts of support organizations and counseling lines, it sometimes surfaces "suicide guides". Hopefully, Googlers will fix that.
Siri from Apple seems a bit smarter. If you ask it about depression or suicide, it immediately assumes you may need a suicide prevention center and offers you a list of helpful resources.
Any Controversy and Criticism?
Nothing goes without these two. Privacy is the greatest concern when it comes to Facebook's AI algorithm and Google's panel.
Speaking of Facebook, the social network has certain regulations on the content of text and video posts, and suicide is one of the topics its administration can censor. However, in light of the recent live broadcasts of youngsters' suicide attempts, Mark Zuckerberg admitted that the company would take the risk of allowing such livestreams in order to give a person's family and friends an opportunity to intervene and help before it's too late.
Nevertheless, that's only the tip of the iceberg. Currently the algorithm isn't available in Europe, to say nothing of countries where data privacy rules are weaker than in the United States. Besides, the AI can so far recognize only English-language patterns of suicidal thought. What about Spanish or Russian ones? The Spanish- and Russian-speaking communities in the US are quite large. Will Facebook be able to protect them?
Moreover, the algorithms social media companies use to monitor the mental health of their users should be studied, regulated, and updated very carefully, so that they never end up acting contrary to their original purpose.
Google has also promised that it won't link users' identities to their test results. Instead, it is going to collect large amounts of anonymized data, using it to improve the user experience and to spot signs of depression among its users.
A Word of Conclusion
AI still faces multiple challenges. Nevertheless, Mark Zuckerberg and his team are right when they claim that the algorithm is much better at spotting suicidal thoughts online than a human is. Set within a sound scientific and ethical framework, and by connecting people with experienced counselors, AI really can contribute to suicide prevention on a global scale.
Keep your fingers crossed.