A year ago, I typed ‘suicide’ into Periscope and hit stream — Why aren’t we using social media to screen for mental illness and offer access to care?

08Aug16

I wasn’t suicidal — I was just curious what would happen if I labeled my video ‘suicide.’

Surely, the live-streaming app, which had been bought months before by the $10 billion company Twitter, must’ve realized someone would do this. I imagined a notification would pop up advising me whom to call if I needed help, but no, the post went through.

Suicide, evidently, draws a crowd. Instantly, fifty people joined, compared to my usual two or three viewers. I was sitting next to a male friend, and some viewers messaged homophobic slurs and goaded me on to suicide.

But it wasn’t all bad. Concerned people asked if I was okay. When I explained I had just wanted to see what would happen, many informed me that what I was doing was irresponsible; one person shared that her father had committed suicide on the exact same day a year before.

I felt terrible about that.

But more than that, I felt Periscope should’ve predicted that one day someone would try to livestream a suicide attempt. And in those crucial moments, I wanted the app to give a warning and an easy option to get help, instead of a crowd of anonymous potential bullies.

Three months ago, a French 19-year-old did it for real.

While the internet may be guilty of ignoring mental illness and victimizing vulnerable, unstable people, the true culprits are the companies ignoring these problems on their networks. We won’t be able to tame the trolls of the internet any time soon, but social media companies could easily make small changes to their services that would have enormous impacts across their huge networks. Yet they don’t.

We already have Big Brother; he just cares about advertising

You might ask: wouldn’t monitoring social media sites for mental illness violate privacy? Isn’t it a bit creepy? Social media companies like Facebook already track our every action. They already screen posts for nipples, obscenity, hate speech, piracy, and scams. They already measure, quantify, and cluster people into all kinds of categories, but only to maximize ad targeting or recommend music. Why not use these same tools to screen for mental illness and to improve mental health?

Facebook can already detect the emotional valence of Facebook posts,[1] so why aren’t they using this tool to screen for depression or mania? A friend told me in a manic phase he liked so many Facebook groups so quickly that Facebook thought he was a bot and locked the function on his account. Why didn’t it also gently suggest he might be having a mental health problem and nudge him towards getting help?
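To be clear about how little machinery this would take, here is a minimal sketch of valence-based screening. VADER is a real open-source sentiment model; the threshold, the sample posts, and the wording of the nudge are all my own illustrative assumptions, and Facebook’s actual internal tooling is of course not public:

```python
# A rough sketch of valence-based screening. VADER is a real open-source
# sentiment model; the threshold and the nudge message are illustrative.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical: a user's most recent posts, newest first.
recent_posts = [
    "I can't do anything right anymore.",
    "Another sleepless night. What's the point.",
    "Had a nice walk today.",
]

# VADER's 'compound' score ranges from -1 (most negative) to +1 (most positive).
scores = [analyzer.polarity_scores(p)["compound"] for p in recent_posts]
avg_valence = sum(scores) / len(scores)

# Illustrative cutoff: persistently negative valence triggers a gentle nudge,
# never a diagnosis.
LOW_MOOD_THRESHOLD = -0.3
if avg_valence < LOW_MOOD_THRESHOLD:
    print("It sounds like things have been hard lately. "
          "If you'd like to talk to someone, help is available: 555-5555.")
```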

We’ve known for a long time that slightly altering situations to make suicide more difficult, like installing fences on bridges or switching pills from bottles to blister packs, can dramatically influence whether people actually act in a moment of pain. We also know that news stories related to suicide that contain information about getting help reduce suicide rates, so why don’t we make our sites provide this information as well? With suicide the second-leading cause of death in Americans aged 15-24 (CDC) and the third-leading cause of death worldwide for 15-44 year-olds (WHO), small changes can have big impacts.


Public Twitter Meltdowns

Social media has raised awareness of mental illness by pushing celebrities’ erratic behavior into the limelight. In 2011, The Hangover actor Brody Stevens experienced his first manic episode and, before becoming aware of the condition, essentially live-tweeted the whole experience. Even completely ignoring the bizarre content of the tweets, can you guess when the manic episode occurred just by looking at the number of tweets posted?

[Chart: number of tweets posted over time]

Manic episodes, which often include insomnia, could also be tracked by changes in the times of day at which tweets are posted.

But in addition to changes in the quantity and timing of posts, the words themselves are smoking guns. Deleted tweets from this time period reportedly included: “I’m off Lexapro & I have a gun in my mouth! Do you believe me? #trust #me & #magnets – #5150 on roof, ok? #PositveConnection #belief #daisy”.
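None of this detection would require deep AI. Here is a rough sketch of spotting a volume spike like the one in the chart above, assuming only a list of post timestamps; the three-standard-deviation cutoff and the 1-5 a.m. insomnia window are arbitrary, illustrative choices:

```python
# Sketch: flag abrupt jumps in posting volume relative to a personal baseline.
# The z-score cutoff and the late-night window are illustrative choices.
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

def flag_volume_spikes(timestamps: list[datetime], z_cutoff: float = 3.0) -> list:
    """Return the dates whose post count is an outlier vs. the user's own history."""
    daily_counts = Counter(ts.date() for ts in timestamps)
    counts = list(daily_counts.values())
    if len(counts) < 7:          # need some history to establish a baseline
        return []
    mu, sigma = mean(counts), stdev(counts)
    return [day for day, n in daily_counts.items()
            if sigma > 0 and (n - mu) / sigma > z_cutoff]

# Insomnia could be caught the same way: compare the share of posts made
# between, say, 1 a.m. and 5 a.m. against the user's historical share.
def late_night_fraction(timestamps: list[datetime]) -> float:
    late = sum(1 for ts in timestamps if 1 <= ts.hour < 5)
    return late / len(timestamps) if timestamps else 0.0
```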

Brody later spoke about this period on the podcast Mental Illness Happy Hour:

Paul: And are you sleeping at this point?
Brody: Not much. Like four hours. And people think I’m up all night. I’m tweeting up a storm, I’m just going nuts on Twitter, so that’s why people know like something’s not right with Brody. Is it a bit for his HBO thing? Is it real? And then I mentioned something about a gun and that’s when like—that caught the eye of a lot of people. That scared people.
Paul: What did you say?
Brody: I said I had a gun, back off, leave me alone, I got a gun. Somebody said I may have said I have a gun in my mouth. I don’t know. But I didn’t have a gun. I didn’t want to hurt myself.
Paul: But you tweeted that …
Brody: I tweeted that because I was getting these calls from all of my friends worried about me. I was like, “I’m fine. Guys, I’m fine. Trust me. I’m happy.” I was out of character, you know…”

~ Excerpt from Mental Illness Happy Hour’s interview with Brody Stevens 


The person behind the words – How good does a screening tool have to be?

Brody Stevens tweeted about having a gun in his mouth, but he didn’t even own a gun and later claimed to be joking. Though he was mentally ill and in need of help at the time, this example also illustrates one of the potential problems with screening for mental illness over the internet: it’s hard to tell whether someone is telling the truth or joking. At the time of Brody’s meltdown, many people thought it was a publicity stunt or even Kaufman-esque avant-garde comedy. Making sense of someone’s posts is hard enough for humans, let alone for a computer algorithm.

A somewhat related story, told in the documentary Terms and Conditions May Apply, is of a man repeatedly googling topics like “how to murder cheating wife.” Clearly someone an algorithm should flag to be checked up on, right? It turns out he was a writer for the TV show Cold Case investigating a story. (Some of the Google searches I made while writing this article are equally damning.)

Obviously we don’t want to diagnose or arrest someone based on circumstantial evidence, but the algorithms don’t have to be perfect to be valuable as screening tools. If an algorithm says someone is likely to be suffering from a problem, the system can gently and unobtrusively offer help:

“We noticed some frightening words in your post. If you are experiencing distress or need to talk to someone, please call this hotline for help: 555-5555. Click here to ignore this message and others like it.” (I don’t know, something like that…)
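In code, the whole intervention could be as small as this sketch; the risk score is a stand-in for whatever classifier a platform actually runs, and the threshold is an arbitrary illustrative choice that errs toward fewer false positives:

```python
# Sketch of the "gentle nudge" flow. The risk score stands in for whatever
# classifier a platform actually runs; the message is only ever dismissable,
# never a diagnosis, a block, or an automatic report.
from typing import Optional

HELP_MESSAGE = (
    "We noticed some frightening words in your post. If you are experiencing "
    "distress or need to talk to someone, please call this hotline: 555-5555."
)

def maybe_offer_help(risk_score: float, user_opted_out: bool,
                     threshold: float = 0.8) -> Optional[str]:
    """Return a help message to display alongside the post, or None."""
    if user_opted_out:
        return None              # the user has dismissed these messages before
    if risk_score >= threshold:  # imperfect is fine: a false positive costs one banner
        return HELP_MESSAGE
    return None
```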

 


The other person behind the words – taking seriously the deranged

Conversely, in real life we use many non-verbal cues to assess someone’s mental state. Walking down a city street, if someone walks up to you, you’ll probably instantly assess:

  • Are they disheveled or well groomed?
  • Do they make normal eye contact?
  • What is their emotional state?
  • Are they speaking at a normal speed with typical inflection?
  • Do they appear to be on drugs?

But none of these cues are available on the internet, so the actions of people in altered mental states can go unchecked and be misinterpreted. Add to this fire the blazing speed of the internet, and people can quickly make a series of mistakes: racking up credit card debt through reckless purchases, emailing a psychotic rant to their entire address book, etc. (both common symptoms of mania, by the way).

Over the internet, we may take people’s statements and actions seriously, or attach judgments to them, whereas if we saw those same people in real life, we would realize the state they were in and respond in a completely different way.


But ignorance of and denial of mental illness on the internet go beyond the medium of text; they reflect our culture’s squeamishness with the subject. For example, two weeks ago, when fashion YouTuber Marina Joyce began acting erratically, the internet was quick to propose a number of outlandish conspiracy theories. People worried she was being held hostage rather than accept a much more likely possibility: that she might be a person experiencing a problem with drugs or mental illness and in a vulnerable state. Twitter users were so vocal about it that they even prompted police to investigate her apartment.


So, what is being done now? 

Shockingly little. For example, if I google ‘flu symptoms’ (hell, even if I misspell it and google ‘flue symptoms’), Google feeds me a featured snippet about influenza:

[Screenshot: Google’s featured snippet for ‘flue symptoms’]

But if I search ‘how to commit suicide,’ there’s no curated content; Google just feeds me the top websites with suicide information:

[Screenshot: Google results for ‘how to commit suicide’]

Luckily, in the top links to Quora and Reddit, users have taken it upon themselves to share suicide prevention information (though the information is not universal, and Google could serve the correct information based on a person’s location). However, we shouldn’t rely on the benevolence of strangers on the internet; we should make sure that access to care is offered to people at these crucial junctures.

Facebook actually implemented a new feature for suicide prevention in 2015, as I learned after originally publishing this article. Though it is triggered first by a human report and then reviewed by trained professionals, the system presents a fairly sophisticated flowchart of options that lets users seek help at a level they are comfortable with.

[Screenshot: Facebook’s suicide-prevention options]

Research into these sorts of issues is starting to get funded. For example, earlier in 2016, the University of Ottawa’s Diana Inkpen was granted $464,100 for “social web mining and sentiment analysis for mental illness detection.” Such research is part of the growing field of cyberpsychology, which essentially studies both how people’s psychology influences their behavior on computers and online networks, and how computers and networks in turn influence an individual’s psychology.

Already, research into social networks has produced algorithms that say a lot about the individual: “By looking at your Facebook profile or your Twitter feed, we can very accurately predict very intimate traits that you may not be aware you’re revealing,” said Stanford University’s Michal Kosinski to CBC Canada. By looking at Facebook profiles and friend networks, his lab’s algorithms can predict sexual orientation, IQ, and political and religious identification[2].

My impression, based on a casual interest in the topic and reading pertinent articles when I come across them, is that a lot of cyberpsychology research overlapping with psychiatry has so far focused on the prevalence of internet addiction and its relationship with depression. But it’s a new field, and we are only at the tip of the iceberg, both in studying these topics and in studying our interactions with machines. As our culture submerges into technologies like augmented and virtual reality, things are going to get increasingly complicated.

[Figure: stages of a bipolar episode, including the prodrome (source: Ejalinthara et al., 2010)]

Detecting relapses

If we decide we don’t want to screen all users for mental illness, at the very least we should be able to opt into a service that can detect relapses. Illnesses such as bipolar disorder, depression, and the psychosis of schizophrenia have a periodic nature. Sufferers may have stretches of relatively normal mood and mindset and then abruptly descend back into mental illness. Sometimes this includes the onset of severe psychosis, with a loss of rational thinking that can be both extremely dangerous and prevent sufferers from having insight into their own condition.

A voluntary, opt-in service could monitor a patient’s social network activity and alert them, and/or their friends and family, to abrupt changes in behavior. Perhaps with sophisticated algorithms we could even detect disorders in their prodromal stages, when there are symptoms but they are not yet disruptive, such as the hypomania that can precede manic psychosis.
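As a sketch of how such a monitor might work (everything here, from the weekly post-count feature to the z-score cutoff and the notification step, is an illustrative assumption, not a real service):

```python
# Sketch of an opt-in relapse monitor. The feature (weekly post count), the
# z-score cutoff, and the "notify a contact" step are all illustrative
# assumptions; a real service would use richer features and real consent flows.
from statistics import mean, stdev

def weekly_deviation(history: list[float], this_week: float) -> float:
    """Z-score of this week's activity against the user's own past weeks."""
    mu, sigma = mean(history), stdev(history)
    return (this_week - mu) / sigma if sigma > 0 else 0.0

def check_in(history: list[float], this_week: float,
             contacts: list[str], z_cutoff: float = 2.5) -> None:
    z = weekly_deviation(history, this_week)
    if abs(z) > z_cutoff:  # flag surges (mania-like) and drop-offs (withdrawal) alike
        for contact in contacts:
            print(f"Alert {contact}: activity changed sharply (z = {z:.1f}). "
                  "This is a prompt to check in, not a diagnosis.")

# e.g. stable weekly post counts for a quarter, then a sudden surge:
check_in(history=[20, 25, 18, 22, 24, 19, 21, 23, 20, 22, 25, 21],
         this_week=90, contacts=["trusted-friend@example.com"])
```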

Final Conclusion

I’m sure other people have written about these topics before, but it’s been relatively hard for me to find much in my searches, so if you know about these subjects, please let me know, because they really fascinate me.

For example:

  • Do you know of any interesting stories related to this topic you could send my way?
  • Have you or friends had personal experiences with mental illness that manifested in a strange way through the internet?
  • Do you have your own ideas about how to screen for mental illness?
  • Or do you believe strongly it’s unethical to screen?

Finally, I researched and wrote this for free out of the goodness of my heart (and an insatiable need for the approval of strangers), so if you liked this article, please share it or follow me to get notified about future articles. And if you didn’t like the article, feel free to write a comment about why it was a piece of shit.

Footnotes (Tangents/Bad Jokes)

[1] And could alter how often positive or negative posts were seen to manipulate our emotions.

[2] If I recall correctly, one of these studies showed that a Facebook interest in curly fries was the most correlated interest with having a high IQ.



8 Responses to “A year ago, I typed ‘suicide’ into Periscope and hit stream — Why aren’t we using social media to screen for mental illness and offer access to care?”

  1. The reality is that there is inadequate mental health infrastructure to deal with social media referrals. That is the result of 30 years of rationing of mental health and addiction resources. There is no additional capacity beyond the stereotypical crisis call and ER triage. The resources are so limited that the additional noise of social media (trolls, pranksters, experimenters, etc) would probably stress those same resources past the breaking point. The thought experiment that any professional can do is ask yourself if there is any place that you would not hesitate to take a family member with this problem.

    If there isn’t how would any system be able to handle any number of social media referrals?


• Yeah, access to mental health care, particularly psychiatric care, is a big problem.

      I’m not proposing a system where something you could do online would lead to automatic entry into the mental health system, but rather that certain actions would be accompanied with messages offering access to emergency services.

      “The thought experiment that any professional can do is ask yourself if there is any place that you would not hesitate to take a family member with this problem.” Can you explain this point more fully?


      • “Can you explain this point more fully?”

        Sure – as a psychiatrist it is fairly common for friends and acquaintances to ask me where they would bring someone in a mental health emergency. I live in a large metro area and know all of the psychiatric hospitals and emergency departments. I know because of the rationed resources that no matter where I send them they will be spending hours to days in an emergency department depending on the bed situation. I know the same thing will happen if I advise them to call a crisis hotline and they get taken to the ED. I know that in many of those settings the environmental conditions (either the physical plant or the way the interpersonal milieu is managed) are so poor that the patient and the family will want to flee as soon as they can and they will be discouraged from seeking further treatment. I know that if the person needs sustained supervised treatment or electroconvulsive therapy it may be impossible to get. I know that if there is an associated drug or alcohol problem they will be very lucky to get adequate detox services – and on and on and on……..

        Services that are rationed and managed by bean counters and not medical or psychiatric staff are so suboptimal – at this point I guess people have just gotten used to the idea. Contrast that to the care that ANY middle-aged man or woman with chest pain gets when they walk into the ED.

        I understand what you are saying about your hypothetical – but a warning on a social media site is about all that can be offered. There are no resources equivalent to what is available for other emergencies like chest pain or even the 7 warning signs of cancer.

        We need to let people know that.


• Yeah, that’s a really good point. We definitely need higher quality, more immediate care for when people first seek help, potentially the lowest point in their entire life.

        It’s a big problem and maybe one that is more important and needs to be addressed first, but I think we should try to think about these issues from all angles.

        Thanks for chiming in with your expertise. I’ll try to think more about this.


  2. Bipolar 1 with psychotic features is my diagnosis. If I walk into a hospital and say I have the desire to kill myself, I’m generally sent home. If I say I have a plan, I go to a mental hospital for a few days and then go home. If I walk into a psychiatric treatment clinic and ask for help I’m given an appointment 3 months later which then costs far more than my meager income will allow. Social media is a minor thing, perhaps catching the one in a million guy, the untreated person. So many of us know we need the help, but also know that, between the cost and the wait, it’s just pointless.
    So yeah, great. Let’s point people toward help. How about we expand the available help so that we can treat those who simply can’t afford it or can’t wait for an appointment?
    I’m one of the lucky ones. I have tangible reasons to live, tangible reasons not to act violently towards others.
    It would be nice to be able to be seen quickly when I get to the point I’m whiteknuckling through suicidal, homicidal, or just self-destructive ideation, instead of riding along with my brain as it follows the fantasies to their nightmarish end.
    For the record, I’m tired. So fucking tired. But, I’m still here. I hope your article helps someone.


• Yeah, completely agree. Rather than some people just falling through the cracks, it seems that, to get help, people often have to parkour over concrete barriers and jump across chasms. And we ask people to do this at a time in their lives when they are seeking help, often because they are having problems handling their life. Without insurance, a support network, and an understanding of medical institutions, the experience can be disorienting and dismaying…



