An artificial intelligence (AI) program was trained to scan Facebook posts for "linguistic red flags" that could signal depression, identifying the condition up to three months before health services did, a US study has found.
In initial tests the machine-learning algorithm performed as well as the existing screening questionnaires used to identify depression – but with the advantage of being able to run "unobtrusively" in the background, the authors note.
Recent backlash against platforms such as Facebook, from ministers and parents worried about damaging effects on children's wellbeing, has led to calls for tighter age and usage limits.
But the US researchers behind the new tool said the wealth of information on social media pages could one day be used to help screen for otherwise unnoticed mental health conditions.
These early warning signs include mentions of loneliness or isolation, with words such as "alone", "ugh" or "tears", as well as the time and length of posts. Other clues include an increased use of first-person pronouns, such as "I" and "me", which the authors write "suggest preoccupation with oneself" in public posts.
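To make the idea concrete, here is a minimal, purely illustrative sketch of counting markers like these in a post. The word lists and feature names are hypothetical stand-ins; the study's actual markers were learned from data, not hand-picked.

```python
import re

# Hypothetical marker lists for illustration only.
LONELINESS_WORDS = {"alone", "ugh", "tears"}
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def marker_counts(post: str) -> dict:
    """Count simple linguistic markers in a single post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    total = len(tokens) or 1  # avoid division by zero on empty posts
    return {
        "loneliness_hits": sum(t in LONELINESS_WORDS for t in tokens),
        # Proportion of first-person pronouns, a crude proxy for self-focus.
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / total,
        "length": total,
    }

print(marker_counts("Ugh, I feel so alone tonight... just me and my thoughts"))
```

A real system would feed features like these, alongside posting time and length, into a trained statistical model rather than using raw counts directly.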
“Social media data contains markers that are similar to genomes,” said Dr. Johannes Eichstaedt, one of the study’s senior authors and co-founder of the World Well-Being Project at the University of Pennsylvania.
"With a method very similar to that used in genomics, we can comb social media data to find these markers.
"Depression appears to be something quite detectable in this way; it really changes people's use of social media in a way that something like skin disease or diabetes doesn't."
This screening method could increase the likelihood of the condition being diagnosed, and treated, early on, minimizing the impact of depression on education, work and relationships.
In a study published in the Proceedings of the National Academy of Sciences on Monday, Dr. Eichstaedt and co-authors from the Penn Medicine Center for Digital Health used data from the Facebook profiles of 683 people who had agreed to share their digital archives.
The group included 114 people who had been diagnosed with depression; each was matched with five people without a depression diagnosis to test the accuracy of the program.
By analyzing 524,292 Facebook posts made by participants in the years leading up to a depression diagnosis, and comparing them with those of the control subjects, the team identified "depression-associated language markers".
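One common way to surface candidate markers from such a comparison is to rank words by how much more often they occur in the case group than in the control group. The sketch below uses smoothed log-odds on toy data; it is an assumption for illustration, not the study's actual method, and the example posts are invented.

```python
from collections import Counter
import math

def candidate_markers(case_posts, control_posts, top_n=3):
    """Rank words by how much more often they appear in case posts
    than in control posts, using add-one-smoothed log-odds."""
    case = Counter(w for p in case_posts for w in p.lower().split())
    ctrl = Counter(w for p in control_posts for w in p.lower().split())
    n_case, n_ctrl = sum(case.values()), sum(ctrl.values())
    vocab = set(case) | set(ctrl)
    score = {
        w: math.log((case[w] + 1) / (n_case + len(vocab)))
           - math.log((ctrl[w] + 1) / (n_ctrl + len(vocab)))
        for w in vocab
    }
    return sorted(score, key=score.get, reverse=True)[:top_n]

# Invented example posts; words like "alone" rank high because they
# appear only in the case group.
print(candidate_markers(
    ["i feel so alone", "tears again ugh"],
    ["great game last night", "lunch with friends"],
))
```

In practice the study worked with far richer features than single words, but the case-versus-control comparison is the core idea.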
Primed with these markers, the program could identify warning signs of depression in an individual's posts up to three months before the condition was recorded in their medical records.
The study found the program was most accurate when using social media signals from the six months before a depression diagnosis, and could help flag depression in at-risk people, especially when combined with other forms of digital screening.
Although this is a small proof-of-principle study, the approach could be refined in several ways, such as by incorporating phone usage data or facial recognition software to analyze images posted on Facebook, the authors added.
"If you want a model you can use at a meaningful scale, you want to minimize the number of obstacles in the data you use."
"You want it to fit into ordinary conversation, and for the model to draw on natural interactions to pick up on individual circumstances."
The initial signs are positive, but this kind of technology could prove detrimental to the public if it is put into use too early. Incorrect readings could create serious ethical and personal problems. Speaking to The Washington Post, Canadian doctor Adam Hofmann expressed concern:
“A person’s mental health is a complex interaction of genetic, physical and environmental factors.”
"We know about the placebo and nocebo effects in treatment, when blinded users of sugar pills experience positive or negative effects of a drug because they have positive or negative expectations of it. Being told that you are unwell might actually make it so."
While this type of product is not the first of its kind, it is a further reminder of the growing need to create technology that can better serve those who struggle with mental and emotional disorders.
Woebot is another application released recently along the same lines. It offers help to those suffering from anxiety and depression, acting as a middle ground between talking with a real therapist and communicating with a basic chatbot.
Rather than responding simplistically, the Woebot messaging application uses the principles of cognitive behavioral therapy to reply more meaningfully to user comments like "I feel useless today". The app offers quick conversations to help improve your mood, and checks in with you throughout the day to see how you feel.
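The difference from a basic chatbot can be sketched in a few lines. The toy responder below matches a hypothetical keyword list and answers with a CBT-style reframing prompt; Woebot's actual logic is proprietary and far more sophisticated than this.

```python
# Toy CBT-style responder; keyword lists and replies are invented
# for illustration and are not Woebot's actual content.
RESPONSES = [
    (("useless", "worthless"),
     "That sounds like harsh self-judgment. What evidence do you have "
     "for and against that thought?"),
    (("anxious", "worried"),
     "Let's slow down. Can you name the specific thought that is "
     "making you anxious right now?"),
]

def reply(message: str) -> str:
    """Return a reframing prompt for the first matched keyword group."""
    text = message.lower()
    for keywords, response in RESPONSES:
        if any(k in text for k in keywords):
            return response
    return "Tell me more about how you're feeling."

print(reply("I feel useless today"))
```

A basic chatbot would stop at canned replies like these; a CBT-based system layers structured exercises and mood tracking on top of the conversation.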
Speaking more broadly about the use of technology in the mental health space, MIT researcher James Glass said: "We don't see technology making decisions instead of doctors. We see it as giving doctors additional input measures."
"They will still have access to all the current inputs they use. This just gives them other tools in their toolbox."