Instead of turning to professional therapists and seeking human contact, many Swedes have now begun seeking support and guidance from AI tools.
At the same time, psychologists see serious risks in treating the new technology as a substitute for therapy.
The trend is growing rapidly. When influencer and mental coach Alexandra Bylund shared a private text conversation with her partner and asked ChatGPT for an assessment, the answer proved decisive.
“The answer? You could say it was clear. Raw. True… And it was the beginning of the end”, Bylund wrote afterward on Instagram about her divorce decision.
Her followers’ response was overwhelmingly positive. Bylund and her followers are also far from alone in viewing AI chatbots as conversation partners and guides that help them make important life decisions.
— People say it’s their best friend and best advisor, notes SVT (Swedish public television) reporter Alice Uhlin.
“Who is the sender?”
Psychologist Maria Farm sees the phenomenon as a logical consequence of how society has developed, even if discussing one’s emotional life with an algorithm may seem impersonal – but she also stresses clear dangers in the trend.
— Who is the sender? That’s the first thing I think. It’s not a person with intentions, and there can be ethical problems with that, she says.
She points out that the advice isn’t necessarily bad, but it is often impersonal, general and anonymous.
— Several of the pieces of advice are good, and I could absolutely give them myself, she concedes.
“Doesn’t replace psychologists”
The effects of “AI therapy” are largely unexplored, and it is highly unclear what impact the widespread use of chatbots actually has on users’ mental health.
Although AI tools often provide useful advice, Maria Farm emphasizes that the technology can never truly replace professional human help.
— It doesn’t replace psychotherapists and psychologists, she states firmly.
Several cases have already been reported in which extremely vulnerable users took their own lives after relying too heavily on AI bots’ advice, which, according to experts, underscores the need for caution.