
The future of therapy: Ethical concerns and what to look out for

Selling your Protected Health Information (PHI) for profit? AI bots attuning to your emotional needs? The global mental health landscape has exploded since the COVID-19 pandemic entered our lives, and it’s evolving rapidly…

On one hand, this has been amazingly positive for de-stigmatizing mental illness AND normalizing the universal need for emotional wellness, educating people across generations and cultures that it’s OK to ask for help (and how to do it), and making mental health information and services more widely available than ever before!

BUT…how do we regulate healthcare practices to be legally and ethically sound, when they roam the wild west of cyberspace? Can we? Let’s talk about it. 

Starting with some breaking news this month: BetterHelp, the tech giant that contracts with licensed mental health providers in the U.S. to provide counseling services to clientele around the world (which is interesting…because anyone licensed in the U.S. knows that there are legal limitations to practicing outside of your own state…but hey, maybe Norway, Canada, and the UK are just trying to be the cool parent?), got slapped with a $7.8 million fine by the Federal Trade Commission for “deceiving consumers” by selling their sensitive mental health info to the social media piranhas that monitor and curate our content exposure.

So just to be crystal clear here – BetterHelp wasn’t just sharing general demographics (which would still be unethical), but intimate details about its consumers’ mental health histories (e.g., histories of depression and suicidality), after repeatedly assuring consumers it would not disclose such data for profit purposes! Despite these assurances, the FTC press release states that “BetterHelp used and revealed consumers’ email addresses, IP addresses, and health questionnaire information to Facebook, Snapchat, Criteo, and Pinterest for advertising purposes.”

The implications of this betrayal are vast and complex. Those who enter treatment with complex trauma, abuse, or sexual assault histories are already struggling to cope with the abuses they have endured in life. Further lying, exploitation, and violation can not only harm people in their most vulnerable moments, but set them back in their healing journey to an immeasurable degree.

If you have been harmed by the actions of BetterHelp in any way, the FTC is accepting Public Comments on this matter until April 13, 2023 – please let your voice be heard!


So what about bots? That’s right, it’s no surprise the tech industry is moving fast on developing AI mental health chatbots (some are even available right now across the globe), reportedly meant to help meet the explosive demand for mental health services. The COVID-19 pandemic created a perfect storm of overlapping traumas for the entire world; the immense loss of human life, jobs, financial stability, connection to support networks, and overall sense of safety in our world has taken a swift and lasting toll. The most vulnerable (i.e., BIPOC, LGBTQ+, AAPI, low-income) communities have been hit the hardest, with political and social upheaval only adding further to their suffering. So yeah – PEOPLE NEED HELP!

So what’s the problem? After all, these online therapy platforms are more affordable than most traditional therapy settings. For example, BetterHelp starts at $60-90/week, while many AI therapy apps are free or charge a comparatively nominal fee. The current average fee for traditional therapy (in-person or telehealth) is $100-200/hour, sometimes more depending on what kind of treatment or specialty you’re seeking. Online platforms have a bunch of perks beyond affordability too, including flexible communication modes, little-to-no wait times, and on-demand availability. Some research even suggests people can feel more comfortable opening up to a “virtual” presence than to another human.

Well, the problem is that there are some major implications (and potentially harmful consequences) that the public may not be fully and responsibly informed about.

  • Exhibit 1 – The BetterHelp breach. Look, I get that we all, to some degree, understand that our Internet activity is being tracked and used to influence us. But when a company outright deceives users who expect (because they’re assured) that their personal data will be protected and private, it’s a violation. It causes harm and is deeply unethical. A core principle of mental healthcare is healing the harm done to people – not perpetrating it.
  • Therapists are overworked and underpaid. According to Indeed, therapists at BetterHelp make $29.48/hour (15% below the national average!). Do you want a therapist who is expected to be available to their clients on an “as needed” basis? Think about what that would mean – how many clients could they really balance without becoming burnt out or compassion fatigued? How would you feel if your therapist had a hard time paying attention, lost patience, came across as judgmental, or started talking about themselves more than they asked about you?! Well, those are all common consequences of being overworked and underpaid in this profession.
  • How about these bots? I mentioned earlier that recent research found some people prefer non-human helpers, but let’s be real: authentic human connection is not replaceable. When the tech company Koko decided (without Institutional Review Board approval) to investigate how bots “supervised by humans” could improve mental health services to its customers, some issues became obvious. In an interview with Gizmodo, Koko co-founder Rob Morris reported that the “formulaic” nature of the algorithm became obvious to users, and in a tweet Morris admitted, “once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.” So are you really getting therapy, or is this something else?
  • For clinicians and researchers in psychology, the implications for those who need specialized, culturally informed services are a real concern with mental health chatbots. Misunderstanding is a normal part of communication. However, if a bot misunderstands, it can send a user potentially harmful messages, especially in crisis situations. Moreover, users might mistakenly assume their bot is meant to replace a therapist, rather than aid ongoing professional therapeutic services. If the impression is that bots are therapists or are providing therapy, and the user has a bad experience, it may even deter some people from seeking professional services altogether.

So look, the world of psychology is evolving rapidly. Efforts to increase the affordability and accessibility of effective mental health services are an absolute need. But like most things in the 21st century – the speed at which we learn about potential advances is far faster than the speed at which said advances are actually ready to be safely (i.e., proven effective and valid) used by the public.

Thanks for reading – Stay curious, open, and discerning. 
