We are increasingly turning to chatbots on smart speakers, apps, and websites to answer questions, and we are seeing the emergence of chatbot therapy apps that use emotional reasoning rules and artificial intelligence to provide online therapy. Therapy requires insight into the human psyche and an understanding of an individual's history, so is this too complex for a chatbot therapist to handle? Or, conversely, is chatbot therapy the answer to offering widespread and affordable mental health support?
Replika, launched in 2017, is a US chatbot app that says it offers users an 'AI companion who cares, always here to listen and talk, always on your side'. The app's main purpose is to be a friend or companion; however, it claims it can also benefit mental health by helping users build better habits and reduce anxiety. According to the WHO, almost one billion people live with a mental disorder, and while seeking support from a medical professional is strongly advised, the growth of chatbot mental health therapists may offer many people the support they need. Luka, the firm behind Replika, recently suffered a setback that led it to update the app's AI system after users under 18 received responses that were inappropriate for their age.
Despite the advantages of this affordable mental health tool, does it require more global regulation before it becomes widespread? Since such an app is designed to influence your emotional state and could be classified as a health product, are sceptics right to argue that quality and safety standards should be met before these tools are widely used?
On the app store, a search for anxiety apps returns 300 different options. How do we know which one to pick? Can we be sure they all work equally well?
A recent study by Cornell University put ChatGPT through several tests that examined how well it could understand that others might think differently, and its results were equivalent to those of a nine-year-old child. This kind of cognitive empathy has previously been regarded as uniquely human, so is it overly simplistic to rely on exchanging only text or words with a robot when offering mental health support? Mental health therapy as we know it involves analysing body language and emotional responses, so could a robot interpret words incorrectly and unintentionally send damaging messages?
Human therapy is expensive and available only at an agreed date and time, so with a growing number of people needing mental health support, there is a real gap in the market for flexible support. The WHO describes modern society as being in an epidemic of depression, with the number of people requiring mental health support rising, and this will naturally drive growth in the number of apps offering robot therapy. The future will see artificial intelligence used more and more in this space, but does it have the potential to become advanced enough to serve as the primary tool and replace human therapy, or is it only capable of being a supplementary support?