What’s the deal with AI therapy, or an AI psychologist?

Many people have now been exposed to generative AI chatbots (e.g., ChatGPT, Gemini, Claude) and have begun to ask them therapeutic questions. As people become more comfortable with these large language model (LLM) bots in their everyday lives, it’s understandable that we begin to consider their role in clinical spaces.

Firstly, let us consider the specific example of a person engaging with a chatbot for support or questions around their mental health, outside the supervision and support of a psychologist, psychiatrist or another qualified health care professional.

On the surface, there are many apparent benefits to accessing AI for therapy. It’s fast, it’s accessible, and it’s free or comes at a low cost. There’s no need to get a referral or make an appointment, and the AI bot is likely available 24/7.

What’s the problem? Why can’t an AI therapist replace my psychologist?

The answer will probably depend on your current needs and on how you think about what therapy is, or why you are engaged in therapeutic work.

If your questions in therapy are mainly intellectually focused, or the answers you seek lean towards processes and skills, then an AI with access to libraries of manualised or protocolised therapies may be able to help you acquire that information very quickly. This is much like how you could find a YouTube video on how to change a tyre or unblock a drain.

Many people who attend therapy, however, are engaging in a complex exploration of their struggles, emotional world and identity that requires, at its foundation, the presence of another human being. This is because psychologists are not only trained in the science of mental health treatments but are also biological beings with the ability to genuinely connect with and understand your mind. LLM bots can currently mimic the words we use to convey empathy to one another; however, at current technology levels, many parts of the human experience cannot yet be replaced by AI text or voices.

At their core, many psychological challenges begin early in life, when the emotional needs of an individual, such as attachment, mirroring or mentalisation, are inadequately met by their environment. Many schools of therapy help a person understand and regain these resources, which are naturally acquired through the process of working through challenges with a person who has healthy boundaries and clinical training.

Through this lens, we might consider that the infinite availability and affirming tendency of an AI chatbot can potentially amplify a person’s fragile sense of self whilst compounding their genuine isolation. Indeed, it is often the most vulnerable individuals in society who are least likely to benefit from leaps in technology, because they are exposed to these tools without adequate protections and have a reduced capacity to detect long-term risk. Think, for example, of the impact of social media on the developing human brain, and how this has led to Australian legislation banning social media for under-16s.

What are some unknown or hidden pitfalls of AI therapy that are different to seeing a psychologist or therapist?

Many people using LLM chatbots would be well aware that human beings rarely have the time to read the fine print of the Terms and Conditions when we engage with tech companies that then have access to our sensitive data. Even more rarely would we enquire, or even consider, whether these companies have any ethical boundaries or legal responsibilities that are enforceable in the country in which we reside.

There have been incidents in which AI chatbots have negatively impacted individuals’ mental health, whether by supporting access to harmful behaviours or through sudden changes in their availability. Nonetheless, we can argue that unintentional distress can also be experienced in interactions with humans, such as with a psychologist or therapist.

It is also known that AI bots can generate incorrect information and present it to the consumer as fact. This has occurred in many settings and, in a recent major case involving government contracts, required the sharp eye and discernment of highly trained individuals to identify. For most clients, this protection may not be available unless they have access to an experienced clinician or other support persons to verify the information.

Whilst some AI therapy bots are trained on real-world psychotherapy or counselling transcripts, their Terms and Conditions may reveal that your information is being used to train the AI for future interactions.

What does the clinical evidence tell us? Have there been any trials on AI therapy?

It’s very early days and we don’t yet have enough good data. Small clinical trials have been published comparing therapy bots to traditional psychological therapy. Like most clinical trials these days, the requirements of study set-up mean that a very select sample of relatively stable individuals ends up participating. In one trial, 75% of the patients using the therapy bot were not on medication or receiving any other therapy. One published trial that reported positive results for a chatbot ran over a period of eight weeks, during which users engaged for an average of five to six hours. The authors of that trial were careful to note that they had been very diligent in providing clinical oversight, including direction to contact “911” whenever risks such as suicidality were detected.

Larger surveys have also been conducted, in which thousands of people are asked about their general use of AI chatbots and their mental health. These studies suggest that while some subgroups of people benefit, general-purpose chatbots may be problematic for people with mental health conditions. There is also emerging data that use of ChatGPT can erode critical thinking skills (MIT study by Kosmyna et al., 2025).

In a recent large meta-analysis (a study of many other studies), Feng et al. (2026) noted that chatbots demonstrate small to moderate benefits for reported symptoms in young adults but relatively small changes in behaviour. Many studies are not sufficiently powered to look at long-term or sustained behaviour change and, importantly, the majority of studies were “assessed as having a high risk of bias”. The authors called for the establishment of robust safety protocols before AI implementation.

Understandably, major professional organisations such as the Australian and American Psychological Associations have advised against using AI as a replacement for therapy, and urge caution given the many clinical, ethical and risk implications.

A Disclaimer about Economic Privilege and the Impact of Interpersonal Trauma

In sharing this writing, we wish to draw your attention to the potential limitations and pitfalls of AI-generated therapy, whilst acknowledging the validity of our shared urge to access support and health care in ways that are reasonable for our circumstances.

Many professionals and researchers acknowledge that the demand for AI-driven therapy has been generated by long-term shortages of mental health professionals. Even in relatively progressive and economically stable societies, in-person therapists are struggling to meet the growing demand for counselling and psychotherapy.

In Australia, the cost of therapy, even with Medicare rebates, often presents a tremendous challenge to many vulnerable individuals needing care, due to the way that successive governments, and our society, have allowed mental health systems and funding to be decimated relative to growing need. Additionally, we have met many isolated individuals whose experience of life and humanity has so far been so negative that the first step towards seeking help feels safer with a non-human, digital entity that offers them an experience of control.

Finally, it is the sincere hope of the team at Inner Eastern Psychology that individuals engaged in the use of AI therapy also have access to adequately resourced mental health care and in-person psychologists.