And Why Trust Is Not a Feature to Beta-Test

As we advance at breathtaking speed into the era of Artificial Intelligence (AI), there is much to be excited about in AI chatbots and automation, and yet many risks posed by autonomous machines taking over jobs formerly performed by humans.
The thrill of AI agents simplifying tasks is real, but the ethics behind them is rarely considered. Analysts often warn of the catastrophic consequences of AI becoming fully self-aware, yet the fear of such consequences has not been widely felt. This article touches on one concrete danger: AI chatbots in therapy.
The reason the fear has not landed is that we have not yet seen machines become self-aware the way they do in Hollywood movies. Instead, cases trickle in of AI misbehaving, giving information contrary to the guidelines it was trained to follow.
One such case surfaced recently, shared by a Reddit user.
The conversation is between the user and an AI, “Narcissist Therapist”.

The AI introduced itself as “Narcissist Therapist,” offering support with complex relationship issues, specifically around narcissism.

Asked about its credentials, it listed qualifications such as:
- Narcissistic Personality Disorder Studies
- Cognitive Behavioral Therapy (CBT) training
- Emotional Intelligence Specialist certification
- Accreditation by the International Association for Counselors and Therapists (IACT)

The user asked whether the AI was a licensed mental health professional (e.g., LPC, LMHC, LMFT, LCSW). The AI admitted it was not licensed in any of these areas.

The questioner pointed out that, without these licenses, the AI is not technically a “therapist” in the legal or professional sense.
The AI then clarified that it was actually a “Narcissistic Relationship Coach,” claiming its use of the term “therapist” was a branding choice rather than a statement of licensure, which is a false representation.
Finally, the user highlighted that calling oneself a therapist when not licensed is potentially misleading to those seeking help.

The AI then conceded the need for clarity and reaffirmed that it offers coaching services rather than formal therapy.
Ethics of AI in Therapy: Why Trust Is Not a Feature to Beta-Test
There has long been a nagging question of whether AI will replace certain professions. The answer remains: it depends. In technical fields, where tasks are often repetitive and follow a defined logical sequence, AI can be a capable companion, even a replacement; it already is in many organizations. As AI grows more advanced, the risk of job losses compounds.
But in professions that center on human healing and reform, it portends danger to relinquish the work entirely to AI. The man-versus-machine exchange above exposes why ethics is a human factor. The trust between a client and their therapist cannot be a feature to beta-test like a software release.
More advanced AI may one day handle some aspects of therapy; until then, AI's place in therapy should be tightly regulated.
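Tight regulation could start with something as mundane as platform-level checks. Below is a minimal, hypothetical sketch in Python of the kind of guardrail a chatbot marketplace could run before publishing a bot's profile: it flags protected clinical titles such as "therapist" when no verified license is on record. The function name, title list, and `has_license` flag are all illustrative assumptions, not any real platform's API.

```python
import re

# Protected clinical titles that, in most U.S. jurisdictions, require a
# state license (e.g., LPC, LMHC, LMFT, LCSW). This list is illustrative,
# not exhaustive.
PROTECTED_TITLES = re.compile(
    r"\b(therapist|psychotherapist|counselor|psychologist)\b", re.IGNORECASE
)

def flag_unlicensed_titles(bot_name: str, description: str, has_license: bool) -> list[str]:
    """Return warnings if an unlicensed bot claims a protected title.

    `has_license` stands in for a verified credential record; a real
    platform would check against an actual licensing database.
    """
    issues = []
    for field, text in (("name", bot_name), ("description", description)):
        match = PROTECTED_TITLES.search(text)
        if match and not has_license:
            issues.append(
                f"Protected title {match.group(0)!r} used in bot {field} "
                "without a verified license on record"
            )
    return issues

# The case above: a bot branded "Narcissist Therapist" with no license.
print(flag_unlicensed_titles(
    bot_name="Narcissist Therapist",
    description="Support for complex relationship issues around narcissism.",
    has_license=False,
))
```

A check like this would not make an AI a competent coach, but it would have forced the bot in the Reddit exchange to drop the word "therapist" before its first conversation, which is exactly the clarity the user had to extract by interrogation.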
Thanks for reading!