AI-Chatbot Therapists Responded Inappropriately To Users 20% Of The Time
Large language models (LLMs) deployed as "artificial intelligence" chatbot mental health therapists responded inappropriately to users 20% of the time, according to recent research. The researchers found that the LLMs expressed stigma toward people with mental health conditions and encouraged delusional thinking. The study's goal was to assess how well LLMs reproduce and adhere to the accepted structures of therapeutic relationships, as presented in the therapy guides used by major medical institutions. The researchers concluded that LLMs should not replace human therapists, both for safety reasons and because a therapeutic alliance requires human characteristics. . . .