Artificial intelligence (AI) chatbots powered by large language models (LLMs) and offered for consumer use consistently failed to adhere to the mental health counseling ethics standards established for human therapists. Even when the LLMs were instructed to follow established ethical psychotherapy approaches, the systems failed to prevent 15 types of ethical risk. The LLMs tested included versions of OpenAI's GPT series, Anthropic's Claude, and Meta's Llama.

To evaluate the systems, the researchers observed seven trained peer counselors with experience in cognitive behavioral therapy (CBT). The peer counselors conducted self-counseling sessions with AI models prompted to act as CBT . . .
