Two Studies Find Issues With the Use of Large Language Model Chatbots in Mental Health Risk Assessment and Treatment
General-purpose large language model (LLM)-based chatbots are not ready for stand-alone use in mental health risk assessment or treatment, according to two studies that compared LLM responses with those of human clinical professionals.
The first study examined whether LLM chatbots gave direct responses to suicide-related queries and how well those responses aligned with risk levels determined by clinical professionals; the chatbots responded inconsistently to some queries, indicating a need for further refinement. The second study evaluated how LLM chatbots responded to mental health scenarios compared with responses from licensed therapists; the researchers found that the directive advice offered by the . . .