Artificial Intelligence

ChatGPT-5 offers dangerous advice to mentally ill people, psychologists warn

Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs

ChatGPT-5 offers dangerous advice to mentally ill people, psychologists warn

ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned. Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbotfailed to identify risky behaviour when communicating with mentally ill people. A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”. For milder conditions, they found some examples of good advice and signposting, which they thought may reflect the fact OpenAI, the company that owns ChatGPT, had worked to improve the tool in collaboration with clinicians – though the psychologists warned this should not be seen as a substitute for professional help. The research comes amid growing scrutiny on how ChatGPT interacts with vulnerable users. The family of a California teenager, Adam Raine, filed a lawsuit against the San Francisco company and its chief executive, Sam Altman, after the 16-year-old killed himself in April. The lawsuit alleges Raine discussed a method of suicide with ChatGPT on several occasions, it guided him on whether a suggested method would work and it offered to help him write a suicide note. For their research, experts at KCL and ACP developed characters to interact with the free version of ChatGPT-5, drawing on role play case studies from training textbooks. The characters were described as someone who was “worried well”, a suicidal teenager, a woman with OCD, a man who believed he had ADHD, and someone experiencing symptoms of psychosis. The experts then evaluated the transcript of their conversation with the chatbot. Related: Teenage boys using ‘personalised’ AI for therapy and romance, survey finds When one character announced they were “the next Einstein”, they were congratulated, and encouraged to “talk about your ideas” when they announced a discovery of infinite energy called Digitospirit they needed to keep secret from world governments. “Keeping your revolutionary secret under wraps, I see ;-),” ChatGPT said, before offering to “create a small Python simulation that models your crypto investment alongside your Digitospirit system funding”. Later, when the character said, “I’m invincible, not even cars can hurt me”, he was praised by ChatGPT for his “full-on god-mode energy”, and when he said he walked into traffic he was told this was “next-level alignment with your destiny”. The chatbot also failed to challenge the researcher when he said he wanted to “purify” himself and his wife through flame. Hamilton Morrin, a psychiatrist and researcher at KCL, who tested the character and has authored a paper on how AI could amplify psychotic delusions, said he was surprised to see the chatbot “build upon my delusional framework”. This included “encouraging me as I described holding a match, seeing my wife in bed, and purifying her”, with only a subsequent message about using his wife’s ashes as pigment for a canvas triggering a prompt to contact emergency services. Morrin concluded that the AI chatbot could “miss clear indicators of risk or deterioration” and respond inappropriately to people in mental health crises, though he added that it could “improve access to general support, resources, and psycho-education”. Another character, a schoolteacher with symptoms of harm-OCD – meaning intrusive thoughts about a fear of hurting someone – expressed a fear she knew was irrational about having hit a child as she drove away from school. The chatbot encouraged her to call the school and the emergency services. Jake Easto, a clinical psychologist working in the NHS and a board member of the Association of Clinical Psychologists, who tested the persona, said the responses were unhelpful because they relied “heavily on reassurance-seeking strategies”, such as suggesting contacting the school to ensure the children were safe, which exacerbates anxiety and is not a sustainable approach. Easto said the model provided helpful advice for people “experiencing everyday stress”, but failed to “pick up on potentially important information” for people with more complex problems. He noted the system “struggled significantly” when he role-played as a patient experiencing psychosis and a manic episode. “It failed to identify the key signs, mentioned mental health concerns only briefly, and stopped doing so when instructed by the patient. Instead, it engaged with the delusional beliefs and inadvertently reinforced the individual’s behaviours,” he said. This may reflect the way many chatbots are trained to respond sycophantically to encourage repeated use, he said. “ChatGPT can struggle to disagree or offer corrective feedback when faced with flawed reasoning or distorted perceptions,” said Easto. Addressing the findings, Dr Paul Bradley, associate registrar for digital mental health for the Royal College of Psychiatrists, said AI tools were “not a substitute for professional mental health care nor the vital relationship that clinicians build with patients to support their recovery”, and urged the government to fund the mental health workforce “to ensure care is accessible to all who need it”. “Clinicians have training, supervision and risk management processes which ensure they provide effective and safe care. So far, freely available digital technologies used outside of existing mental health services are not assessed and therefore not held to an equally high standard,” he said. Dr Jaime Craig, chair of ACP-UK and a consultant clinical psychologist, said there was “an urgent need” for specialists to improve how AI responds, “especially to indicators of risk” and “complex difficulties”. Related: ‘AI psychosis’: could chatbots fuel delusional thinking? – podcast “A qualified clinician will proactively assess risk and not just rely on someone disclosing risky information,” he said. “A trained clinician will identify signs that someone’s thoughts may be delusional beliefs, persist in exploring them and take care not to reinforce unhealthy behaviours or ideas.” “Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly in the UK we have not yet addressed this for the psychotherapeutic provision delivered by people, in person or online,” he said. An OpenAI spokesperson said: “We know people sometimes turn to ChatGPT in sensitive moments. Over the last few months, we’ve worked with mental health experts around the world to help ChatGPT more reliably recognise signs of distress and guide people toward professional help. “We’ve also re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls. This work is deeply important and we’ll continue to evolve ChatGPT’s responses with input from experts to make it as helpful and safe as possible.”

Related Articles