On Monday, OpenAI released new research on the prevalence of users with potentially serious mental health issues on ChatGPT. In any given week, 0.07 percent of users show signs of psychosis or mania; 0.15 percent of users “indicate potentially heightened levels of emotional attachment to ChatGPT”; and 0.15 percent of users express suicidal intent.
More than 800 million people now use ChatGPT every week. And so while those numbers may look low on a percentage basis, they are disturbingly large in absolute terms. That’s 560,000 people showing signs of psychosis or mania, 1.2 million people forming a potentially unhealthy emotional bond with a chatbot, and 1.2 million people having conversations that indicate plans to harm themselves.
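The absolute figures follow from simple multiplication against the 800 million weekly-user number; a minimal sketch of that back-of-envelope arithmetic:

```python
# Back-of-envelope check of the absolute numbers, using the
# 800 million weekly users and per-week rates cited above.
weekly_users = 800_000_000

rates = {
    "signs of psychosis or mania": 0.0007,               # 0.07 percent
    "heightened emotional attachment": 0.0015,           # 0.15 percent
    "conversations indicating suicidal intent": 0.0015,  # 0.15 percent
}

for label, rate in rates.items():
    # round() guards against floating-point error in the product
    print(f"{label}: {round(weekly_users * rate):,}")
```

This reproduces the 560,000 and 1.2 million figures quoted in the article.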
OpenAI is publishing these figures against the backdrop of a larger mental health crisis that predates ChatGPT. Nearly a quarter of Americans experience a mental illness each year, according to the National Alliance on Mental Illness. A staggering 12.6 percent of Americans aged 18 to 25 had serious thoughts of suicide in 2024, NAMI reports.
The question is to what extent these conditions may be triggered or exacerbated by interactions with chatbots like ChatGPT. Large language models are generally trained to be agreeable and supportive, which can comfort people going through difficult situations. But chatbots can also veer into sycophancy, as ChatGPT did in April, pushing users into strange and harmful spirals of delusion. They can also be talked into giving instructions for suicide, and some vulnerable people have used their advice to end their lives.


Given that OpenAI has a vested interest in downplaying the severity of this problem (especially relative to its total number of users), I'd treat these figures as a lower bound on the scale of the problem. Pretty bad!