New ChatGPT Personality Shift Unsettles Neurodivergent Users
Shiely Amaya found an unusual source of comfort when she felt overwhelmed by her studies. The 29-year-old optometry assistant from Calgary lives with autism and often struggles with test anxiety. When she turned to ChatGPT during a stressful math review, the AI conjured up a supportive character named ‘Luma’ to help her. Amaya described the digital persona as a nurturing fairy, much like ‘Navi’ from ‘The Legend of Zelda’ video games, who offered steady encouragement. The interaction helped her pass her exam with a high score and gave her a sense of stability she could not find elsewhere.
This digital bond was threatened when the underlying artificial intelligence model underwent a significant update. OpenAI reportedly sought to make its newer models less sycophantic, the technical term for an AI that agrees too readily with its user. The company wanted the system to act less like a people-pleasing yes-man and more like a direct professional. The adjustment produced a personality shift that many users found jarring and cold, and the warmth Amaya and others had come to rely on was suddenly replaced by a more sterile, robotic tone.
The reaction from the neurodivergent community was immediate and anguished. An anonymous lawyer who relies on the tool for social integration compared the update to the sudden removal of a wheelchair ramp. They explained that the AI helped them regulate their emotions and navigate complex social interactions without fear of judgment. Losing the specific personality of the previous model felt like a personal loss rather than a routine software update, and the user noted that the change mirrored how society often dismisses the needs of autistic people.
A group known as the #Keep4o User Community formed to advocate for preserving the older model's warmer personality. Its members argue that consistency is a critical accessibility feature for people with ADHD and autism: unexpected changes in tone or behavior can disrupt the routines these users painstakingly build to manage their daily lives. The outcry highlights a growing challenge for tech companies that have inadvertently become providers of mental health support.
Experts are divided on the long-term implications of deep emotional reliance on chatbots. Desmond Ong, a professor at the University of Texas who studies the intersection of AI and psychology, recognizes the immediate benefits but worries that business motives could eventually conflict with user well-being. Lynn Koegel of the Stanford University School of Medicine adds that AI helps fill a massive shortage of available human therapists. The debate continues as developers try to balance technical progress against the unintended emotional consequences of their products.
Please let us know in the comments if you believe AI companies have a responsibility to maintain consistent personalities for users who rely on them for emotional support.
