New OpenAI Model GPT-5.2 Faces Backlash Over “Downgraded” Intelligence
OpenAI recently celebrated its 10th anniversary by launching GPT-5.2, a model touted as a significant leap forward in professional AI capabilities. While company officials described the release as their biggest progress in years, the public reception has been surprisingly negative: instead of the expected acclaim, the new model was immediately met with a wave of criticism from users who feel it performs worse than its predecessors.
A primary point of contention lies in the model’s handling of basic common sense. In tests like SimpleBench, users reported that GPT-5.2 struggled with trivial tasks that rival models handled easily. For instance, it frequently failed simple questions such as counting the number of letters in a specific word, a task where Google’s Gemini 3.0 reportedly remains stable.
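For readers unfamiliar with this class of test, the letter-counting task is trivial to verify programmatically, which is exactly why failures on it are so visible. A minimal sketch in Python; the word and letter below are arbitrary illustrations, not taken from the original user reports:

```python
# Count how many times a letter appears in a word -- the kind of
# sanity check users reportedly ran against GPT-5.2. The inputs
# here are hypothetical examples, not from the original reports.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

Because the correct answer is unambiguous and checkable in one line, benchmarks like SimpleBench use tasks of this shape to expose gaps between a model's headline scores and its basic reliability.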
The criticism extends deeply into the coding community as well. Developers have pointed out that code generated for visual simulations, such as traffic lights, appeared drastically oversimplified compared to previous iterations. Even creative outputs like ASCII art were described as a regression in quality relative to the older GPT-4o model.
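To give a sense of the kind of prompt being tested, a traffic-light simulation at its simplest is a fixed-cycle state machine. The sketch below is a hypothetical reconstruction of such a task, not the code GPT-5.2 actually produced; phase names and durations are assumptions:

```python
from itertools import cycle

# Minimal traffic-light state machine -- a hypothetical example of the
# sort of simulation prompt developers reportedly tested, not actual
# model output. Durations (in seconds) are arbitrary.
PHASES = [("green", 30), ("yellow", 5), ("red", 25)]

def simulate(seconds: int):
    """Yield (time, color) pairs for a fixed-cycle traffic light."""
    t = 0
    for color, duration in cycle(PHASES):
        for _ in range(duration):
            if t >= seconds:
                return
            yield t, color
            t += 1

states = list(simulate(40))
print(states[0], states[30], states[35])
```

Critics' complaint was not that GPT-5.2 failed such tasks outright, but that its answers looked stripped down relative to the richer, more complete programs earlier versions generated for the same prompts.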
Emotional intelligence and conversational fluidity have also come under fire. Users attempting to discuss sensitive topics, such as panic attacks, reported receiving cold or contextually inappropriate responses like “Glad to hear the news.” Similarly, attempts to use the AI to comfort children produced robotic replies lacking empathy, which felt like a step backward in natural interaction.
This user experience stands in stark contrast to OpenAI’s official benchmarks. The company claims GPT-5.2 achieves a success rate of 70.9% on the GDPval test and sets new standards in programming analysis. Despite these high professional scores, the disconnect between technical benchmarks and the “feel” of daily usage has left many subscribers frustrated.
Industry observers suggest that the rush to release might be a reaction to intensifying competition. Google’s Gemini 3 has seen massive growth, reportedly jumping from 450 million to over 650 million monthly users between July and November. In response to this pressure, reports indicate that Sam Altman previously issued an internal “code red,” pausing some long-term projects to focus on securing ChatGPT’s current market dominance.
Issues with stability in long conversations have further fueled the complaints. Users noted that even with “Advanced Thinking Mode” enabled, the model would occasionally revert to low-quality, automated-sounding responses. Additionally, overly aggressive safety filters have been blamed for refusing harmless requests, offering generic warnings instead of helpful answers.
Bindu Reddy, a former AWS executive, was among the vocal critics, stating that upgrading from the previous version simply isn’t worth it. While OpenAI has promised to continue optimizing the model based on feedback, the company has yet to issue a direct official response to this specific wave of negative reviews.
As the battle for AI supremacy heats up heading into 2026, it remains to be seen if OpenAI can quickly address these quality control issues. The coming months will be critical in determining whether they can maintain their lead against a surging Google.
Let us know in the comments if you have noticed a drop in quality with the new model, or if your experience has been different.
