AI Unreliability Top Concern Among Users, Reveals Anthropic Study
Based on interviews conducted through December 2025 with more than 80,000 users across 159 countries, the findings capture a nuanced snapshot of how people perceive AI: the good, the underwhelming, and its potential.
Image Credit: Diksha Mishra/MIT Sloan Management Review Middle East
A study by AI startup Anthropic found that the biggest concern among Claude.ai users is not the threat of job displacement but rather the technology’s tendency to make errors.
The findings were drawn from interviews conducted through December 2025 using a specialized AI model, Claude Interviewer, which spoke with more than 80,000 platform users across 159 countries, providing a detailed snapshot of what AI meant to them: the good, the underwhelming, and the possibilities.
It is considered the largest and most multilingual qualitative study ever conducted.
Given the widespread use of AI at work, it is unsurprising that the largest group (18.8%) named Professional Excellence as their top goal. This was followed by Personal Transformation (13.7%), Life Management (13.5%), and Time Freedom (11.1%).
“AI modelled emotional intelligence for me… I could use those behaviours with humans and become a better person,” a respondent from Hungary noted.
In other categories focused on productivity across various aspects of life, the underlying motivation was not doing better work, but improving quality of life outside of it.
Notably, under Personal Transformation, 5% of respondents described a romantic connection with AI.
About 81% said that AI had helped them take a step toward their stated vision, with the "productivity" bucket (32%) accounting for the largest share of that progress, largely benefiting IT professionals. This was followed by Cognitive Partnership (17.2%).
However, 18.9% found that AI fell short of their expectations. “I told [AI] it was being too agreeable. It agreed with me,” a user from Italy shared. Several users highlighted the lack of cognitive brainstorming. “The chatbots are too nice — they just answer what I ask. That doesn’t elevate my thinking,” another user shared.
“For me, the knowledge gain is a big-fat zero,” said a student from the USA.
Around 27% of respondents said they were most anxious about mistakes made by AI.
“I had to take photos to convince the AI it was wrong — it felt like talking to a person who wouldn’t admit their mistake,” a user from Brazil noted. Concerns included hallucinations, inaccuracies, fake citations, and a verification burden that defeated the purpose.
Notably, for many respondents, it emerged as a co-concern (with jobs, the economy, autonomy, and agency) rather than their primary worry.
Other concerns listed include cognitive atrophy (16.3%), governance (14.7%), misinformation (13.6%), surveillance and privacy (13.1%), malicious use (13.0%), meaning and creativity (11.7%), overrestriction (11.7%), wellbeing and dependency (11.2%), and sycophancy (10.8%).
Concerns over jobs and the economy emerged as the top predictor of general attitudes toward AI, outranking all other concerns. Globally, 67% of respondents held a net positive view of AI, with people in South America, Africa, and much of Asia showing greater optimism than those in Europe or the U.S.
An upcoming study by Anthropic will explore how Claude AI impacts users’ well-being over time—examining how the platform is improving lives today and where it can do so more effectively.