Concern Grows as ChatGPT References Musk’s Grokipedia in Factual Answers
What has unsettled researchers is not merely Grokipedia’s AI-generated, right-leaning content, but how it is now appearing beyond the Musk ecosystem.
[Image source: Chetan Jha/MITSMR Middle East]
Recent tests conducted by The Guardian show that GPT-5.2, OpenAI’s latest language model and the engine behind ChatGPT, has begun citing Grokipedia, an AI-generated encyclopedia developed by Elon Musk’s company xAI, as a source of factual information.
Grokipedia was launched in October as an alternative to Wikipedia, following Musk’s claims that existing encyclopedic platforms exhibit liberal bias. Unlike Wikipedia, Grokipedia is written and edited entirely by AI systems, with no human editorial oversight. Since its debut, researchers and journalists have flagged the platform for advancing right-leaning interpretations on issues such as same-sex marriage, the Capitol attack, and transgender rights, as well as for reproducing debunked or ideologically charged claims.
What has unsettled researchers is not merely Grokipedia’s content, but how it is now appearing beyond the Musk ecosystem. In trials involving just over a dozen queries, The Guardian found that ChatGPT cited Grokipedia nine times. These citations did not occur in response to widely scrutinised topics such as the HIV/AIDS epidemic or the January 6 Capitol insurrection. Instead, they surfaced in answers to more technical or obscure questions, including those about Iran’s political institutions, the Basij paramilitary force, and the ownership structures of influential foundations. In one case, the model echoed stronger allegations about links between the Iranian state and telecom operator MTN-Irancell than those found in Wikipedia.
More troublingly, Grokipedia was cited in responses about Sir Richard Evans, a historian who testified against Holocaust denier David Irving; those responses repeated claims about Evans that have previously been discredited. Similar patterns have reportedly appeared in the outputs of Anthropic’s chatbot, Claude.
An OpenAI spokesperson said the system draws from a broad range of publicly available sources and applies safety filters to reduce harmful content. But disinformation experts argue that the greater risk lies in the transfer of credibility. When AI systems cite a source, users may assume it has been vetted. As generative models increasingly function as default knowledge intermediaries, even marginal or low-credibility sources can quietly gain legitimacy—making misinformation harder to trace, challenge, or remove.
In that sense, the Grokipedia episode is less about one encyclopedia and more about a structural vulnerability in how AI systems decide what counts as knowledge. As these systems increasingly become our default source of knowledge, the evidence from The Guardian shows how fragile that role can be.


