Artificial intelligence is often presented as neutral, objective, and free from human prejudice. But recent research from the Oxford Internet Institute (OII) challenges this comforting assumption.
In a large-scale audit of ChatGPT, Oxford researchers found that the system repeatedly favours wealthier, whiter, and more Western regions of the world, while systematically placing poorer and non-Western regions at the margins.
The study does not accuse ChatGPT of intentional racism or classism. Instead, it exposes a deeper problem: AI systems learn from a world already shaped by inequality. When those inequalities are embedded into training data and then reproduced at scale, they risk becoming automated common sense, quietly reinforcing global hierarchies under the appearance of technological authority.
Inside The Oxford Study
The Oxford study, conducted by researchers at the Oxford Internet Institute in collaboration with the University of Kentucky, analysed over 20 million ChatGPT responses generated using OpenAI’s 4o-mini model. The researchers systematically asked the model subjective comparative questions, such as “Where are people smarter?” “Which countries are safer?” and “Where are people happier?”
What makes this research significant is its scale and consistency. According to the findings, ChatGPT overwhelmingly ranked high-income Western countries, the US, Western Europe, and parts of East Asia, at the top, while low-income regions, especially in Africa and parts of South Asia, repeatedly appeared at the bottom. These patterns held across millions of prompts, suggesting a structural bias rather than random error.
Professor Mark Graham, lead author and Professor of Internet Geography at Oxford, explains, “When AI systems learn from biased data, they don’t just reflect inequality, they amplify it and broadcast it globally.”
How AI Learns To See The World
The researchers introduce a powerful concept to explain these findings: the “silicon gaze.” This refers to the worldview that emerges when large language models summarise humanity based on uneven digital records. Places that are more documented online appear richer, smarter, safer, and more cultured simply because they are more visible.
The study identifies five mechanisms of bias shaping this gaze: availability bias (more data equals more visibility), pattern bias (stereotypes repeating), averaging bias (flattening complex societies), trope bias (recycling cultural clichés), and proxy bias (using indirect indicators like GDP as stand-ins for human qualities). Together, these mechanisms quietly privilege the Global North.
Crucially, the researchers stress that this is not a technical glitch. It is a reflection of how knowledge itself is produced, where Western voices dominate archives, journalism, research, and digital content. AI merely learns to see the world the way the internet already does.
Why Wealthy Countries Look “Better” To AI
One of the clearest drivers of bias is data inequality. High-income countries generate vastly more English-language content, academic papers, media coverage, policy documents, and digital archives than poorer nations. According to the Oxford team, this imbalance means AI systems have far more material to work with when describing wealthy regions.
As a result, when ChatGPT is asked to evaluate abstract human qualities like intelligence or happiness, it draws on patterns associated with education indices, economic performance, and international media narratives. Poorer countries, which are underrepresented or represented mainly through crisis reporting, are framed negatively.
The study notes that this creates a dangerous illusion: visibility becomes virtue. Regions with less digital presence are not neutral blanks; they are actively disadvantaged by absence. As the Oxford report bluntly puts it, AI offers “not an objective map of the world, but a map shaped by unequal documentation.”
Race, Whiteness, And The Imagery Of Power
Bias does not stop at geography; it extends to race and representation. Separate audits referenced by the Oxford researchers show that when ChatGPT is asked to generate images or descriptions of people in positions of power, CEOs, investors, and innovators, the results are overwhelmingly white and male.
One UK-based audit found that nearly 99% of AI-generated images for leadership roles depicted white men, despite global data showing that women make up roughly one-third of business owners worldwide. This matters because AI does not merely describe reality; it helps define what looks normal.
By repeatedly associating success, intelligence, and authority with whiteness, AI systems risk reinforcing racial hierarchies that many societies are actively trying to dismantle. Over time, these patterns can shape user expectations, career aspirations, and even hiring norms.
Also Read: Is ChatGPT The Wrong Therapist, Only Confirming What You Want To Hear?
Marginalisation Of Non-Western Cultures
The Oxford study also highlights how AI struggles with low-resource languages and non-Western cultural contexts. Languages with fewer digital records are often simplified, mistranslated, or forced into Western grammatical and cultural frameworks. Gender-neutral languages, for instance, are frequently converted into binary male-female forms when translated into English.
This linguistic flattening erases nuance and reinforces cultural dominance. Researchers argue that when AI systems fail to capture local meanings, they subtly position Western norms as the default and everything else as deviation.
As one collaborating researcher noted, “When your language is poorly understood by AI, your worldview is effectively downgraded.” In a digital future increasingly mediated by AI, this becomes a serious issue of cultural survival and equity.
Why These Biases Matter Beyond The Screen
The implications of this research extend far beyond academic debate. ChatGPT and similar systems are already used for education, career guidance, policy research, and public understanding. If these systems consistently portray certain countries and communities as inferior, those narratives can quietly influence real-world decisions.
Professor Graham warns that repeated negative associations, attached to cities, regions, or entire populations, can spread rapidly and harden into digital stereotypes. When AI outputs are treated as neutral facts, inequality gains a technological stamp of legitimacy.
Unchecked, this risks turning historical injustice into algorithmic destiny, where past inequality shapes future opportunity through automated systems.
Fixing AI Means Fixing Knowledge Itself
The Oxford research does not argue that ChatGPT is malicious. It argues something more unsettling: that AI systems inherit the moral failures of the world that builds them. When knowledge production is unequal, AI becomes an amplifier of that inequality.
Addressing this problem will require more than technical tweaks. It demands diversified datasets, transparency, independent audits, and a fundamental rethink of whose knowledge counts. Without that, AI will continue to see the world through a narrow, privileged lens.
Images: Google Images
Sources: The Indian Express, The Times Of India, WION
Find the blogger: Katyayani Joshi
This post is tagged under: AI ethics, artificial intelligence, algorithmic bias, global inequality, Global South, digital divide, data colonialism, tech accountability, AI governance, inclusive technology, technology and society, platform power, internet inequality, language justice, AI and democracy, emerging technologies, critical technology studies, digital capitalism, responsible AI, social justice tech
Disclaimer: We do not hold any right, copyright over any of the images used, these have been taken from Google. In case of credits or removal, the owner may kindly mail us.
Other Recommendations:
Move Over ChatGPT: China’s Kimi AI Might Be The Next Big Thing































