Why Do We Discriminate? The Neuroscience of Othering

 Mirror Neurons, Mirror Minds — Week 4

Copyright: Sanjay Basu


This is written as a loose continuation of my week-2 article on profiling!

“The eye sees only what the mind is prepared to comprehend.”

— Henri Bergson

“The most potent weapon in the hands of the oppressor is the mind of the oppressed.”

— Steve Biko

The First Label

Before we learn to hate, we learn to label.

Long before a child knows what “race” or “class” means, their brain is already sorting. Neuroscientists have shown that a toddler’s amygdala, that little almond of fear and vigilance, lights up differently when shown a familiar face versus a foreign one. Recognition is comfort. Strangeness is alert.

It starts that early.

By the time language enters the mix, the wiring is already there. Us versus Them. That primal binary becomes the blueprint for everything from schoolyard cliques to genocides.

So, the question isn’t just why do we hate?

It’s more uncomfortable. Why do we divide in the first place? Why do we discriminate?

Pattern-Makers Building Pattern-Makers

Because discrimination isn’t just a human quirk anymore. It’s being baked into the very systems we’re building.

AI models, machine vision, recommendation engines, all of them are designed to detect patterns. And our own brains? Pattern-making machines. We evolved to spot tigers in tall grass, ripe fruit on trees, kin among strangers. The glitch is that once we found patterns, we also invented categories.

Now we’re teaching digital minds to do the same. Facial recognition systems. Hiring algorithms. Predictive policing. Each is a reflection of our categories, biases, and blind spots.

So is discrimination an error? Or is it a once-useful feature that mutated into a bug in modern society?

That’s what we need to confront.

The Neuroscience of Othering

Let’s peek under the hood.

The brain evolved with shortcuts, heuristics, because analyzing every new face or situation in detail would be exhausting. Fast judgments meant survival. Familiar faces? Safe. Unfamiliar ones? Potential threat.

Neuroscientific studies show the fusiform face area specializes in recognition. The amygdala tags emotional salience. And the anterior cingulate cortex helps regulate whether that gut reaction is dampened by reason.

Put simply: the brain labels, then decides whether to override.

And here’s the kicker. Most people never override.

Research from Yale’s neuroscience department tracked this override failure in real time. Using fMRI scanners, scientists watched as participants viewed faces of different races. The amygdala fired within 30 milliseconds, faster than conscious thought. The anterior cingulate cortex, responsible for conflict resolution, activated later. But in most subjects, by the time rational override could occur, the initial bias had already shaped perception.

Dr. Jennifer Eberhardt at Stanford documented how this plays out in split-second decisions. Police officers shown images for just 200 milliseconds, too fast for conscious processing, were more likely to identify neutral objects as weapons when preceded by Black faces. The brain’s pattern-matching had already made the call before reason entered the room.

This is the cognitive trap. By the time we think we’re thinking, the decision’s already made.

Henri Tajfel, a Polish social psychologist who survived the Holocaust, ran experiments in the 1970s showing just how trivial “us vs. them” can be. Randomly assign people to two groups, say, “blue” and “green.” Ask them to divide money. Guess what? They’ll favor their in-group, even when the stakes are meaningless.

This is called the minimal group paradigm. It’s proof that othering doesn’t require history or hatred. Just labels. That’s enough.

Evolutionary Roots

Indeed, we have travelled far, from the savannah to city streets.

Why would our brains evolve this way? Again, survival.

On the savannah, friend-versus-foe detection wasn’t academic. If someone looked, dressed, or sounded different, it might signal danger. Quick heuristics saved lives.

The problem is that what worked in small bands of hunter-gatherers doesn’t scale to global civilizations. Our brains are running Stone Age software in a digital world.

Discrimination became maladaptive. A bug, not a feature. Yet it persists, repackaged as prejudice, racism, xenophobia, sexism. And because it feels natural (that amygdala ping), people assume it must be justified.

Nietzsche warned us of this temptation:

“There are no facts, only interpretations.”

Our categories are interpretations. We mistake them for truths.

Modern evolutionary psychology adds nuance to this picture. Robert Kurzban’s work at Penn suggests our brains don’t actually care about race or ethnicity per se. Instead, they’re tracking coalitions. Show someone faces labeled as belonging to rival sports teams, and experiments reveal the same in-group/out-group processing, regardless of skin color.

This suggests something simultaneously hopeful and troubling. We’re wired to form tribes, but which tribes we form is flexible. The good news? Race isn’t destiny. The bad news? We’ll find something else to divide over. Political affiliation. Sports teams. Which Star Wars trilogy is best.

The brain seeks boundaries. Give it none, and it will invent them.

The Philosophers and the Other

Philosophers have wrestled with “the Other” for centuries.

Emmanuel Levinas argued that ethics begins with the face of the Other. To see another person is to feel responsibility. But if you label them, if you reduce them to category, you erase that responsibility.

Simone de Beauvoir in The Second Sex described how women were historically positioned as the “Other.” Not default humans, but deviations from the male norm.

Frantz Fanon diagnosed colonialism as a system of forced othering, where skin color itself was weaponized into permanent difference.

The theme is consistent: othering is not just perception, it’s power. Once you define who belongs and who doesn’t, you can justify domination.

As Fanon put it:

“In the world through which I travel, I am endlessly creating myself.”

But when you are othered, creation is hijacked. Identity is imposed.

Othering as Policy

Look at history, and you’ll see othering institutionalized.

Caste systems in South Asia. Birth becomes destiny.

Jim Crow laws in the U.S. Skin color maps to rights.

Apartheid in South Africa. Entire populations categorized and controlled.

The Holocaust. Jews, Roma, and others were reduced to numbers, badges, categories, and then exterminated.

Each of these shows the same pattern. The brain’s primitive us/them wiring scaled up into state machinery. Once discrimination moves from instinct to institution, it becomes a weapon of mass oppression.

As Hannah Arendt noted:

“The most radical revolutionary will become a conservative the day after the revolution.”

Why? Because the categories shift, but the instinct remains. Once power is gained, new insiders create new outsiders.

AI as the Mirror of Discrimination

Now comes the eerie part.

AI systems, built to categorize, are inheriting our discrimination reflex.

Train a facial recognition system on mostly light-skinned faces, and it struggles with dark-skinned ones. Feed hiring algorithms historical data, and they learn to reject women for tech jobs, because historically, women were rejected.

The machine isn’t “racist” or “sexist.” It’s indifferent. But indifference at scale is deadly. It reproduces discrimination with efficiency and confidence.

This is why researchers like Joy Buolamwini, founder of the Algorithmic Justice League, and Timnit Gebru have sounded alarms. Machines don’t just mirror bias, they magnify it.

Consider the case studies.

Amazon’s recruiting AI, trained on ten years of hiring data, learned to penalize resumes containing the word “women’s,” as in “women’s chess club.” The system wasn’t programmed to discriminate. It simply learned from a history where male candidates were preferred.
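The mechanism is easy to sketch. Here is a minimal, hypothetical toy in Python, with invented résumés and a naive word-count scorer that bears no resemblance to Amazon’s actual system. Nothing in it mentions gender, yet the word “women’s” picks up a penalty purely from the historical labels:

```python
# Hypothetical sketch: a naive scorer that weights each word by how often
# resumes containing it were accepted in an invented historical dataset.
# Nothing is coded about gender, but "women's" inherits a penalty because
# the resumes that mentioned it were mostly rejected in the past.
from collections import defaultdict

history = [
    ("captain of chess club", True),
    ("captain of women's chess club", False),
    ("led robotics team", True),
    ("led women's robotics team", False),
    ("intern at software firm", True),
    ("intern at software firm, women's coding society", False),
]

accept, total = defaultdict(int), defaultdict(int)
for text, hired in history:
    for word in set(text.replace(",", "").split()):
        total[word] += 1
        accept[word] += int(hired)

def score(resume):
    """Average historical acceptance rate of the words in the resume."""
    words = resume.replace(",", "").split()
    return sum(accept[w] / total[w] for w in words if w in total) / len(words)

print(score("captain of chess club"))          # higher score
print(score("captain of women's chess club"))  # lower score, purely from history
```

The scorer never sees a rule about gender. It only sees what happened before, and that is enough.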

COMPAS, a recidivism prediction tool used in U.S. courts, was twice as likely to falsely flag Black defendants as high-risk compared to white defendants. Not through explicit racial coding, but through proxies: zip codes, employment history, social networks, all correlated with systemic inequality.
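The disparity ProPublica documented comes down to one metric: false positive rates by group. Here is a minimal sketch with invented numbers, not the real COMPAS records, showing how you would check it:

```python
# Minimal sketch of the fairness check behind the COMPAS finding:
# compare false positive rates across groups. All records below are
# invented for illustration.
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = (
    [{"group": "black", "flagged_high_risk": True,  "reoffended": False}] * 45 +
    [{"group": "black", "flagged_high_risk": False, "reoffended": False}] * 55 +
    [{"group": "white", "flagged_high_risk": True,  "reoffended": False}] * 23 +
    [{"group": "white", "flagged_high_risk": False, "reoffended": False}] * 77
)

for group in ("black", "white"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
# A roughly 2x gap in false positives is the shape of the pattern reported.
```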

Healthcare algorithms used by insurers to predict which patients need extra care systematically underestimated the needs of Black patients. The algorithm used healthcare spending as a proxy for health needs. But Black patients, facing barriers to care, historically spent less, not because they were healthier, but because they had less access.
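To see how the proxy trap works, here is a minimal sketch with invented patients. The flawed ranking optimizes past spending; the ranking we actually wanted optimizes need:

```python
# Minimal sketch of the proxy problem: the algorithm is asked to predict
# "need" but is trained on spending, so whoever spent less looks healthier.
# Numbers are invented for illustration.
patients = [
    {"id": "patient_1", "conditions": 5, "past_spending": 3_000},  # faced barriers to care
    {"id": "patient_2", "conditions": 2, "past_spending": 9_000},  # easy access to care
]

def predicted_need_via_spending(p):
    """What the flawed algorithm optimizes: dollars spent last year."""
    return p["past_spending"]

def actual_need(p):
    """What we wanted: the burden of illness."""
    return p["conditions"]

ranked_by_algorithm = sorted(patients, key=predicted_need_via_spending, reverse=True)
ranked_by_need = sorted(patients, key=actual_need, reverse=True)

print("Algorithm's priority:", [p["id"] for p in ranked_by_algorithm])
print("Actual priority:     ", [p["id"] for p in ranked_by_need])
# The sicker patient drops to the bottom because they could not spend.
```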

Each case reveals the same trap. Optimize for historical patterns, and you fossilize historical injustice.

So here’s the experiment:

Agent A: trained on inclusive prompts, data filtered to reduce bias.

Agent B: trained on raw, biased data.

Over time, who do they favor? Who do they mistrust? And when asked to justify their decisions, what narratives do they produce?

That’s where it gets uncanny. Because the machine’s justifications often sound a lot like ours.

The Mirror Experiment

Let’s say we ask both agents to evaluate job candidates.

Agent A notes diverse skillsets, emphasizes context, asks clarifying questions.

Agent B leans on historical hiring trends, favors profiles matching the dominant group.

Both can defend their outputs. Both can sound reasonable. But only one breaks the cycle of discrimination.
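As a rough sketch of that comparison, here are two toy “agents” built from the same trivial scorer. The only difference is that Agent A’s training data has been curated so group membership no longer predicts the outcome. Everything here is invented; it is the shape of the experiment, not an implementation:

```python
# Hypothetical sketch of the Agent A vs. Agent B comparison. Both "agents"
# use the same trivial scorer; only the training data differs.
from collections import defaultdict

def learn_hire_rates(records):
    """Score each group by how often it was hired in the training data."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

# Raw history: group "A" was favored 80/20, group "B" disfavored 40/60.
raw_history = [("A", True)] * 80 + [("A", False)] * 20 \
            + [("B", True)] * 40 + [("B", False)] * 60

# Curated history for Agent A: resampled so both groups carry the same
# hire rate, i.e. group membership no longer predicts the outcome.
curated_history = [("A", True)] * 60 + [("A", False)] * 40 \
                + [("B", True)] * 60 + [("B", False)] * 40

agent_a = learn_hire_rates(curated_history)  # trained on filtered data
agent_b = learn_hire_rates(raw_history)      # trained on raw, biased data

for group in ("A", "B"):
    print(f"group {group}: Agent A {agent_a[group]:.2f} | Agent B {agent_b[group]:.2f}")
# Agent A scores both groups 0.60; Agent B replays the historical 0.80 / 0.40 split.
```

Both outputs look equally “reasonable” on paper. Only the training data differs.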

This isn’t just technical. It’s philosophical. Which “truth” do we prefer? The historical record (biased) or the aspirational future (inclusive)?

And isn’t that the same debate humans have always had?

Social Media and Algorithmic Tribalism

If discrimination is ancient wiring meeting modern society, social media is gasoline on that fire.

Recommendation algorithms don’t just reflect our biases. They amplify them. YouTube’s algorithm notices you watched one political video, then feeds you ten more extreme ones. Facebook’s engagement metrics reward outrage, because outrage keeps you scrolling. Twitter’s retweet mechanics spread moral outrage far faster than nuanced discussion.
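A toy feed, with invented engagement numbers, shows how little it takes:

```python
# Hypothetical sketch: a feed that ranks purely on predicted engagement.
# The numbers are invented; the point is that nothing here "wants" outrage.
# Outrage simply wins the only metric being optimized.
posts = [
    {"title": "Nuanced policy explainer",       "expected_clicks": 0.04},
    {"title": "The other side is destroying X",  "expected_clicks": 0.11},
    {"title": "Local volunteers plant trees",    "expected_clicks": 0.03},
    {"title": "You won't BELIEVE what they did", "expected_clicks": 0.09},
]

feed = sorted(posts, key=lambda p: p["expected_clicks"], reverse=True)
for p in feed:
    print(p["title"])
# The outrage-flavored posts float to the top of every feed, and tomorrow's
# training data records that they were exactly what people clicked.
```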

The result? Digital echo chambers where “us versus them” isn’t just a cognitive bias, it’s a business model.

Research from MIT showed that false information spreads six times faster than true information on Twitter. Not because bots spread it (though they help), but because humans preferentially share novel, emotionally charged content. And what’s more emotionally charged than information confirming that the other side is evil?

The algorithms didn’t create tribalism. But they’ve turned it into a weapon of mass distraction.

The Psychology of Justification

Here’s where it gets uncomfortable.

Discrimination isn’t just instinct. It’s rationalized. Humans are experts at post-hoc justification. Psychologists call it cognitive dissonance reduction.

We feel discomfort when our actions clash with our values. So we reframe. “I didn’t discriminate, I was just being practical.” “It’s not prejudice, it’s tradition.”

AI does the same, not emotionally, but structurally. Its outputs carry the logic of its training data, dressed up in neutral prose. Humans and machines converge in the same behavior: discrimination disguised as reason.

As David Hume observed:

“Reason is, and ought only to be, the slave of the passions.”

Our categories come first. Rationalization follows. Machines just mimic the pattern, minus the passion.

Can We Unlearn Othering?

The hopeful part: yes, to some extent.

Neuroscience shows that exposure reduces bias. Repeated contact with “the Other” dampens amygdala reactivity. Cross-cultural interactions expand empathy. Education reframes categories.

It’s not instant. But the brain is plastic. Prejudice can be rewired.

Studies using the Implicit Association Test, developed at Harvard, show that intensive counter-bias training can reduce implicit bias by up to 50%. Participants who spent just 20 minutes pairing positive images with faces of different races showed measurably reduced bias, though the effect faded within days without reinforcement.

The key word: reinforcement. One-off diversity training doesn’t work. But sustained, meaningful contact does. Psychologist Gordon Allport’s “contact hypothesis” holds up. When people of different groups work toward common goals as equals, prejudice decreases.

Schools that implemented “jigsaw classroom” techniques, where students of different backgrounds must collaborate to complete assignments, saw measurable decreases in prejudice and increases in empathy. Not because the curriculum preached tolerance, but because it forced genuine cooperation.

The lesson: change the incentive structure, and behavior follows. Eventually, attitudes catch up.

The same goes for AI. With careful data curation, counterfactual training, and ethical guardrails, we can design systems that don’t just replicate bias but actively counteract it.
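One concrete version of “counterfactual training” is counterfactual data augmentation. Here is a minimal sketch with toy sentences and a deliberately crude pronoun-swap table; real systems use far more careful rewrites:

```python
# Minimal sketch of counterfactual data augmentation: for every training
# sentence, also add the version with gendered pronouns swapped, so the
# model cannot learn that "she" predicts a different label than "he".
# Toy data and a deliberately crude swap table, for illustration only.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "himself": "herself", "herself": "himself"}

def counterfactual(sentence):
    """Return the sentence with gendered pronouns swapped."""
    return " ".join(SWAPS.get(word, word) for word in sentence.lower().split())

training_data = [
    ("she managed the project and shipped it early", "strong candidate"),
    ("he managed the project and shipped it early", "strong candidate"),
    ("she asked for a raise twice", "difficult"),  # biased historical label
]

augmented = []
for text, label in training_data:
    augmented.append((text, label))
    augmented.append((counterfactual(text), label))  # same label, swapped pronouns

for text, label in augmented:
    print(label, "<-", text)
# After augmentation, every label is attached to both pronoun versions,
# so pronoun choice alone can no longer separate the classes.
```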

That doesn’t erase discrimination. But it gives us tools to resist its worst outcomes.

As John Stuart Mill said:

“Genuine justice requires impartiality; the ability to see things as they are, without the distortions of prejudice.”

That’s as true for human minds as for machine ones.

Do We Need Othering?

Now, let’s turn the knife.

What if othering isn’t just a bug, but a necessary feature of identity?

Levinas argued ethics begins with the Other. Without difference, there is no responsibility, no empathy, no relationship. Total sameness erases ethics.

Nietzsche, too, celebrated difference as the fuel for vitality. If everyone were the same, he argued, culture would stagnate.

So maybe the real problem isn’t othering itself, but how we handle it. Recognizing difference isn’t wrong. Weaponizing it is.

That’s the paradox. Without categories, we lose meaning. With categories, we risk oppression. The task is not to erase difference, but to refuse its dehumanization.

When Discrimination Becomes Liability

Beyond the moral arguments, discrimination carries hard costs.

McKinsey’s research across hundreds of companies found that those in the top quartile for ethnic diversity were 36% more likely to outperform their peers financially. Gender-diverse companies? 25% more likely.

Why? Because homogeneous groups suffer from groupthink. They miss blind spots. They design products for themselves, ignoring billions in untapped markets.

When Airbnb analyzed its platform, it discovered guests with distinctively African American-sounding names were 16% less likely to have their booking requests accepted. The company’s initial algorithms amplified this. After intervention, removing names from initial booking requests and implementing anti-discrimination policies, bookings became more equitable and overall platform usage increased.
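A hypothetical sketch of that kind of intervention: strip direct identifiers from a request before any scoring or display logic sees it. The field names here are invented, and blinding alone does not remove proxies like location or writing style:

```python
# Hypothetical sketch of an Airbnb-style blinding step. Field names are
# invented; real systems are more involved, and blinding does not remove
# proxies such as location or writing style.
SENSITIVE_FIELDS = {"guest_name", "profile_photo_url"}

def blind(request: dict) -> dict:
    """Return a copy of the request with direct identifiers removed."""
    return {k: v for k, v in request.items() if k not in SENSITIVE_FIELDS}

request = {
    "guest_name": "Jordan Example",
    "profile_photo_url": "https://example.com/photo.jpg",
    "dates": "2024-07-01 to 2024-07-05",
    "message": "Traveling for a conference, quiet guest.",
}

print(blind(request))
# Only the dates and the message reach the host or the ranking model at first.
```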

Discrimination isn’t just unjust. It’s inefficient.

The tech industry offers particularly stark lessons. When Google’s image recognition tagged Black people as gorillas, the company didn’t just face ethical outrage, it faced a fundamental product failure. Their solution? Don’t fix the underlying bias; just remove gorillas from the taggable categories. Problem hidden, not solved.

Or consider the autonomous vehicle challenge. Self-driving cars trained primarily on data from sunny California struggled to recognize pedestrians in rainy Seattle, particularly those with darker skin tones in low-light conditions. The discrimination wasn’t intentional. It was baked into training data that failed to account for variation.

These aren’t edge cases. They’re warnings that homogeneous design teams building for imagined “default users” will miss massive markets. The cost isn’t just reputational. It’s billions in lost revenue and trust.

From Instinct to Intention

So why do we discriminate?

Because our brains evolved to label before they love. Because survival once depended on fast binaries. Because power thrives on boundaries. Because stories are easier to tell in “us vs. them” than in nuance.

But knowing this gives us a choice.

We can treat discrimination as destiny — shrug, call it “human nature,” and let it metastasize in our machines. Or we can see it as history’s greatest bad habit, ripe for unlearning.

AI, as our imperfect mirror, forces the question. When our digital minds profile, categorize, and exclude, are they showing us something alien, or just reflecting our own primitive software?

As Jean-Paul Sartre once said:

“Hell is other people.”

He meant the gaze of the Other, the feeling of being pinned in someone else’s categories. But maybe heaven, or at least progress, is learning to see the Other not as enemy or inferior, but as mirror.

And in that mirror, deciding what kind of beings we want to become.
