Profiling: An Evolutionary Shortcut or a Moral Dead End?
Image copyright: Sanjay Basu
Mirror Neurons, Mirror Minds — Week 2
“The measure of a man is what he does with power.” Plato
“Prejudice is a burden that confuses the past, threatens the future, and renders the present inaccessible.” Maya Angelou
A Thought You Don’t Want to Admit
We all profile.
That’s not an accusation. It’s biology. It’s survival. Your brain does it before you even know you’ve had a thought. Someone steps into the elevator and you scan height, clothing, body language, tone of voice, all in a fraction of a second. It’s not malice. It’s pattern recognition. A vestige of the days when mistaking a rustling bush for wind instead of a predator meant you didn’t live to pass on your DNA.
The speed at which this happens is staggering. Research from Princeton psychologists Janine Willis and Alexander Todorov found that we form judgments about trustworthiness in as little as 100 milliseconds, faster than a blink. These split-second assessments influence everything from who we sit next to on public transport to who we hire for a job. The neural machinery behind this is ancient, predating language, civilization, even agriculture.
But here’s the paradox: the very evolutionary shortcut that helped us survive on the savannah is the same mechanism that fuels prejudice, discrimination, and injustice today. Profiling, in its instinctual form, is natural. But natural doesn’t mean good. Malaria is natural too.
So the question becomes: Is profiling an evolutionary advantage we’ve outgrown, or is it a moral dead end that technology is making harder to escape?
Why This Question Matters Now
Because profiling is no longer just a human reflex. It’s becoming encoded in our machines.
From predictive policing to hiring algorithms, AI systems are trained on past data. Past data carries past prejudices. The machine doesn’t care if those patterns are morally acceptable. It just replicates them, efficiently, silently, and at scale.
Consider the scope: by 2024, over 75% of resumes pass through Applicant Tracking Systems before a human ever sees them. These systems profile based on keywords, education patterns, and employment gaps: factors that correlate with socioeconomic background more than with actual competence. Meanwhile, predictive policing algorithms like PredPol direct police patrols based on historical crime data, creating feedback loops where over-policed neighborhoods generate more arrests, justifying more policing, ad infinitum.
The stakes are existential. When China’s social credit system profiles citizens based on jaywalking or online comments, when insurance companies use AI to profile health risks based on shopping habits, when banks profile loan applicants using ZIP codes as proxies for race, we’re not talking about individual bias anymore. We’re talking about systematic, algorithmic discrimination operating at population scale.
So we’re standing at a crossroads. Do we cling to profiling as a heuristic, a fast-and-frugal decision tool? Or do we finally acknowledge that while it kept us alive in caves, it may destroy us in boardrooms and courtrooms?
This isn’t just a moral question. It’s a survival question too, though the survival at stake is that of civilization, not of individuals.
The Neuroscience of Othering
Let’s start with the brain.
Neuroscientists have shown that babies as young as six months exhibit preference for faces that look like their caregivers. The amygdala, that almond-shaped seat of fight-or-flight, lights up differently when shown familiar vs. foreign faces. It doesn’t mean the child is racist. It means the brain is categorizing.
The mechanism goes deeper. Studies using fMRI scans reveal that when we see faces of people from our “in-group,” the medial prefrontal cortex activates, the same region involved in self-referential thinking. When we see “out-group” faces, this activation diminishes. We literally process “others” as less like ourselves at the neural level.
This isn’t just about faces. MIT researcher Rebecca Saxe identified the temporoparietal junction as crucial for theory of mind, the capacity to understand that others have thoughts different from our own. Related imaging work shows these mentalizing regions go quiet when we think about people we’ve dehumanized. The homeless person on the street, the enemy combatant, the criminal: our brains literally struggle to see them as fully human.
By adulthood, these categories harden into biases. Psychologist Daniel Kahneman famously distinguished between System 1 (fast, intuitive, emotional) and System 2 (slow, deliberate, rational). Profiling lives squarely in System 1. It’s the snap judgment. The mental shortcut. The “gut feeling.”
But here’s the rub: System 1 evolved when being wrong usually cost a wasted sprint or a bruised ego. Today, being wrong can mean wrongful imprisonment, medical misdiagnosis, or drone strikes on the wrong village.
As Nietzsche quipped:
“We are not thinking frogs, nor objectifying devices; we are historical beings.”
Our shortcuts carry history inside them. Which means profiling is not just about instinct, it’s about inheritance.
The Evolutionary Case for Profiling
Why did profiling evolve? Simple: it saved calories and time.
Hunter-gatherers couldn’t afford to analyze every sound in the forest. So the brain developed a heuristic: If it looks like a predator, treat it like one until proven otherwise. Better to run unnecessarily than to get eaten once.
This is what evolutionary psychiatrist Randolph Nesse called the “smoke detector principle.” A smoke detector that goes off when you burn toast is annoying, but better than one that fails when your house is actually on fire.
The energy economics are compelling. The human brain consumes about 20% of our body’s energy despite being only 2% of our body weight. Every cognitive process has a metabolic cost. Profiling reduces that cost dramatically. Instead of evaluating every individual as a blank slate, we apply templates. Friend or foe. Safe or dangerous. Us or them.
Anthropologist Robin Dunbar’s research on group sizes reveals another layer. Humans can maintain stable social relationships with about 150 people, the famous “Dunbar’s number.” Beyond that, we need shortcuts. We need categories. Ancient tribes that could quickly distinguish member from stranger survived. Those that couldn’t, didn’t.
Profiling was our smoke detector.
Even in early human societies, profiling aided group cohesion. “Are you from my tribe? Do you wear our markings? Speak our language?” These questions determined trust, trade, even survival. To profile was to protect.
The archaeological record supports this. Cave paintings from 30,000 years ago show clear distinctions between “us” (detailed, individualized figures) and “them” (generic, often threatening figures). The cognitive infrastructure for profiling is literally older than agriculture.
So far, so Darwinian. But evolution doesn’t select for moral goodness. It selects for reproductive success. And what got us here may not get us there.
The Moral Dead End
Here’s the problem: what works for survival at the individual level can become catastrophic at the societal level.
Profiling is efficient, but it’s also lazy. It reduces complex humans to categories. It sacrifices truth for speed. And in a pluralistic, globalized society, it tears at the fabric of democracy.
Think of Jim Crow laws. Think of “stop-and-frisk” policies. Think of caste hierarchies. Each is a systemized form of profiling. Each justified as “practical” in its day. Each later revealed as corrosive to justice and human dignity.
The numbers tell a damning story. In New York City’s stop-and-frisk program at its peak in 2011, 685,724 people were stopped. 87% were Black or Latino. 88% were innocent. The profile was clear: young men of color equals probable cause. The reality was mass violation of civil rights with minimal impact on crime.
Or consider medical profiling. Studies show Black patients are systematically undertreated for pain compared to white patients, based on false beliefs about biological differences. Women’s heart attack symptoms are dismissed as anxiety. These aren’t individual failures, they’re systemic profiles embedded in medical training and practice.
As philosopher Immanuel Kant wrote:
“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end.”
Profiling does the opposite. It turns people into probabilities. Means to an end. Numbers in a category. And once you’ve reduced a human to a category, it becomes easier to discriminate, exploit, or discard them.
That’s the moral dead end.
Lessons from History
History is full of examples where profiling metastasized from instinct into ideology.
• The Spanish Inquisition: suspicion based on ancestry and lineage, codified into law.
• Colonialism: “scientific racism” used to justify conquest and slavery.
• The Holocaust: entire populations profiled into extermination camps based on ethnicity and religion.
Let’s examine one case in detail: the Rwandan genocide of 1994. The distinction between Hutu and Tutsi was originally fluid, more about cattle ownership than ethnicity. Belgian colonizers formalized these categories, measuring nose widths and heights, issuing identity cards marked “Hutu” or “Tutsi.” This colonial profiling became genocidal profiling. In 100 days, 800,000 people died, killed by neighbors who had lived peacefully together for generations. The profile became the permission structure for mass murder.
Or consider the internment of Japanese Americans during World War II. 120,000 people, two-thirds of them U.S. citizens, imprisoned based solely on ancestry. The profile: Japanese heritage equals security threat. The reality: not a single case of espionage or sabotage was ever documented among the internees. The U.S. government later paid $1.6 billion in reparations, acknowledging the grave injustice.
In each case, profiling started as an instinct, “us vs. them”, and ended as a system of oppression. The shortcut hardened into structure. And once institutionalized, profiling becomes nearly impossible to dismantle without enormous moral reckoning.
As George Santayana warned:
“Those who cannot remember the past are condemned to repeat it.”
The danger isn’t just that we remember too little. It’s that our machines remember too much, and repeat our worst patterns at lightning speed.
AI as the Mirror: Profiling 2.0
Now we arrive at the uncomfortable part.
AI models are, in essence, profilers. They are statistical engines trained to detect patterns and make predictions. And in many cases, they’re better at it than we are.
That sounds good, until you remember what they’re trained on: us.
Give a model historical arrest records, and it will learn to associate certain demographics with crime, not because those groups are inherently criminal, but because policing practices have disproportionately targeted them. The model doesn’t know history. It knows math.
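To make the mechanism concrete, here is a toy sketch in Python with scikit-learn. Nothing in it corresponds to any real system; the data is synthetic and every variable name is invented for illustration. Two groups behave identically, but the historical arrest labels reflect heavier policing of one group, and a correlated “neighborhood” proxy carries that history into the model even though the protected attribute is never shown to it.

```python
# Toy illustration only: synthetic data, invented variable names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                              # protected attribute (0 or 1)
neighborhood = (group + rng.random(n) > 0.8).astype(int)   # proxy strongly correlated with group
behavior = rng.random(n)                                    # underlying behavior, identical across groups

# Historical labels: identical behavior, but group 1 was policed twice as heavily
arrest_chance = np.where(group == 1, 0.9, 0.45)
arrested = ((behavior > 0.7) & (rng.random(n) < arrest_chance)).astype(int)

# "Fairness through unawareness": train without the protected attribute
X = np.column_stack([neighborhood, behavior])
model = LogisticRegression().fit(X, arrested)
scores = model.predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(scores[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(scores[group == 1].mean(), 3))
# The gap persists: the model has learned the policing pattern, not the behavior.
```

Drop the neighborhood column and retrain, and the gap collapses, because the only remaining feature is distributed identically across the two groups. Real data is never that clean, which is the whole problem: the proxies are everywhere.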
The COMPAS algorithm, used to assess recidivism risk in U.S. courts, exemplifies this. ProPublica’s investigation found it was twice as likely to falsely flag Black defendants as future criminals compared to white defendants. The algorithm wasn’t programmed to be racist, it learned racism from the data. It profiled based on patterns that themselves were products of profiling.
Amazon discovered this the hard way with their hiring algorithm. Trained on ten years of resumes, the system taught itself that male candidates were preferable, penalizing resumes that included the word “women’s” (as in “women’s chess club captain”). They scrapped the project, but the lesson was clear: AI amplifies the biases in its training data.
This is why, in Joy Buolamwini and Timnit Gebru’s landmark Gender Shades study, commercial facial recognition systems misclassified darker-skinned women at rates as high as 34%, while error rates for lighter-skinned men stayed below 1%.
The machine didn’t “decide” to be racist. It just mirrored the data it was given. Profiling in. Profiling out.
So here’s the question: When AI profiles, are we looking at machine intelligence, or a mirror of human stupidity?
A Mirror Experiment
Imagine building two AI systems.
• System A is trained exclusively on biased historical data: arrest records, housing covenants, hiring practices.
• System B is trained on counterfactual data: what if marginalized groups had been treated equally, given equal opportunity, represented fairly?
System A will profile exactly as our society has. System B will profile differently, perhaps even more equitably.
This isn’t just theoretical. Researchers at MIT and Stanford have begun experimenting with “fairness through unawareness” and “counterfactual fairness” in machine learning. They’re essentially asking: What if we trained AI on the world we want, not the world we have?
The technical challenges are immense. How do you generate counterfactual data that’s realistic? How do you define fairness mathematically? Different definitions (demographic parity, equalized odds, calibration) often conflict with each other. There’s no universal answer.
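For concreteness, here is a minimal, hand-rolled sketch of the three criteria just named, in the same Python register as the earlier example. The function and variable names are my own, not from any standard library, and each criterion is reduced to a single gap between two groups.

```python
# Hand-rolled sketch of three common fairness criteria (illustrative names only).
import numpy as np

def fairness_report(y_true, y_pred, group):
    """y_true, y_pred are 0/1 arrays; group marks membership in one of two groups."""
    a, b = (group == 0), (group == 1)
    # Demographic parity: equal rates of positive predictions across groups
    dp_gap = abs(y_pred[a].mean() - y_pred[b].mean())
    # Equalized odds: equal true-positive and false-positive rates across groups
    tpr = lambda m: y_pred[m & (y_true == 1)].mean()
    fpr = lambda m: y_pred[m & (y_true == 0)].mean()
    eo_gap = max(abs(tpr(a) - tpr(b)), abs(fpr(a) - fpr(b)))
    # Calibration (predictive parity): among positive predictions, equal rates of true positives
    cal_gap = abs(y_true[a & (y_pred == 1)].mean() - y_true[b & (y_pred == 1)].mean())
    return {"demographic_parity_gap": dp_gap,
            "equalized_odds_gap": eo_gap,
            "calibration_gap": cal_gap}
```

When the two groups’ base rates differ, results by Kleinberg, Mullainathan, and Raghavan and by Chouldechova show that these criteria cannot all be satisfied at once except in degenerate cases. Closing one gap generally widens another, which is exactly why there is no universal answer.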
Now, which system is “smarter”? Which is “truer”?
The frightening answer is that System A is closer to our lived history. But System B might be closer to our aspirational morality.
That’s where AI becomes not just a tool, but a philosophical mirror. It can show us what we are, and what we might have been.
Can Profiling Ever Be Ethical?
Before we throw the baby out with the tribal bathwater, let’s admit: not all profiling is malignant.
Doctors profile symptoms. Meteorologists profile storm patterns. Cybersecurity teams profile network anomalies. In these cases, profiling saves lives.
A cardiologist who recognizes the profile of a heart attack (chest pain, shortness of breath, arm numbness) and acts quickly isn’t being prejudiced. They’re being professional. A TSA agent who profiles behavior (nervous glances, unusual sweat, inconsistent stories) rather than ethnicity might actually enhance security.
The key distinction seems to be this: ethical profiling targets behaviors and symptoms that are causally related to outcomes, while unethical profiling targets immutable characteristics that are merely correlated through historical injustice.
So maybe the real distinction is this: profiling is acceptable when patterns are predictive of physical reality, and unacceptable when patterns are predictive of social stereotypes.
The challenge is that the line between “physical” and “social” is blurrier than we think. Health outcomes correlate with zip codes. Job applications correlate with names. Reality and prejudice intertwine.
And unless we actively untangle them, AI won’t know the difference.
Reprogramming the Shortcut
So what do we do?
Here are three provocations:
1. Conscious Awareness of Bias
The first step is admitting profiling isn’t a glitch, it’s the default. Pretending to be “colorblind” or “neutral” is just another form of blindness. Awareness matters.
Harvard’s Implicit Association Test has been taken by millions, revealing unconscious biases people didn’t know they had. Symphony orchestras that adopted blind auditions (performers behind screens) increased female musicians from 5% to 35%. The profile of “male = better musician” only broke when the profiler couldn’t profile.
2. Counterfactual Training
What if we trained models not just on what happened, but on what should have happened? Could AI serve as a generator of alternative histories, helping us see where profiling led us astray?
Google’s “What-If Tool” lets developers probe machine learning models with counterfactual scenarios. Change one variable (race, gender, age) and see how the prediction changes. It’s a mirror that shows not just reflection, but refraction.
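You don’t need the tool itself to grasp the move. Below is a minimal sketch of the underlying idea, not the What-If Tool’s actual API: copy a record, change a single attribute, and compare the model’s two scores. The function name is invented, and the commented usage assumes the model and features from the earlier sketch.

```python
# Illustrative counterfactual probe (not the What-If Tool's API).
import numpy as np

def counterfactual_probe(model, row, feature_index, alternative_value):
    """Score one record and its counterfactual twin, which differ in a single feature."""
    twin = np.array(row, dtype=float)
    twin[feature_index] = alternative_value
    original = model.predict_proba(np.asarray(row).reshape(1, -1))[0, 1]
    altered = model.predict_proba(twin.reshape(1, -1))[0, 1]
    return original, altered, altered - original

# With the model and features from the earlier sketch, flipping only the
# neighborhood proxy shifts the score -- a red flag that the model is
# profiling on a proxy rather than on behavior:
# original, altered, delta = counterfactual_probe(model, X[0], 0, 1 - X[0, 0])
```

A large shift from a variable that should be irrelevant is the refraction in question: the model showing you what it actually learned.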
3. Moral Design Principles
Instead of simply building AI to be “accurate,” we must build it to be just. That means embedding ethical guardrails, not as afterthoughts, but as primary objectives.
The IEEE has proposed standards for algorithmic bias considerations. The EU’s AI Act mandates risk assessments for high-risk AI systems. These aren’t perfect, but they’re acknowledgments that accuracy without ethics is hollow.
Because accuracy without morality is just efficient injustice.
Philosophers in Conversation
Let’s hear from a few more wise voices.
Aristotle wrote that “man is by nature a political animal.” He meant we are social, but he also meant we are prone to drawing boundaries of belonging.
John Stuart Mill, defender of liberty, warned against the tyranny of the majority, a form of societal profiling where dissenters are crushed under the weight of “normal.”
And Martin Buber, the Jewish philosopher, urged us to see each other as Thou rather than It, as full beings, not categories.
Hannah Arendt, writing in the aftermath of the Holocaust, coined the phrase “the banality of evil.” She observed that great atrocities aren’t committed by monsters, but by ordinary people following ordinary profiles. The desk clerk who processes deportation papers. The algorithm that denies parole. Evil becomes banal when profiling becomes bureaucracy.
Simone de Beauvoir explored how women are profiled as “the Other,” defined not by what they are but by what they are not: men. This othering through profiling extends to every marginalized group, creating hierarchies of full humanity versus partial humanity.
Profiling is the death of the “Thou.” It’s the conversion of human into object.
And once you objectify, everything else follows.
The Shortcut and the Abyss
So, is profiling an evolutionary shortcut or a moral dead end?
The answer is yes. It’s both.
As a species, profiling got us here. It kept our ancestors alive. It gave us heuristics, categories, shortcuts. It saved time and energy.
But as a civilization, profiling might undo us. It justifies injustice. It ossifies prejudice. And now, encoded in our algorithms, it risks becoming permanent infrastructure, invisible but omnipresent.
We stand at an inflection point. We can let our machines crystallize our worst impulses, creating a digital caste system that would make past injustices look quaint. Or we can use this moment of technological transformation to finally transcend our tribal programming.
The choice isn’t between profiling and not profiling. We will always categorize; it’s how minds work. The choice is between conscious, examined, ethically grounded categorization and unconscious, reflexive, historically contaminated prejudice.
The philosopher Søren Kierkegaard once wrote:
“Once you label me, you negate me.”
That’s the heart of it. Profiling is labeling. Labeling is negation. And negation, when institutionalized, becomes oppression.
The evolutionary smoke detector is still ringing. But maybe it’s time we learned the difference between toast and fire.
And maybe, just maybe, AI, as our imperfect mirror, can help us see the categories we’ve outgrown, and the humanity we’ve yet to embrace.
