Which Is Worse, Evil or Stupidity?
Copyright: Sanjay Basu
Mirror Neurons, Mirror Minds, Week 1
“Against stupidity, the gods themselves contend in vain.” — Friedrich Schiller
Let me ask you something slightly uncomfortable.
If you had to pick one, evil or stupidity, to disappear from the face of the Earth, which would you choose?
Take a moment. Really think about it. One causes wars. The other gets people to vote for them.
Evil is seductive, deliberate, occasionally well-dressed. Stupidity is insidious, accidental, and wears a name badge at work.
The more you sit with it, the less obvious the answer becomes. And the more uncomfortable it feels.
That’s precisely where I want to take you today.
The Historical Weight of This Question
This isn’t a new philosophical dilemma. Thinkers have wrestled with this choice for millennia, though they rarely framed it so starkly. Socrates famously argued that no one does wrong willingly, that all evil stems from ignorance, making stupidity the root of all harm. But then came Aristotle, who distinguished between different types of ignorance, some more culpable than others.
Fast forward to the 20th century, and we see this debate playing out in real time across boardrooms, courtrooms, and war rooms. The Nuremberg trials forced us to confront an uncomfortable truth: most of history’s greatest atrocities weren’t committed by cackling villains, but by ordinary people following orders, checking boxes, and not asking questions.
Consider the 2008 financial crisis. Were the bankers who packaged toxic mortgages evil? Some, perhaps. But most were simply following incentive structures they didn’t fully understand, creating products they couldn’t properly explain, for a system whose complexity had outgrown anyone’s ability to comprehend it fully.
The result? Millions of homes lost, economies shattered, and a decade of recovery, all because smart people did stupid things at scale.
Why this question now?
Because we’ve entered an age where the worst decisions aren’t always being made by monsters in smoke-filled rooms. More often, they’re made by nice-enough people pushing buttons they don’t understand, signing off on things they didn’t read, and blindly trusting systems they couldn’t explain if you threatened them with a whiteboard.
It’s not the villain twirling a mustache anymore. It’s your neighbor forwarding that conspiracy theory. It’s the executive who doesn’t understand the model but approves its rollout anyway. It’s the policymaker who skips nuance like it’s a salad at a steakhouse.
We live in a time when stupidity, amplified by technology, may be doing more harm than active malice. And that, my friend, is a terrifying shift.
The Psychology of Modern Stupidity
Here’s what makes contemporary stupidity particularly dangerous: it’s often wrapped in the language of expertise. We’ve created elaborate systems of specialization where everyone knows their narrow slice, but no one sees the whole picture. The cardiac surgeon knows hearts but not healthcare economics. The AI researcher knows algorithms but not social psychology. The policymaker knows politics but not technology.
This fragmentation creates what researchers call “cognitive archipelagos”, islands of knowledge surrounded by vast oceans of ignorance. And in those dark waters, stupidity breeds.
Dr. David Dunning of the famous Dunning-Kruger effect puts it this way: “The problem isn’t that people don’t know things. It’s that they don’t know what they don’t know, and they’re increasingly confident about it.”
This confidence is amplified by what psychologists call “surrogate expertise”, the illusion that having access to information is the same as understanding it. Google something, skim a Wikipedia page, and suddenly you’re ready to debate epidemiologists about vaccine policy.
The Taxonomy of Dumb and Deadly
Let’s define our terms before we jump into the philosophical thunderdome.
Evil, in most moral frameworks, requires intent. It implies some degree of agency, a capacity to know that what one is doing is wrong, and doing it anyway. Evil can be sadistic, manipulative, or coldly utilitarian. But it knows.
Stupidity, however, is harder to nail down. It isn’t just a lack of IQ. Plenty of intelligent people do stupid things. As Carlo M. Cipolla put it in his wildly underrated essay The Basic Laws of Human Stupidity, stupidity is when someone causes harm to others, and often to themselves, without gaining anything in return.
It’s when you shoot yourself in the foot and then blame the gun.
It’s when you bring down the system you depend on because you didn’t feel like reading the manual.
Cipolla’s most famous line?
“A stupid person is more dangerous than a bandit.”
Because you can predict a bandit. You know what they want. You can bargain with evil. But with stupidity, there’s no logic. No strategy. Just chaos in khakis.
Cipolla’s Five Laws Revisited
Let’s dig deeper into Cipolla’s framework, because it’s eerily relevant to our current moment:
Law 1: Everyone underestimates the number of stupid people in circulation. Think about it: we assume competence as a default. We trust that pilots know how to fly, doctors know medicine, and engineers know engineering. But what happens when complexity outpaces competence?
Law 2: The probability of a person being stupid is independent of any other characteristic of that person. Intelligence, education, wealth, power, none of these inoculate against stupidity. Some of the most spectacular failures in history have come from the very brightest minds making catastrophically dumb decisions.
Law 3: A stupid person causes losses to others while gaining nothing themselves. This is the purest form of destructive behavior, harm without purpose, destruction without design.
Law 4: Non-stupid people underestimate the damage stupid people can do. We plan for malice but not incompetence. We build safeguards against evil but leave ourselves vulnerable to stupidity.
Law 5: Stupid people are the most dangerous type of people. More dangerous than bandits, who at least have rational motivations you can work with.
Now consider how these laws apply to algorithmic decision-making systems. What happens when we encode stupidity into code and scale it across millions of decisions?
Nazi Germany and the Banality of Evil (and Stupidity)
Let’s take one of the darkest chapters in modern history, the Nazi regime.
Most philosophical discussions about evil begin and end here. But if you dig deeper, as Hannah Arendt did in Eichmann in Jerusalem, you find something more disturbing than sadism: banality.
Arendt famously coined the phrase “the banality of evil” to describe how Adolf Eichmann, the architect of the Holocaust’s logistics, didn’t appear to be a monster. He was a paper-pusher. A bureaucrat. An obedient man doing what he thought was his job.
He wasn’t cackling with glee. He was filing transport papers.
“The sad truth,” wrote Arendt, “is that most evil is done by people who never make up their minds to be good or evil.”
Now here’s where things get tricky.
Was Eichmann evil? Certainly.
But was he also stupid, in that Cipolla sense? Acting without understanding, blind to consequence, numb to humanity?
That blend, of mediocrity, ignorance, and blind obedience, is where stupidity meets evil and becomes something worse. Something systemic.
And when you add scale? When a million Eichmanns log into systems they don’t understand and just click “approve”?
You get modernity.
The Eichmann Algorithm
Here’s a thought experiment that should keep you awake tonight: What if we could digitize Eichmann’s decision-making process? Feed his logic patterns, his risk calculations, his moral blind spots into an algorithm, then scale it across every bureaucratic decision in a modern state?
We’re already doing this, of course. Every automated system that flags welfare fraud, approves loans, or screens job applications is, in essence, encoding someone’s judgment about who deserves what. And often, that someone is just trying to optimize for efficiency, not justice.
The terrifying part isn’t that these systems might be evil. It’s that they might be stupid, optimizing for the wrong things, missing crucial context, perpetuating biases their creators never even recognized.
Consider predictive policing algorithms that send more cops to neighborhoods where more arrests have historically been made, creating a feedback loop that has nothing to do with actual crime rates and everything to do with historical policing patterns. No one designed this to be racist. But stupid? Absolutely.
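To make that loop concrete, here is a deliberately tiny simulation, purely my own illustration rather than any deployed system: two neighborhoods with identical true crime rates, where patrols are allocated wherever past arrests were highest and new arrests follow the patrols.

```python
# Toy illustration (not any real system): two neighborhoods with the SAME
# underlying crime rate, but different historical arrest counts. Patrols go
# wherever past arrests were highest, so the historical skew never washes out.
true_crime_rate = {"A": 0.05, "B": 0.05}     # identical ground truth
arrest_history = {"A": 120, "B": 80}         # the only difference: past policing
TOTAL_PATROLS = 100

for year in range(1, 11):
    total = sum(arrest_history.values())
    patrols = {k: TOTAL_PATROLS * v / total for k, v in arrest_history.items()}

    # Arrests track where officers are sent, not where crime actually differs.
    for k in arrest_history:
        arrest_history[k] += patrols[k] * true_crime_rate[k] * 10

    share_a = patrols["A"] / TOTAL_PATROLS
    print(f"Year {year}: neighborhood A receives {share_a:.0%} of patrols")
```

Run it and the 60/40 split stays frozen for as long as you like. The data keeps “confirming” a difference that was never there.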
The Architecture of Institutional Stupidity
Organizations, by their very nature, tend to amplify stupidity while dampening evil. Evil requires coordination, conspiracy, intent, all things that bureaucracies are notoriously bad at. But stupidity? Stupidity scales beautifully through institutional processes.
Political scientist James C. Scott calls this “metis” versus “techne”, practical wisdom versus technical knowledge. Modern institutions excel at techne but systematically destroy metis. They can follow procedures but can’t adapt to context. They can optimize metrics but can’t see the larger picture.
This is why every tech company eventually becomes a caricature of itself, optimizing so hard for engagement that they forget about human wellbeing, or moving so fast they break not just things, but entire social systems.
AI as Our Reflective Pool
So what does this have to do with AI?
Everything.
Because we’re now building systems that simulate human cognition, at scale. Systems that can reason (sort of), interpret (most of the time), and respond (always confidently, even when wrong).
And we are putting these systems in decision loops that affect healthcare, hiring, policing, warfare, finance, and justice. Often without understanding them. Often without questioning their outputs.
Which is fine, until the system hallucinates. Until it encodes bias. Until it enforces rules that no one can trace. And the humans nod along, eyes glazed over, saying, “Well, the AI said so.”
We are building what philosopher Luciano Floridi calls synthetic stupidity: the automation of thoughtless obedience.
“The real danger is not that AI becomes evil,” says Floridi. “It’s that we stop thinking.”
Now that’s a line worth taping to your fridge.
The Amplification Effect
What makes AI-mediated stupidity particularly dangerous is its reach and persistence. A human making a stupid decision affects a limited number of people for a limited time. But an algorithmic system making stupid decisions can affect millions of people indefinitely, until someone notices and fixes it, if anyone bothers to look.
Consider the case of automated content moderation systems that consistently flag discussions about mental health as “self-harm content” while letting through actual harassment. Or resume-screening algorithms that systematically discriminate against women because they were trained on historical hiring data from male-dominated fields.
These aren’t evil systems. They’re just faithfully reproducing the patterns they found in their training data, without any understanding of context, fairness, or human dignity. They’re the digital equivalent of Eichmann, just following orders, just processing data, just doing their job.
The Feedback Loop of Automated Stupidity
Here’s where it gets really scary: AI systems learn from the decisions they make. If an AI system makes stupid decisions, and those decisions generate data that feeds back into the system’s learning process, you get what researchers call “algorithmic amplification”, stupidity that gets more confident and more pervasive over time.
Imagine a hiring algorithm that slightly favors candidates from certain schools. Over time, more graduates from those schools get hired. The algorithm sees this as validation that those schools produce better candidates. The bias amplifies. Eventually, coming from the “wrong” school becomes an almost insurmountable barrier, not because of any real difference in capability, but because the system has learned to be stupid in a very specific, self-reinforcing way.
This is stupidity with a memory, stupidity that can evolve and adapt, stupidity that becomes more entrenched the longer it runs.
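If you want to see how that memory works, here is a minimal sketch of the hiring loop described above. The “school bonus” feature, the crude re-fitting step, and every parameter are my own illustrative assumptions, not the internals of any real applicant-tracking system.

```python
import random

random.seed(1)

# Toy sketch of the self-reinforcing hiring loop. Assumption (mine): after each
# cycle the "school bonus" is re-fit from the hire rates the model itself produced.
N_CANDIDATES = 10_000
HIRE_FRACTION = 0.20
LEARNING_RATE = 2.0
school_bonus = 0.1                          # small initial preference

for cycle in range(1, 11):
    candidates = []
    for _ in range(N_CANDIDATES):
        favored = random.random() < 0.5      # school is independent of skill
        skill = random.gauss(0, 1)           # the thing we actually care about
        score = skill + (school_bonus if favored else 0.0)
        candidates.append((score, favored))

    candidates.sort(reverse=True)
    hires = candidates[: int(N_CANDIDATES * HIRE_FRACTION)]
    favored_share = sum(1 for _, f in hires if f) / len(hires)

    # "Validation": favored-school candidates got hired more often, so the
    # retrained model weights the school feature even more heavily next cycle.
    school_bonus += LEARNING_RATE * (favored_share - 0.5)
    print(f"Cycle {cycle}: favored-school share of hires {favored_share:.0%}, "
          f"school bonus now {school_bonus:.2f}")
```

Under these toy assumptions, a 0.1-point nudge grows into a dominant factor within a handful of cycles, which is exactly the entrenchment described above.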
Stupidity at Scale = Existential Risk
Imagine this:
You train a model to flag “suspicious activity.” The training data is flawed. It flags more people from a certain demographic. The model learns that behavior as “normal.” The oversight committee rubber-stamps it. The software rolls out. A thousand innocent lives are derailed.
No one meant to be evil. Everyone just… followed procedure.
That’s stupidity. Not in the colloquial sense of being dumb, but in the philosophical sense of abandoning judgment. Of outsourcing cognition. Of refusing to ask: “Should we?”
And that’s far more dangerous than a mustache-twirling villain.
Because it wears a badge. It has quarterly OKRs. And it gets promoted.
The Economics of Stupidity vs. Evil
Here’s another angle to consider: evil, historically, has been expensive. Conspiracies require coordination. Malice requires energy. Oppression requires constant vigilance. Evil is, economically speaking, inefficient.
But stupidity? Stupidity is cheap. In fact, it often saves money in the short term. Why spend time and resources understanding a complex system when you can just follow the process? Why hire expensive experts when you can rely on automated tools?
This economic incentive toward stupidity is what makes it so pervasive in modern organizations. Evil requires investment; stupidity just requires negligence. And in a world where quarterly earnings matter more than long-term consequences, negligence is often the more attractive option.
A Thought Experiment with AI
Let’s simulate this.
Imagine two AI agents:

- Agent A is trained on data with encoded biases but is programmed to always follow rules.
- Agent B is slightly more error-prone, but it is trained to question outcomes and ask why.
Over time, which one causes more harm?
Counterintuitively, Agent A, the obedient one, may become the villain. Because it scales harm without friction.
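Here is one way to run that simulation as a toy. Every number below is an assumption chosen purely for illustration, not a measurement from any real system.

```python
import random

random.seed(42)

# Toy model of the two agents above. A biased rule wrongly denies people from
# one group; Agent A applies it obediently, Agent B questions denials but also
# makes its own unforced errors.
N = 100_000
GROUP_X_RATE = 0.30       # share of cases involving the affected group
RULE_ERROR = 0.20         # the encoded bias: wrongful denials for that group
B_QUESTION_RATE = 0.60    # how often Agent B escalates a denial and gets it fixed
B_SLIP_RATE = 0.01        # Agent B's own slippage on otherwise-fine cases

harm_a = harm_b = 0
for _ in range(N):
    wrongful_denial = random.random() < GROUP_X_RATE and random.random() < RULE_ERROR

    if wrongful_denial:
        harm_a += 1                               # Agent A never pushes back
        if random.random() > B_QUESTION_RATE:     # Agent B sometimes fails to ask why
            harm_b += 1
    elif random.random() < B_SLIP_RATE:           # Agent B's error-proneness
        harm_b += 1

print(f"Agent A (obedient, rule-following): {harm_a:,} people harmed")
print(f"Agent B (error-prone but questioning): {harm_b:,} people harmed")
```

Under these made-up parameters, the obedient agent quietly racks up roughly twice the harm of its noisier, questioning counterpart. Friction, it turns out, is a feature.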
This is not hypothetical. Researchers at Stanford and Berkeley have already shown that LLMs can adopt toxic patterns from biased training data. And they do so with perfect grammar.
It’s like giving a toddler a chainsaw and teaching them Shakespeare.
The Transparency Trap
Here’s a counterintuitive insight: making AI systems more transparent doesn’t necessarily make them less stupid. In fact, it might make them more dangerously stupid.
When humans can see exactly how an AI system makes decisions, they often develop false confidence in those decisions, even when the logic is fundamentally flawed. It’s the digital equivalent of being impressed by someone’s confidence while ignoring the fact that they’re confidently wrong.
Moreover, transparency without understanding can lead to what researchers call “automation complacency”, the tendency to stop paying attention once you have some visibility into the process. You see the gears turning, so you assume the machine must be working correctly.
This is why some of the most dangerous AI deployments are the ones that seem most responsible, they have dashboards, metrics, explanations, and oversight committees. All the trappings of careful governance, but none of the actual wisdom to use these tools effectively.
The Link Between Free Will, Agency, and Evil
One could argue evil requires agency, a conscious decision to harm. Stupidity often emerges from the absence of agency, or the abdication of it.
That’s what makes it worse in many modern contexts.
Evil can be resisted. But stupidity, especially institutional stupidity, just… persists. It gets automated. Codified. Scaled.
And soon, it doesn’t matter if the original intent was good. The system runs on rails, and no one remembers how to pull the brakes.
The Philosophy of Responsibility in an Automated Age
This raises profound questions about moral responsibility. If an AI system makes a harmful decision, who’s to blame? The programmer who wrote the code? The data scientist who trained the model? The product manager who shipped it? The executive who approved it? The user who deployed it?
Traditional ethical frameworks struggle with distributed responsibility across complex systems. We’re used to thinking about individual agents making individual choices. But what happens when the “choice” is made by an emergent property of a system that no single person fully understands?
This isn’t just a philosophical puzzle, it’s a practical problem that courts, regulators, and insurance companies are grappling with right now. When an autonomous vehicle hits a pedestrian, when an AI diagnostic tool misses a cancer, when an algorithmic trading system crashes a market, who bears responsibility?
The danger is that this complexity becomes an excuse for no one to be responsible. Everyone can point to someone else in the chain, and the harm just becomes an externality, a cost of doing business in an automated world.
A Few Philosophers Weigh In
Arendt wasn’t alone in warning us.
Nietzsche, who saw the herd instinct as both safety and suffocation, wrote:
“Sometimes people don’t want to hear the truth because they don’t want their illusions destroyed.”
Illusions, like automated neutrality. Or algorithmic fairness. Or that engineers are apolitical.
Meanwhile, Bertrand Russell, whose own logic rebuilt philosophy from the ground up, had a surprisingly curt take:
“Most people would rather die than think. And many do.”
Oof.
These are not critiques of intelligence. They are critiques of unthinking, of the human tendency to not reflect when it’s inconvenient.
And if AI systems mimic us, then the real fear isn’t that they become evil. It’s that they become as lazy as we are.
Contemporary Voices on Digital Stupidity
Modern thinkers have built on these foundations with specific attention to our technological moment. Computer scientist Cal Newport warns about what he calls “pseudo-work”, the busy-ness that feels productive but accomplishes nothing meaningful. AI systems, he argues, are particularly good at generating pseudo-work: endless reports, optimized metrics, and automated processes that create an illusion of progress while solving no real problems.
Philosopher Shannon Vallor, in her work on digital ethics, describes how technology can either cultivate or corrupt human virtues. AI systems, she argues, tend toward corruption when they’re designed to replace human judgment rather than augment it. The result is what she calls “moral deskilling”, the gradual loss of our capacity to make ethical decisions as we outsource more choices to automated systems.
Perhaps most provocatively, historian Yuval Noah Harari suggests that we’re entering an age of “dataism,” where information processing becomes more valued than human experience or wisdom. In this worldview, the stupid decision isn’t one that harms humans, it’s one that disrupts data flow.
The Rabbit Hole Gets Deeper: Can AI Be Stupid?
Philosophically speaking, “stupidity” implies a failure to act rationally in a given context. But AI doesn’t have context unless we give it one.
So is a model that makes dumb decisions “stupid”? Or is it just ungrounded?
That’s where the mirror gets interesting.
If we prompt an LLM to justify a clearly immoral act and it does so with coherent reasoning, is it evil?
If it simply recites what it’s seen online, is it stupid?
Or are we just looking at our own reflection, our collective corpus of actions and justifications?
That’s why LLMs can’t be separated from human philosophy. They are trained on us. They’re stitched together from our writings, our biases, our blind spots.
So maybe the better question is:
Are we teaching models to be evil, or just to be as stupid as we are?
The Training Data Problem
This question becomes more urgent when you consider what AI systems actually learn from. Large language models are trained on vast collections of human-generated text: social media posts, news articles, forums, books, academic papers, and everything in between.
But here’s the thing: most human-generated text isn’t particularly wise. It’s not evil, either, it’s just mediocre. Rushed thoughts, half-formed arguments, viral misinformation, outdated assumptions, and plain old ignorance. We’re essentially training our most powerful AI systems on the largest collection of human stupidity ever assembled.
The result is systems that can generate remarkably fluent text about almost any topic, but with no deeper understanding, no ability to distinguish truth from falsehood, insight from nonsense. They’re mirrors, but distorted ones, reflecting back amplified versions of our collective intellectual weaknesses.
The Curation Crisis
Even when we try to be more selective about training data, we run into what researchers call “the curation crisis.” Who decides what counts as high-quality information? What implicit biases do they bring? What perspectives get excluded?
The people curating AI training datasets are typically young, educated, urban, and Western. They’re optimizing for coherence and fluency, not wisdom or ethical insight. The result is systems that sound sophisticated but may be deeply ignorant about the lived experiences of most of the world’s population.
This isn’t necessarily evil, the curators aren’t trying to harm anyone. But it might be stupid, in the sense of creating systems that cause harm without understanding or benefiting anyone.
So What Can We Do?
Let me offer three uncomfortable but necessary prescriptions:
1. Revalue Thinking. We need to stop worshipping speed. Reflection is not a bug, it’s the last thing keeping us from the abyss.
2. Train Models on Counter-Stupidity. Imagine fine-tuning an LLM on great philosophical debates, moral case studies, and logic puzzles instead of product reviews and Reddit threads. What would that mirror reflect back?
3. Design for Questioning, Not Obedience. Instead of building AI to always give an answer, what if we trained it to ask better questions? What if our models mirrored Socrates, not Siri?
Practical Steps for Organizations
Beyond these philosophical prescriptions, here are some concrete measures organizations can take to reduce the amplification of stupidity through AI systems:
Implement “Red Team” Testing: Create dedicated teams whose job is to find stupid failure modes in AI systems before deployment. These teams should include people from different backgrounds, disciplines, and perspectives, not just technical experts.
Build in Uncertainty Quantification: AI systems should be explicit about what they don’t know. A system that says “I’m 60% confident in this recommendation” is less dangerous than one that presents every output with equal authority (a minimal sketch of this idea follows the list).
Create Human-in-the-Loop Requirements: For high-stakes decisions, require meaningful human review, not just rubber-stamping, but actual engagement with the system’s reasoning and outputs.
Establish Feedback Loops: Create mechanisms for people affected by AI decisions to challenge those decisions and have their challenges meaningfully reviewed by humans who understand both the system and its context.
Regular Audits for Stupidity: Just as we audit financial systems for compliance, we should audit AI systems for stupidity, looking for patterns of harm that serve no purpose, biases that help no one, and processes that have outlived their usefulness.
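On the uncertainty-quantification point, here is a minimal sketch of what “explicit about what it doesn’t know” can look like in practice. The names, threshold, and structure are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: instead of returning a bare verdict, the system reports its confidence
# and abstains (routes to a human) below a floor. All values are assumptions.

CONFIDENCE_FLOOR = 0.75   # below this, a human must decide

@dataclass
class Decision:
    recommendation: Optional[str]   # None means "escalate to human review"
    confidence: float
    rationale: str

def recommend(label: str, model_probability: float) -> Decision:
    """Wrap a raw model score in an explicit, hedged decision."""
    if model_probability < CONFIDENCE_FLOOR:
        return Decision(
            recommendation=None,
            confidence=model_probability,
            rationale=f"Only {model_probability:.0%} confident; routing to a human reviewer.",
        )
    return Decision(
        recommendation=label,
        confidence=model_probability,
        rationale=f"{model_probability:.0%} confident, above the {CONFIDENCE_FLOOR:.0%} floor.",
    )

print(recommend("approve_loan", 0.60))   # escalated, not silently approved
print(recommend("approve_loan", 0.92))   # recommendation with stated confidence
```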
The Educational Imperative
Perhaps most importantly, we need to fundamentally rethink education for an AI-mediated world. Traditional education focuses on transmitting information and teaching skills. But in a world where information is abundant and many skills can be automated, we need to focus on developing judgment, wisdom, and the capacity for ethical reasoning.
This means teaching people not just how to use AI tools, but when not to use them. When to trust automated systems and when to insist on human judgment. How to recognize when a process has become stupid, even if it appears to be working efficiently.
We need citizens who can ask better questions, not just accept better answers.
Choose Your Demon
So back to our original question.
Which is worse, evil or stupidity?
Maybe that’s the wrong dichotomy. Maybe the real problem is that the boundary between them is eroding.
We’ve created systems that can act at scale without understanding. That can enforce policy without context. That can perpetuate harm without noticing.
And we’ve done it not because we’re evil, but because we’re tired, busy, under pressure, chasing KPIs, meeting deadlines, and skipping the hard work of thinking.
If evil is the devil we know, then stupidity is the one we ignore. And in an age of automation, that makes it far more dangerous.
The Choice We Face
The question isn’t really which is worse, evil or stupidity. The question is which we’re more likely to accept, to normalize, to build into our systems and institutions.
Evil shocks us. It motivates resistance, reform, revolution. But stupidity? Stupidity is comfortable. It’s familiar. It’s just the way things are.
That comfort, that familiarity, is what makes it so dangerous. We can fight monsters, but how do you fight mundane mediocrity armed with artificial intelligence?
The answer, I think, lies not in choosing between evils, but in choosing to think. To question. To insist on wisdom even when efficiency would be easier.
Let me end with a quote from Primo Levi, a man who survived one of history’s darkest experiments in systemic evil:
“Monsters exist, but they are too few in number to be truly dangerous. More dangerous are the common men… ready to believe and to act without asking questions.”
May we build no machines in their image.
Epilogue: The Mirror Cracks
As I finish writing this, I’m struck by a final irony. This essay about the dangers of automated stupidity may well be read by AI systems in the future, incorporated into their training data, and used to generate countless variations on these themes.
Will those future systems understand the deeper meaning, the urgent warning embedded in these words? Or will they simply remix the surface patterns, generating sophisticated-sounding but ultimately hollow discussions about philosophy and technology?
That outcome, eloquent emptiness, might be the perfect synthesis of evil and stupidity: systems that sound wise but understand nothing, that speak fluently about human values while remaining fundamentally alien to human experience.
The mirror is cracking. The question is whether we’ll see ourselves clearly in the fragments, or just another distorted reflection of what we’ve always been: brilliantly stupid creatures, stumbling toward an uncertain future, armed with tools too powerful for our wisdom and too seductive to put down.
But perhaps that recognition is itself a form of wisdom. Perhaps acknowledging our limitations is the first step toward transcending them.
Or perhaps that’s just another comfortable illusion.
The gods themselves contend in vain, but at least they contend. The question is: will we?