What If Bias Isn't the Enemy?

Why the Future of AI Is About Management, Not Purity

Copyright: Sanjay Basu


Bias Can’t Be Deleted

We keep talking about removing bias from AI.

Here’s a radical idea: 

Maybe that’s impossible, and maybe that’s okay.

Yes, bias is real. Yes, it causes harm. And yes, we should take it seriously. But the idea that we can “eliminate” bias entirely, cleanly, surgically, once and for all, might be one of the biggest myths we’re still clinging to in tech.

Why? Because bias isn’t an exception to human systems. It’s baked in. And if we keep pretending it’s something we can delete like a typo in a line of code, we’re not building trustworthy AI. We’re just building dangerous illusions.

Let’s talk about what we can do instead.

From Statistical Fairness to Societal Fractures

Bias has always been part of the algorithmic conversation, but lately, it feels like it’s stepped into the spotlight. And for good reason.

From automated resume filters that discard candidates with “ethnic-sounding” names, to criminal sentencing tools that disproportionately flag Black defendants as high-risk, we’ve seen what happens when systems inherit the sins of the past without any self-awareness.

Read my previous post in the Trustworthy AI series.

The cultural response has been predictable. Outrage, then urgency, then… simplicity.

“Fix the data.”

“Debias the model.”

“Ban biased AI.”

These sound good. They feel good. But they also misunderstand the very nature of the problem. Bias isn’t just in the code. It’s in the data we feed it. The labels we assign. The proxies we use. The questions we ask. Even the problems we choose to solve.

Bias is social. Cultural. Historical. You don’t “fix” that with a spreadsheet.

The Myth of Clean AI

Let’s unpack this in layers.

1. Bias Isn’t Just a Flaw. It’s a Feature of Reality

Humans are biased. We stereotype. We generalize. We rely on heuristics and mental shortcuts because the world is too complex to process in full fidelity. That’s not always a failure. Yes, you heard me right. Sometimes it’s a survival mechanism.

But when we try to build systems that reflect human decisions at scale, those same shortcuts turn toxic.

The trouble is, bias in human judgment is often invisible. It lives in gut feelings, institutional norms, and quiet hunches. Once we try to formalize it, to encode it into systems, we surface it. Suddenly, the bias is there for all to see. And that makes us uncomfortable.

But instead of facing that discomfort, we rush to purge the system. Scrub the data, tweak the outputs, patch the fairness metrics. And declare the problem solved.

It rarely is.

2. The Fallacy of Fairness Metrics

Engineers love optimization. So naturally, we’ve created dozens of fairness metrics to “measure bias”: equal opportunity, demographic parity, calibration, and so on.

The catch? You can’t satisfy them all at once. They often conflict. You optimize for one kind of fairness and end up violating another. It’s like squeezing a balloon. Push in one place, and it bulges elsewhere.

What’s worse… choosing which metric to optimize isn’t a technical decision. It’s a moral one. Do we prioritize false positives or false negatives? Group-level outcomes or individual fairness? Equity or equality?

There’s no clean answer. And yet we pretend there is.
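The conflict isn’t just philosophical; it shows up in the arithmetic. Here’s a toy sketch, with entirely hypothetical numbers, of why a perfectly accurate model can satisfy equal opportunity (equal true-positive rates across groups) while violating demographic parity (equal selection rates) whenever the groups’ base rates differ:

```python
def selection_rate(preds):
    """Fraction of the group that gets a positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among the truly qualified, the fraction the model selects."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Hypothetical: Group A has 6 of 10 qualified; Group B has 3 of 10.
labels_a = [1] * 6 + [0] * 4
labels_b = [1] * 3 + [0] * 7

# A perfectly accurate model predicts exactly the true labels.
preds_a, preds_b = labels_a[:], labels_b[:]

# Equal opportunity holds: TPR is 1.0 for both groups...
assert true_positive_rate(preds_a, labels_a) == 1.0
assert true_positive_rate(preds_b, labels_b) == 1.0

# ...but demographic parity fails: selection rates are 0.6 vs 0.3.
print(selection_rate(preds_a), selection_rate(preds_b))  # 0.6 0.3
```

Equalizing the selection rates would force the model to reject qualified people in one group or accept unqualified people in the other, which is exactly the balloon-squeezing trade-off.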

3. The Mirage of Debiased Data

“Just clean the data,” they say. But data doesn’t exist in a vacuum. It’s a snapshot of human behavior. It is messy, context-laden, and often the result of biased historical policies.

You can filter, mask, and balance until the dataset looks “fair.” But unless you understand where that data came from, how it was labeled, and what’s missing, you’re not removing bias, you’re just hiding it.

It’s like scrubbing a crime scene without knowing what happened. You might remove the blood, but you’re not solving the case.
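A tiny illustration, on entirely made-up records, of how “scrubbing” the protected attribute can leave the bias intact when another feature acts as a proxy for it:

```python
from collections import defaultdict

# Hypothetical records: (group, zip_code, historical_label).
# Zip code is a near-perfect proxy for group membership here.
records = [
    ("A", "10001", 1), ("A", "10001", 1), ("A", "10001", 1), ("A", "10002", 0),
    ("B", "20001", 0), ("B", "20001", 0), ("B", "20001", 0), ("B", "20002", 1),
]

# "Debias" the dataset by dropping the group column.
scrubbed = [(zip_code, label) for _, zip_code, label in records]

def majority_label_by(key_values):
    """Majority historical label per key (a stand-in for what a
    simple model would learn from that feature alone)."""
    buckets = defaultdict(list)
    for key, label in key_values:
        buckets[key].append(label)
    return {k: round(sum(v) / len(v)) for k, v in buckets.items()}

by_zip = majority_label_by(scrubbed)
by_group = majority_label_by([(g, y) for g, _, y in records])

# The zip-code column alone reproduces the historical group skew:
print(by_zip)    # {'10001': 1, '10002': 0, '20001': 0, '20002': 1}
print(by_group)  # {'A': 1, 'B': 0}
```

The group column is gone, but anything trained on the scrubbed data still learns the same skewed pattern through the proxy. That is hidden bias, not removed bias.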

Maybe We Don’t Want Bias-Free AI

Here’s a spicy take: maybe bias-free AI isn’t even desirable.

Bias, in the technical sense, is about pattern recognition. And patterns, even imperfect ones, can be valuable. If we strip away all variability, all prior knowledge, all context, we’re not left with fairness. We’re left with uselessness.

Imagine a spam filter that refuses to “discriminate.” It treats every message equally, phishing scam or wedding invite. Not very helpful, is it?

Or a hiring tool that ignores all demographic signals, but also ignores experience, education, or industry background, for fear of correlated bias. Now it’s just picking names from a hat.

The goal isn’t to eliminate all bias. It’s to identify which biases are unjust, which are systemic, and which are fixable. It’s to design systems that are aware of their own imperfections, and are built to monitor, audit, and adapt.

The Role of Frameworks: ISO 42001 and NIST AI RMF

So where does that leave us?

Not in despair. But in discipline.

That’s where governance frameworks like ISO 42001 and the NIST AI Risk Management Framework come in. They don’t promise magic. They promise maturity.

ISO 42001

This international standard helps organizations build an AI management system, like ISO 27001 does for information security. It forces teams to document decisions, define responsibilities, and continuously improve their processes.

Most importantly? It doesn’t assume ethics or fairness will emerge naturally. It mandates a system for monitoring them.

NIST AI RMF

This U.S.-born framework breaks AI governance into four clear functions: Govern, Map, Measure, and Manage. In the context of bias, this means:

  • GOVERN: Set policies and roles for addressing bias.
  • MAP: Understand how and where bias might occur in your context.
  • MEASURE: Choose fairness metrics — but acknowledge trade-offs.
  • MANAGE: Monitor impact, adapt to real-world feedback, and escalate failures.
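In the MEASURE and MANAGE spirit, here’s a minimal sketch of one post-deployment check: compare selection rates across groups and escalate when their ratio falls below a chosen threshold. The 0.8 cutoff below (the “four-fifths rule” from U.S. employment guidance) is just one convention, and the numbers are hypothetical; picking the threshold is itself a policy decision, not a technical one.

```python
def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

def needs_escalation(rates, threshold=0.8):
    """Flag the system for review when observed disparity
    exceeds the policy threshold."""
    return disparate_impact_ratio(rates) < threshold

# Hypothetical monitoring data from a deployed system.
observed = {"group_a": 0.50, "group_b": 0.35}

print(needs_escalation(observed))  # True: 0.35 / 0.50 = 0.7 < 0.8
```

A check this simple won’t tell you why the disparity exists, but that’s the point of the framework: it guarantees someone is obligated to look.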

Together, these frameworks offer what ethics pledges never can: accountability mechanisms.

They don’t erase bias. But they ensure it’s not ignored.

Fire Is Dangerous, But We Still Build Kitchens

Bias is like fire.

Uncontrolled, it’s destructive. It burns. It harms. But properly managed, it cooks our food, heats our homes, and powers our engines.

We don’t outlaw fire. We regulate it. We use fire alarms, smoke detectors, and building codes. We train fire marshals. We hold people accountable when they misuse it.

Bias is the same. You don’t “eliminate” it. You contain it, monitor it, and design systems that live responsibly with its heat.

Toward Honest Systems

The fantasy of bias-free AI is seductive. It gives us a villain to defeat, a finish line to reach, a box to check.

But real progress means letting go of fantasies. We don’t need perfect systems. We need honest ones. Systems that admit their blind spots. That ask hard questions. That invite inspection instead of hiding behind metrics.

Managing bias isn’t failure. It’s maturity. It’s what grown-up technology looks like.

Because the goal isn’t to pretend we’ve solved the oldest problems in human judgment. The goal is to face them. Clearly, collectively, and with enough humility to keep learning as we go.

So next time someone says, “We’ve removed bias from our system,” smile politely. Then ask:

“Or have you just learned how to ignore it better?”


Top Five: 5 Questions to Ask Before Declaring Your AI “Bias-Free”

1. Bias-free according to whom?

Whose definition of fairness are you using? And who was in the room when you made that call?

2. What trade-offs did you make, and did you document them?

Fairness metrics often conflict. Which one did you prioritize? Why? And who might lose as a result?

3. Did your training data reflect reality, or just repeat history?

If the past was biased, and your model learns from the past… you do the math.

4. Have you tested your system in the wild — on real people, at real scale?

Lab “fairness” often dissolves when exposed to real-world messiness. Did you monitor impact post-deployment?

5. Can a non-technical stakeholder understand why your model makes decisions?

If the answer’s “no,” you haven’t achieved fairness — you’ve just obscured the power.


This piece is meant to provoke thought and ask questions. I don’t have the solution for how to manipulate inherently biased data to train a relatively fair model. I have ideas, but I prefer to discuss them.
