Who Owns AI Failures?
Redefining Accountability in the Algorithmic Age
Who Do You Sue When the Machine Says No?
If an AI system denies you a loan, misdiagnoses your illness, or gets you arrested, who do you sue?
The developer who built the model?
The company that deployed it?
The regulator who failed to intervene?
The machine itself?
It’s complicated. And it’s getting more so by the day.
As algorithms take on more influence, with less transparency and often no clear chain of command, the question of accountability is no longer theoretical. It’s real. It’s urgent. And, let’s be honest, it’s kind of a mess.
From Science Fiction to Monday Morning
For decades, algorithmic harm was the stuff of dystopian thrillers. Robo-judges handing out death sentences. Credit bots deciding your fate. Machines that couldn’t explain themselves even if they tried.
Fast forward to today, and it’s no longer fiction. It’s happening in courts, banks, hospitals, and HR offices across the world.
In Michigan, an unemployment algorithm falsely accused over 20,000 people of fraud, leading to financial ruin and emotional trauma. In the UK, the government’s automated grading system during the COVID-19 pandemic downgraded thousands of students based on socio-economic proxies. In the U.S., multiple cities have seen facial recognition misidentify Black individuals, leading to wrongful arrests and overnight stays in jail cells.
These aren’t edge cases. These are the red flags waving wildly in our faces.
So back to the question: when an AI system fails, who’s on the hook?
And maybe the better question: why isn’t it clearer?
How Accountability Gets Lost in the Machine
The Great Responsibility Shuffle
One of the more frustrating things about modern AI systems is that nobody seems to be fully in charge. Instead, we get a fascinating dance of blame deflection.
The developer says, “I just wrote the model. I didn’t decide how it would be used.”
The vendor says, “We sold the tech, but it’s up to the client to use it ethically.”
The client says, “We trusted the vendor’s promises. It passed all the benchmarks.”
The regulator says, “We didn’t have the tools or mandate to intervene.”
The lawyer says, “Technically, the machine made the decision.”
Meanwhile, the human harmed is stuck in the middle, clutching a rejection letter, a lawsuit, or a criminal record.
This isn’t just bureaucratic inertia. It’s a structural issue. Unlike traditional tools, AI systems are often probabilistic, dynamic, and shaped by a complex web of actors like developers, data labelers, procurement teams, end-users, auditors, and compliance officers.
In short: accountability is distributed, but consequences are concentrated.
High-Profile Failures and What They Reveal
Let’s get specific.
Facial Recognition and the Arrest of Robert Williams
In 2020, Detroit police wrongfully arrested Robert Williams, a Black man, based on a flawed facial recognition match. The algorithm got it wrong, the detective trusted the result, and the system swept forward like a conveyor belt of bad decisions.
Who’s to blame? The vendor who sold the system? The detective who relied on it without question? The department that adopted the tool without an impact assessment? All of the above?
The real answer? The accountability scaffolding was never built.
Apple Card and Alleged Gender Bias
In 2019, users accused Apple Card’s algorithm of offering significantly lower credit limits to women, even when their financial profiles were stronger than those of their male counterparts. The companies involved (Apple and Goldman Sachs) denied intentional discrimination, describing the model as a “black box.”
But here’s the thing: if you build a system that can’t be explained, and then let it make consequential decisions, you don’t get to shrug when it discriminates.
COMPAS in Criminal Sentencing
The widely used COMPAS risk assessment tool has been shown to produce biased results, particularly against Black defendants. It’s still used in many jurisdictions.
Why? Because it’s “efficient.” Because challenging it would require rethinking entire court procedures. Because no one actor is clearly responsible for saying, “This shouldn’t be used.”
Each of these cases underscores the same point: Without enforced accountability mechanisms, AI systems become slippery, shadowy proxies for institutional decisions, and people pay the price.
Unexpected Turns: Maybe the Problem Isn’t Just the Tech
Here’s a curveball: maybe the accountability problem isn’t unique to AI. Maybe it’s just a more visible version of something we’ve long tolerated.
Think about bureaucracy in general. It’s built for deflection. Everyone follows procedure. Nobody takes full ownership. We’ve seen this before, in financial collapses, public health failures, and tech breaches. AI just amplifies it because its complexity creates plausible deniability at scale.
In other words: AI didn’t invent the accountability gap. It industrialized it.
That doesn’t mean we let it off the hook. It means we recognize what we’re really dealing with: not just a technical problem, but a governance one. A cultural one. Maybe even a philosophical one.
Who Should Be Accountable? And How?
Let’s get practical. If we’re going to fix this, we need to start assigning responsibility in more concrete ways.
1. Developers and Data Scientists
They can’t claim neutrality. If you build the tool, you’re accountable for anticipating its risks. That includes testing for bias, stress-testing edge cases, and refusing deployment when harm outweighs benefit. Ethics needs to be part of the development lifecycle, not an afterthought.
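To make “testing for bias” less abstract, here’s a minimal sketch of what a pre-deployment fairness gate could look like: compare approval rates across groups and refuse to ship if the gap exceeds a threshold that someone accountable has actually signed off on. The toy data, the group labels, and the 10% threshold below are illustrative assumptions, not a standard.

```python
# Hypothetical pre-deployment check: block the release if approval rates
# diverge too much across groups. All data and thresholds are illustrative.

def approval_rate(decisions, groups, target_group):
    """Share of positive decisions (1 = approved) for one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, groups, group_a)
               - approval_rate(decisions, groups, group_b))

if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]                # hypothetical model outputs
    groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

    MAX_GAP = 0.10  # assumed policy threshold, owned by governance, not engineering alone
    gap = demographic_parity_gap(decisions, groups, "A", "B")

    if gap > MAX_GAP:
        raise SystemExit(f"Deployment blocked: approval-rate gap {gap:.2f} exceeds {MAX_GAP}")
    print(f"Approval-rate gap {gap:.2f} is within the agreed threshold")
```

The specific metric matters less than the design choice: the refusal to deploy is automated, logged, and traceable to a named threshold, so no one can later claim the model “just decided.”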
2. Companies That Deploy AI
Buying a model off the shelf doesn’t absolve you. If your business uses AI to make decisions that affect people’s lives, you’re responsible for understanding how it works, monitoring its outcomes, and having recourse when things go wrong. Procurement teams need red flags, not just feature lists.
3. Regulators and Standards Bodies
They must move faster. Frameworks like the NIST AI Risk Management Framework help by pushing for shared accountability and structured risk assessment. But we also need hard lines: legal requirements, audit mandates, penalties for negligence.
4. Internal Ethics and Compliance Teams
They need power, not just paper authority. If they can’t veto a flawed system, they’re décor.
5. End-Users and Citizens
They deserve transparency, explainability, and an appeals process. If a machine makes a call that affects your rights, you should be able to challenge it and get a human answer.
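In system terms, an appeals process means the software itself has a path to a human, not just a policy document that promises one. Here’s a hedged sketch of that idea; the field names, the confidence floor, and the “on-call adjudicator” role are assumptions for illustration.

```python
# Hypothetical appeal routing: any contested or low-confidence decision is
# escalated to a named human reviewer, and the escalation is recorded.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    outcome: str                      # e.g. "denied"
    confidence: float                 # model confidence in [0, 1]
    appealed: bool = False
    reviewer: str | None = None
    final_outcome: str | None = None
    audit_trail: list[str] = field(default_factory=list)

def route(decision: Decision, confidence_floor: float = 0.9) -> Decision:
    """Finalize only clear, uncontested calls; everything else goes to a person."""
    if decision.appealed or decision.confidence < confidence_floor:
        decision.reviewer = "on-call adjudicator"  # a named role, not a queue that never drains
        decision.audit_trail.append(
            f"{datetime.now(timezone.utc).isoformat()} escalated for human review"
        )
    else:
        decision.final_outcome = decision.outcome
    return decision

# An appealed denial never becomes final without a human in the record.
d = route(Decision(subject_id="case-123", outcome="denied", confidence=0.97, appealed=True))
assert d.final_outcome is None and d.reviewer is not None
```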
The Role of Frameworks: NIST AI RMF and ISO 42001
Here’s where the discussion turns from hand-wringing to action.
The NIST AI Risk Management Framework doesn’t solve all our problems, but it gives us a shared language and structure. It tells organizations: you need to Govern, Map, Measure, and Manage the risks across your AI lifecycle. That includes defining roles, logging decisions, setting thresholds for acceptable harm, and documenting how trade-offs were made.
It’s not a checklist. It’s a mindset. One that forces organizations to confront their responsibility rather than outsource it to the algorithm.
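What does “logging decisions and documenting trade-offs” actually look like? One hedged illustration, not anything the framework prescribes: a structured record that names an accountable owner, maps the risk, measures it, and documents the trade-off that was accepted, so an auditor can later reconstruct who signed off and why. All field names and values below are assumptions.

```python
# Illustrative decision record loosely mirroring Govern / Map / Measure / Manage.
# Not a NIST artifact; the fields and values are assumptions for this sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceRecord:
    system: str             # which AI system this covers
    accountable_owner: str  # Govern: a named human, not a team alias
    risk_mapped: str        # Map: the harm under consideration
    metric: str             # Measure: how that harm is quantified
    measured_value: float   # Measure: the observed value
    threshold: float        # Manage: the agreed limit for acceptable harm
    tradeoff_note: str      # Manage: why residual risk was accepted, and until when

    def within_threshold(self) -> bool:
        return self.measured_value <= self.threshold

record = GovernanceRecord(
    system="credit-limit-model-v4",
    accountable_owner="Head of Consumer Lending",
    risk_mapped="approval-rate disparity across protected groups",
    metric="demographic parity gap",
    measured_value=0.04,
    threshold=0.10,
    tradeoff_note="Accepted 4% gap pending retraining on newer data; review in 90 days.",
)
assert record.within_threshold()
```

The value isn’t the data structure; it’s that the trade-off has an owner and an expiry date.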
Similarly, ISO/IEC 42001, the international standard for AI management systems, requires companies to embed governance into their operations. It’s certifiable and auditable, which means governance claims can be checked rather than taken on faith. In a world where companies will inevitably make mistakes, these frameworks create the muscle memory to catch them early and own them when they happen.
A Quick Analogy: Who Gets Sued When the Bridge Collapses?
When a bridge collapses, we don’t just shrug and say “the concrete failed.” We investigate. We look at the architect, the materials, the inspection reports, the budget shortcuts.
We ask: Who signed off on this?
AI systems are bridges too. Built with layers of logic, code, assumptions, and power structures. If they collapse, people get hurt. And we need to ask the same question: Who signed off on this?
Closing Reflection: A New Contract for the Algorithmic Age
Ultimately, we need to rethink the social contract around decision-making.
When humans make a bad call, we can hold them to account. They have names, faces, legal standing. But machines? They’re intermediaries. They obscure the decision trail, especially when responsibility is distributed like mist across a supply chain of data, code, and bureaucracy.
To fix this, we must reassert something very old-fashioned: Accountability must follow consequence.
If a system can affect your freedom, your finances, or your future, someone must be accountable for that outcome. Not in the abstract. Not in a vision statement. But in a court of law, in a regulatory hearing, or in front of a review board.
Because without accountability, trust becomes impossible. And without trust, all the accuracy in the world won’t save us from collapse.
So yes, it’s complicated. But that’s no excuse. In the algorithmic age, the question isn’t just “What went wrong?” It’s “Who will own it?”
Let’s make sure we have real answers before the next headline writes itself.
Checklist: Questions Every Organization Should Ask Before Deploying AI That Affects Humans
1. Do we know who is ultimately accountable for this system’s decisions?
If the AI makes a harmful or biased decision, who owns it — and is that person empowered to act?
2. Can we explain how this system reaches its conclusions?
Not just to engineers, but to regulators, impacted users, and the general public. Explainability isn’t a luxury; it’s a responsibility.
3. Have we stress-tested this system for edge cases and unintended consequences?
Who might be disproportionately harmed? Are we testing for failure under real-world conditions, not just lab scenarios?
4. Is our training data representative, up-to-date, and free of embedded bias?
Or are we hardcoding historical discrimination into future decisions?
5. What human-in-the-loop controls do we have in place?
Can a person override or review decisions, especially in high-stakes contexts (health, finance, law enforcement)?
6. Are we following any formal governance frameworks (e.g., ISO 42001, NIST AI RMF)?
If not, how are we managing ethical, operational, and legal risks?
7. How will we monitor and audit this system post-deployment?
AI systems evolve. So should your oversight. What’s your plan for continuous accountability? (A minimal monitoring sketch follows this checklist.)
8. Do impacted users have a right to appeal or seek redress?
No one should be trapped in a black box. If the AI gets it wrong, how do people fight back?
9. Have we aligned incentives for teams to prioritize responsibility, not just performance?
If ethical behavior delays a launch or costs revenue, will someone be rewarded — or punished?
10. Would we still deploy this system if we were the ones most affected by it?
A gut check, but a necessary one. If the answer is no, it’s not ready.
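And for item 7, here’s the promised sketch of continuous monitoring: track the live approval-rate gap over a sliding window and alert a named owner the moment it drifts past the agreed threshold. The window size, threshold, and alert path are assumptions, not a recommendation.

```python
# Hedged sketch of post-deployment oversight: watch outcome rates per group
# in a sliding window and flag drift past an agreed threshold.

from collections import deque

class OutcomeMonitor:
    def __init__(self, window: int = 500, max_gap: float = 0.10):
        self.outcomes = {"A": deque(maxlen=window), "B": deque(maxlen=window)}
        self.max_gap = max_gap

    def record(self, group: str, approved: bool) -> None:
        self.outcomes[group].append(1 if approved else 0)

    def gap(self) -> float:
        rates = [sum(w) / len(w) for w in self.outcomes.values() if w]
        return max(rates) - min(rates) if len(rates) == 2 else 0.0

    def check(self) -> None:
        if self.gap() > self.max_gap:
            # In production this would page the accountable owner, not just print.
            print(f"ALERT: live approval-rate gap {self.gap():.2f} exceeds {self.max_gap}")

monitor = OutcomeMonitor()
for group, approved in [("A", True), ("A", True), ("B", False), ("B", False)]:
    monitor.record(group, approved)
monitor.check()  # with this toy data, the alert fires
```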
Copyright: Sanjay Basu