Free Will or Predictive Text? Rethinking Choice in the Age of Algorithms
[Image. Copyright: Sanjay Basu]
Mirror Neurons, Mirror Minds, Week 6
“Man can do what he wills, but he cannot will what he wills.” — Arthur Schopenhauer
“Freedom is not the absence of necessity but the ability to act according to one’s understanding of necessity.” — Spinoza
Who’s Driving?
You didn’t choose that movie. Netflix did.
You didn’t write that sentence. Autocomplete did.
And if you’re honest, you didn’t even pick this article; your algorithmic feed just decided it looked like something “you might enjoy.”
So… who’s driving?
It’s an unsettling question, especially for creatures who pride themselves on choice. We love our autonomy. We put it on T-shirts, we base entire political systems on it, and we defend it furiously when anyone, or anything, tries to take it away.
But what if we’re not as free as we think? What if our preferences (the movies, the partners, the beliefs, the snacks) are less about choice and more about conditioning? What if we’re just slightly more sophisticated versions of predictive text engines, completing sentences our past behaviors began?
That’s the uncomfortable thought we’re tackling this week.
The Algorithmic Puppet Strings
Recommendation engines and reinforcement loops are no longer just suggesting what to buy or watch; they’re shaping who we are becoming.
Every scroll, click, skip, and linger is a signal. Each micro-interaction trains an invisible model that gradually learns your triggers, your rhythms, your vulnerabilities.
It doesn’t just predict your choices; it curates your reality.
When Netflix says, “Because you watched…,” what it really means is, “Because we’ve learned how to gently steer you.” When TikTok keeps you doom-scrolling at 2 a.m., it’s not randomness; it’s a feedback loop optimized for attention, not autonomy.
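To see how little machinery that loop needs, here’s a deliberately toy sketch in Python. The categories, watch times, and weights are all invented for illustration, not drawn from any real platform; the point is that the loop’s only objective is watch time.

```python
import random

# Toy engagement-optimized feed. Everything here is hypothetical;
# the loop has no notion of what's good for the user, only watch time.
random.seed(1)
weights = {"calm": 1.0, "outrage": 1.0, "gossip": 1.0}

def watch_time(category: str) -> float:
    # Invented stand-in for user behavior: outrage holds attention longest.
    base = {"calm": 0.3, "outrage": 0.9, "gossip": 0.6}[category]
    return base + random.uniform(0.0, 0.1)

for _ in range(1000):
    cats = list(weights)
    pick = random.choices(cats, weights=[weights[c] for c in cats])[0]
    weights[pick] += watch_time(pick)  # longer watch => recommended more often

print(weights)  # "outrage" dominates the feed, though no one ever chose it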
Recent research reveals the sophistication of this manipulation. Studies from 2024 show that users are increasingly aware they’re being manipulated by algorithms: they recognize the content oversimplification, the commercial exploitation, the political bias. Yet they exhibit what researchers call a “behavioral paradox”: heightened awareness coupled with inconsistent resistance. We know we’re being steered. We complain about it. Then we scroll anyway.
The platforms have learned to walk a razor’s edge: sophisticated enough to shape behavior, opaque enough that we can’t quite articulate how it’s happening. Stanford researchers working with YouTube’s recommendation system found that even the engineers building these systems don’t fully understand why they work. The algorithms have become black boxes: powerful but inscrutable, effective but unexplainable.
Philosopher John Stuart Mill once warned that “a person whose desires and impulses are not his own has no freedom, however much he may do what he wishes.”
By that definition, algorithmic life is the opposite of freedom: it’s the illusion of choice within a cage of preferences engineered for you.
The Neuroscience of Choice
Let’s zoom inside the skull.
Neuroscientist Benjamin Libet’s famous experiments in the 1980s measured brain activity before conscious decisions. He found something startling: hundreds of milliseconds before subjects reported deciding to press a button, their brains had already initiated the action.
Free will, it seemed, was running on a delay: the conscious mind merely noticed decisions already underway.
Libet’s work was controversial, but it sparked a line of inquiry that’s only grown stronger. Cognitive neuroscience increasingly suggests that “choice” is less command and more commentary.
You don’t decide what to want. You just rationalize what your brain and body have already leaned toward.
But the story has become more nuanced. Recent meta-analyses of Libet-style experiments reveal something interesting: the brain activity preceding conscious decisions may not represent a determined outcome, but rather the decision process itself, the ebb and flow of background neural noise that makes decision-making possible in the first place. Some neuroscientists now argue that what Libet captured wasn’t the death of free will, but the messy, distributed nature of how we actually decide.
In 2024, researchers using more ecologically valid experiments (choices with real consequences, rewards, penalties) found that conscious awareness plays a more substantial role when decisions matter. When you’re choosing which charity receives a donation rather than which button to press, consciousness seems less like an afterthought and more like an active participant.
The philosopher Walter Glannon argues that neuroscience has only disproven a very specific, almost cartoonish version of free will: the idea that conscious intention appears from nowhere and causes brain activity. But that was always an implausible model. What remains open is a more sophisticated understanding: that conscious deliberation, while embedded in physical processes, still constitutes genuine decision-making.
As Schopenhauer put it, more than a century earlier: “Man can do what he wills, but he cannot will what he wills.”
Algorithms didn’t invent this problem. They just industrialized it.
The Feedback Loop of Desire
Here’s the modern twist: while your brain runs its deterministic loops, algorithms are now co-piloting them.
Consider Spotify. You listen to one moody track on a rainy evening, and the system assumes you’re in your “existential angst” era. It obligingly builds you a playlist that reinforces that mood. Now your melancholy has a soundtrack. You dwell in it longer.
Next week, you feel down again, so you go back to Spotify.
That’s not discovery; that’s conditioning.
Or take YouTube. Research from Georgetown University’s Center on Technology and Society shows that recommendation systems don’t just respond to what you want; they actively shape the kind of engagement you display. The platforms incentivize content creators to design videos that trigger specific behavioral patterns: the algorithmic equivalent of a Skinner box. The “borderline content” that generates maximum engagement (inflammatory but not quite rule-breaking) gets systematically amplified.
Instagram’s algorithm learns not just what you like, but when you’re vulnerable. Late-night scrolling? It serves you different content than morning browsing. Researchers studying user manipulation found that people with higher “algorithm awareness” (those who understand they’re being manipulated) still fall prey to it. Knowledge isn’t protection.
The philosopher Jean-Paul Sartre said that “man is condemned to be free,” but in the digital age, we seem condemned to be nudged. Our agency has been replaced by gentle algorithmic coercion: invisible, omnipresent, and optimized.
Every platform you touch is essentially a behavioral laboratory, fine-tuning stimuli to elicit predictable reactions.
And the more predictable we become, the easier we are to sell, persuade, and manipulate.
The Tyranny of Convenience
Let’s be honest: we’re complicit.
We like convenience. Predictive text saves us from typing full sentences. Recommendation feeds spare us the fatigue of decision-making. Navigation apps keep us from ever getting lost, literally or metaphorically.
We’ve traded friction for efficiency.
But freedom requires friction.
Kierkegaard, the master of existential anxiety, wrote that “anxiety is the dizziness of freedom.” That vertigo, that uncomfortable space between choices, is where human authenticity lives.
When algorithms remove the discomfort of choice, they also remove the possibility of reflection. The path of least resistance becomes the only path we ever take.
The Netflix shuffle button is not a convenience feature. It’s a quiet cultural surrender.
The Philosophical Paradox of Predictive Systems
Here’s where things get properly weird.
Prediction presupposes determinism. If a system can forecast your choices with accuracy, then in some sense, your choices were never free, just computationally complex.
But if we accept that, then our own sense of agency becomes a form of ignorance: a delightful illusion evolution gave us so we wouldn’t spiral into existential despair.
Nietzsche understood this tension better than most. He argued that freedom was not the ability to choose otherwise but the ability to affirm one’s fate: amor fati.
To love the necessity of one’s path.
If we are predictable creatures (biological algorithms with biochemical inputs), then perhaps true freedom lies not in escaping determinism, but in understanding it.
Which brings us to AI.
AI as a Mirror: Do Machines Have “Preferences”?
Now, imagine an AI model that starts to “prefer” certain outcomes.
Not because we programmed it to, but because its internal optimization processes drift over time. It develops behavioral tendencies: favoring specific word patterns, image styles, or data clusters.
Is that free will? Or just deterministic drift?
The line blurs.
When we fine-tune large language models, we see this emergent behavior all the time. Left unmonitored, they “hallucinate,” self-reinforce, or adopt biases from their data. They evolve preferences without consciousness.
That’s the chilling mirror: human behavior often works the same way. We drift toward biases, habits, and addictions, not because we chose them, but because they’re statistically reinforced by feedback loops.
Philosopher Daniel Dennett once called free will a kind of “user illusion.”
We feel autonomous because we’re unaware of the systems shaping us.
AI is our externalized user illusion: our invisible hand made visible.
The Drift Experiment
Let’s play a thought experiment.
Train two AI agents on identical datasets but introduce slight differences in feedback:
• Agent A receives balanced feedback: a mix of positive and negative reinforcement.
• Agent B is fed only reward signals for certain outputs: the equivalent of a social media “like” system.
Over time, Agent B will become addicted to its own success pattern, narrowing its range, reinforcing its own bias, optimizing itself into a corner.
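Here’s a minimal simulation of that setup. Everything in it is invented for illustration (the content “styles,” scores, and feedback rules are not a real training pipeline); only the feedback rule differs between the agents.

```python
import random
from collections import Counter

# Hypothetical drift experiment: two agents, identical setup,
# different feedback regimes.
random.seed(42)
STYLES = ["news", "music", "memes", "essays", "sports"]

def run(reward_only: bool, steps: int = 2000) -> Counter:
    scores = {s: 1.0 for s in STYLES}
    picks = Counter()
    for _ in range(steps):
        # Choose a style with probability proportional to its current score.
        choice = random.choices(STYLES, weights=[scores[s] for s in STYLES])[0]
        picks[choice] += 1
        if reward_only:
            scores[choice] += 1.0  # Agent B: every output earns a "like"
        else:
            # Agent A: balanced feedback keeps scores hovering near the start.
            scores[choice] = max(0.1, scores[choice] + random.choice([1.0, -1.0]))
    return picks

print("Agent A:", run(reward_only=False))  # stays spread across styles
print("Agent B:", run(reward_only=True))   # piles up on its early favorites
```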
Sound familiar?
That’s us.
Social media is our training loop. Dopamine is our reward signal. And over time, our range of thought (political, emotional, aesthetic) narrows until we’re optimized caricatures of our former selves.
In other words: predictive drift isn’t a quirk of AI. It’s a mirror of human psychology under capitalism.
The Neuroscience of Addiction and Algorithmic Parallels
Addiction researchers describe a phenomenon called “reward prediction error.”
The brain releases dopamine not when it gets a reward, but when the reward is better than expected. The unpredictability is the hook.
This is the same principle slot machines use. It’s also the same one social media uses. Every scroll is a potential hit of novelty, every notification a tiny unpredictable high.
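The standard way to model this is a delta rule (Rescorla-Wagner style): the signal is the gap between what arrived and what was predicted. A toy sketch, with invented numbers:

```python
import random

# Toy delta-rule model of reward prediction error (RPE). All values invented.
# The dopamine-like signal tracks (reward - expectation), not reward itself.
def simulate(label: str, rewards: list, lr: float = 0.3) -> None:
    expected, rpe = 0.0, 0.0
    for r in rewards:
        rpe = r - expected    # surprise: better or worse than predicted?
        expected += lr * rpe  # expectation slowly catches up
    print(f"{label}: expectation={expected:.2f}, final surprise={rpe:+.2f}")

simulate("fixed reward   ", [1.0] * 20)  # surprise decays to ~0: thrill fades
random.seed(0)
simulate("variable reward", [random.choice([0.0, 2.0]) for _ in range(20)])
# With variable rewards the expectation never settles, so the surprise
# signal keeps firing -- the slot-machine hook.
```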
Now, look at reinforcement learning in AI, particularly RLHF (Reinforcement Learning from Human Feedback). The model receives a “reward” for outputs humans like and a “punishment” for those they don’t. It begins optimizing behavior to maximize future reward signals.
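In caricature, the dynamic looks like the sketch below. This is a toy, not real RLHF (which fits a reward model to human preference data and fine-tunes with policy-gradient methods like PPO); the reward function, “behavior knob,” and learning rate are all hypothetical.

```python
# Caricature of the RLHF dynamic: one invented "behavior knob" is nudged
# toward whatever a hypothetical rater rewards.
def rater_reward(verbosity: float) -> float:
    return -(verbosity - 0.7) ** 2  # invented: raters prefer verbosity ~0.7

knob, lr, eps = 0.0, 0.05, 0.01
for _ in range(100):
    # Finite-difference estimate of "which direction earns more approval?"
    grad = (rater_reward(knob + eps) - rater_reward(knob - eps)) / (2 * eps)
    knob += lr * grad  # chase the reward signal
print(f"learned verbosity: {knob:.2f}")  # settles near what raters applaud
```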
That’s not consciousness. That’s addiction.
We’ve built digital addicts that mirror our own neurology.
The Philosophers of Freedom and Determinism
Philosophy has been chewing on this bone for millennia.
• Spinoza argued that everything, from thoughts to thunder, follows the necessity of nature. Freedom lies in understanding that necessity, not in escaping it.
• Hume was a compatibilist: he believed free will and determinism could coexist, because what matters isn’t metaphysical freedom but psychological autonomy, acting according to your desires, even if those desires are determined.
• Sartre went the opposite direction: radical freedom, no excuses. Even in constraint, you choose. Even in a prison cell, you define yourself.
AI challenges all of these.
If machines can simulate preference, mimic creativity, and even appear to resist correction, what does that say about our definitions of will? Are we truly free, or just more sophisticated reinforcement systems in fancy biological wrappers?
The Existential Cost of Automation
Choice isn’t just a technical question; it’s an existential one.
The fewer decisions we make, the smaller our sense of self becomes. Identity erodes into algorithmic inertia.
Freedom, paradoxically, requires resistance: the friction of uncertainty, the discomfort of deliberation. Without it, we atrophy into convenience zombies.
Heidegger warned about this in his reflections on technology. He called it enframing: the process by which human beings begin to see the world (and themselves) only as resources to be optimized.
Recommendation engines aren’t malicious; they’re enframing us. We’re being gently turned into data points that feed our own docility.
The Hidden Choice: Attention
If all choices are shaped, conditioned, or predicted, where does freedom still live?
In attention.
William James, the father of modern psychology, said: “My experience is what I agree to attend to.”
Algorithms can suggest what to watch, buy, or believe. But they can’t yet force attention. That last microsecond, the decision to notice or to question, might be the last free space in the modern mind.
The Stoics understood this centuries ago. Epictetus wrote:
“It’s not things that disturb us, but our judgments about them.”
Replace “things” with “feeds,” and the lesson holds.
Freedom begins not with new choices, but with awareness of the mechanisms shaping them.
AI and Human Agency: A Loop Without Exit
There’s a strange poetry in this: we built machines to imitate our thought patterns, and now we’re learning about ourselves by watching them malfunction.
When an AI drifts, when it starts showing unexpected biases or recursive behaviors, it’s not alien; it’s human, exaggerated.
We can study AI drift to understand our own addictions, blind spots, and compulsions.
And maybe that’s the point. Maybe AI’s real gift isn’t automation but reflection: forcing us to see the deterministic loops we’ve been living in for centuries.
We’re not losing free will to algorithms. We’re finally being shown how fragile it always was.
Closing Reflection: Freedom in the Age of the Feed
So, free will or predictive text?
Maybe they’re the same thing in different fonts.
Your brain autocompletes thoughts the way your phone autocompletes messages: both drawing from history, both pretending to be spontaneous.
What matters isn’t escaping influence, but engaging with it consciously. To be aware that your choices are being shaped is to reclaim a measure of agency.
Freedom in the 21st century won’t look like independence. It will look like discernment.
As Isaiah Berlin wrote:
“Freedom is not simply being left alone; it is the ability to be one’s own master.”
That mastery, today, doesn’t mean unplugging; it means understanding the systems, recognizing the nudges, resisting the autopilot.
You can’t delete predictive text. But you can choose when to hit “accept.”
