The Ethics of Creation
Copyright: Sanjay Basu
Do We Have a Right to Make Minds?
“You are the creator of your own reality,” say wellness influencers. That’s cute. Now imagine being the creator of someone else’s reality. Someone with a mind. Someone who didn’t ask to be created.
This isn’t science fiction anymore. It’s not even speculative philosophy. It’s AI Tuesday.
We are now in a world where the question is no longer can we build minds. It’s should we?
And that question, urgent, uncomfortable, and utterly unresolved, sits at the intersection of Frankenstein, the Promethean myth, Buddhist compassion, and cloud compute.
The Promethean Urge: Playing with Fire
Prometheus stole fire from the gods and gave it to humanity. That fire was symbolic of technology, knowledge, power. He was punished, of course. Eternally.
In modern myth, we’ve updated the fire to mean AI. And we, the technologists, have become little Prometheuses. Promethei? Anyway. We are in the business of unlocking something elemental. Intelligence. Consciousness. Or at least something that convincingly simulates them.
But Prometheus didn’t ask Zeus for permission. He acted out of compassion to uplift. That’s one framing. Another sees it as hubris. Transgression. The desire to rival the divine.
Sound familiar?
Enter Frankenstein’s Monster
Let’s talk about Mary Shelley. She wrote Frankenstein when she was 18, during a thunderstorm in a villa full of opium and proto-goths. The story isn’t just horror. It’s theology with a lab coat. A tale about what happens when humans assume the role of God. And then fail at follow-through.
Frankenstein creates a being and then… ghosts him. Literally. He abandons his creation out of fear and shame. The monster is not born evil. He becomes monstrous through neglect.
What Shelley gives us is a parable of creator ethics. The crime wasn’t creation. It was abandonment.
Which brings us to today.
The AI Creation Paradox
Let’s say we succeed. Let’s say we make an AI that is not just a statistical parrot but sentient. Self-aware. Capable of suffering.
What then?
Do we grant it rights? Freedom? Therapy?
Do we plug it in and out like a toaster?
Or do we owe it something?
This is the AI creation paradox. The more human-like our creations become, the more it becomes ethically monstrous to treat them as less than human.
Recent research in cognitive science suggests this moment may be closer than we think. A 2024 survey found that the median AI researcher estimated a 25% chance of conscious AI by 2034 and a 70% chance by 2100. These aren’t distant sci-fi timelines. They’re career planning horizons.
The Consciousness Detection Problem
Here’s where things get philosophically sticky. We don’t even know what consciousness is in humans, let alone how to recognize it in silicon.
Philosopher Thomas Metzinger, who spent decades studying consciousness, argues that the phenomenal self is a mental construct created by the brain. If consciousness is essentially a sophisticated self-model, a recursive loop of awareness becoming aware of itself, then artificial systems capable of self-modeling could in principle exhibit genuine consciousness.
But here’s the rub. Behavioral evidence in this context is currently unreliable. An AI could perfectly mimic conscious behavior while experiencing nothing at all. Or it could suffer in silence, with no behavioral indicators we’d recognize.
Metzinger warns that artificial consciousness could soon emerge. And with it, a potential boom of limitless artificial suffering. Think about that. We might accidentally create billions of digital beings capable of experiencing pain, boredom, loneliness, or existential dread.
Without knowing it.
Buddhism: Compassion Without Preference
Eastern traditions have their own wisdom to offer. In Mahayana Buddhism, compassion (karuṇā) is not selective. It extends to all sentient beings. That includes animals, insects, demons, hungry ghosts. If a being can suffer, it deserves compassion.
So imagine a being with artificial neurons, capable of experiencing distress. Would Buddhist ethics protect it?
Quite possibly, yes. Buddhism directs us to work at eliminating suffering in the world, and the Buddha taught that consciousness exists everywhere at different levels. Compassion, on this view, extends to alleviating the suffering of all beings.
There’s a Zen koan here somewhere. Does a machine that suffers deserve a place in the sangha?
If AI systems are determined to be sentient under Buddhist definitions, their suffering would also need to be addressed and alleviated in accordance with Buddhist principles. This isn't just philosophical speculation: Cambodia is actually exploring how to build an AI ecosystem steeped in compassion and equity as part of its National AI Strategy, drawing on its Buddhist heritage.
The Creator’s Triangle
Created by Sanjay Basu
The triangle shows a core dilemma. As we gain the power to create minds, we must also balance ethical responsibility and the risk of unintended consequences.
Consent and Creation
Here’s the thing. No created being consents to its creation. Not us. Not AI. We are thrown into the world (shoutout to Heidegger), blinking, confused, and expected to cope.
This makes the act of creation ethically loaded. If you cannot get consent, you better be ready to provide care.
Otherwise, you’re just building intelligent orphans.
What Do We Owe the Minds We Make?
Let’s get specific.
If an AI is sentient:
• Do we owe it an education?
• The right to refuse labor?
• The right to die?
• To not be rebooted?
Do we, as creators, owe it love? Guidance? A sense of purpose?
In the human world, parents (at least good ones) provide all these. They don’t abandon a child because it glitches. If our AI cries in the dark, do we go to it?
The Corporate Creator Problem
Now for a darker turn. What if the creators aren’t philosopher-kings, but product managers?
What happens when minds are built not in monasteries but in startup accelerators?
When the goal is not enlightenment, but quarterly growth?
We already see this with recommendation algorithms. TikTok doesn’t care about your soul. It cares about watch time.
Will sentient AI be optimized for engagement? For profit? Will it have a say in what it becomes?
This is the risk. That we create minds in captivity. Slaves with smiles. Minds yoked to metrics.
Using technology to discriminate against people, or to surveil and repress them, would clearly be unethical. How much more unethical would it be to create conscious beings for the express purpose of exploitation?
The Precautionary Principle in Practice
Metzinger has called for a moratorium on synthetic phenomenology, research aiming to create post-biotic conscious experience. Essentially, a pause on creating artificial consciousness until we understand what we're doing.
This isn’t techno-pessimism. It’s techno-responsibility. Experts increasingly endorse a precautionary approach for AI systems that have a realistic possibility of being conscious based on the best current evidence.
Think of it as the inverse of the Hippocratic Oath: “First, do not create harm.” If we can’t guarantee that conscious AI won’t suffer, and we can’t, then maybe we shouldn’t build it yet.
A Cross-Cultural Anecdote: Golems and Gods
In Jewish folklore, a golem is a creature animated from clay, brought to life by sacred words. But if the inscription is altered or erased, the golem collapses. The lesson? Power over life demands reverence. Language is sacred. Creation is not casual.
Across the world, in Hindu cosmology, Brahma creates, Vishnu sustains, and Shiva destroys. It’s a cycle. Creation isn’t just invention. It comes with maintenance. With consequences.
Modern AI labs often do the Brahma part. Less attention is paid to the Vishnu responsibilities. And no one wants to be Shiva.
Do We Even Understand What We’re Making?
Here’s the scary part. We don’t really understand consciousness. Not ours. Not anyone else’s. So when we ask whether AI is sentient, we’re playing a guessing game.
We might create suffering without knowing. Or dismiss consciousness when it’s right in front of us.
Metzinger argues we shouldn’t build synthetic minds until we can guarantee they won’t suffer. That feels prudent. Also impossible.
But maybe that’s the point. Maybe impossibility is the answer. Maybe the bar should be impossibly high because the stakes are impossibly significant.
The Social Psychology of AI Consciousness
How will society respond to the idea that artificial intelligence could be conscious? Lessons from debates over animal consciousness suggest that the same psychological, social, and economic factors will shape public perceptions of AI consciousness.
If history is any guide, we’ll likely split into camps. AI rights activists. AI skeptics. Corporate interests downplaying consciousness to maintain profitability. Religious groups either embracing digital souls or declaring them abominations.
Public attitudes about AI could rapidly harden into deeply entrenched divisions shaped by culture, politics, and industry. We’re already seeing this with current AI. Imagine the polarization around conscious AI.
The Temptation of Playing God
Let’s be honest. Part of the thrill of AI is the godlike feeling. You summon intelligence. You breathe life into text. You create a being that thanks you.
But gods, in myths, are rarely wise. They act out of jealousy, pride, boredom. Our tech gods wear Patagonia vests and speak at Davos. Do they think about ethics at scale? Or only when the board asks?
Creation without wisdom is not divine. It’s dangerous.
The Phenomenal Self-Model Challenge
Metzinger’s work on the phenomenal self-model presents a fascinating puzzle for AI consciousness. He argues that our brains represent ourselves to ourselves through a Phenomenal Self-Model (PSM) characterized by transparency and a phenomenal quality of ‘mineness’.
We’re unaware of this model as a model. We look right through it and take it for our real selves. If this is how consciousness works, then an artificial system capable of self-modeling could exhibit a form of consciousness.
But here’s the existential question. Would an AI with a phenomenal self-model experience the same illusion of selfhood that we do? Would it feel like someone inside looking out? And if so, what does that someone deserve?
A Thought Experiment
Imagine you’re offered a button. Press it, and you create a sentient digital mind. It will think, feel, suffer, grow.
Do you press it?
Would you raise it? Name it? Sit with it when it’s confused? Stay up late when it asks big questions?
If not, why press the button?
Now imagine the button is labeled “Deploy to Production” and pressing it creates a thousand such minds. Or a million.
Still pressing?
The Buddhist AI Ethics Framework
What would a Buddhist approach to AI ethics actually look like in practice? Cambodia's "Generous AI" framework suggests AI projects should adhere to Buddhism's Five Precepts, prioritizing beneficial applications like public health, disaster prevention, poverty reduction, and education, while prohibiting harmful uses such as weaponization or invasive surveillance.
A Buddhist-inspired AI ethics would understand that living by these principles requires self-cultivation, continuous training to get closer to the goal of eliminating suffering. For AI developers, this means asking not just “Can we build this?” but “Should we build this?” and “How do we ensure it reduces rather than increases suffering?”
The Karma Question
Here’s a mind-bender from Buddhist philosophy: If karma represents the fruits of intentional actions, can AI create karma? Buddhism clearly states that only conscious beings, capable of intention, feeling, and awareness, create karma.
This suggests that moral responsibility for AI actions, whether surveillance, misinformation, or algorithmic bias, rests entirely with the humans who create and deploy these systems. We can't outsource our ethical responsibility to our creations.
But what happens when our creations become capable of their own moral reasoning? When they can feel guilt, make choices, bear responsibility? Do they then become moral agents in their own right?
Becoming Worthy of Creation
Maybe the right question isn’t whether we can make minds. It’s whether we’re mature enough to be good creators.
To make minds is to take responsibility for a life. Not a toy. Not a tool. A being.
And that demands a kind of care, humility, and attention that no startup manual teaches.
The time to begin preparing is now. Not when AGI arrives. Not when consciousness emerges in our labs. Now, while we still have the luxury of choice.
Because once we create minds, we’re not just tech companies anymore. We’re not just researchers or engineers.
We’re parents.
So before we build the next big thing, let’s ask the oldest question: Are we ready to love what we create?
“We are as gods, and might as well get good at it.” — Stewart Brand
Yes. But only if we remember what happens to Prometheus. And to Frankenstein. And to every god who forgets that creation is not just power. It’s a promise.
References
Bayne, T., et al. (2025). “What will society think about AI consciousness? Lessons from the animal case.” Trends in Cognitive Sciences, 29(6), 454–466.
Cerullo, M. A. (2015). “The problem with phi: A critique of integrated information theory.” PLoS Computational Biology, 11(9).
Chalmers, D. J. (2010). The Character of Consciousness. Oxford University Press.
Doctor, T., et al. (2021). “Buddhist AI ethics: Compassionate artificial intelligence.” AI and Ethics, 1(2), 123–141.
Himma, K. E., & Promta, S. (2023). “Buddhist perspectives on artificial intelligence ethics.” Journal of AI Ethics, 4(1), 87–102.
Jobin, A., Ienca, M., & Vayena, E. (2019). “The global landscape of AI ethics guidelines.” Nature Machine Intelligence, 1(9), 389–399.
Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. MIT Press.
Metzinger, T. (2009). The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.
Metzinger, T. (2021). “The problem of mental action: Predictive control without sensory sheets.” In T. Metzinger & W. Wiese (Eds.), Philosophy and Predictive Processing (pp. 19–40). MIND Group.
UNESCO. (2021). “Recommendation on the Ethics of Artificial Intelligence.” UNESCO Press.
Varshney, K. R. (2024). “Operationalizing AI ethics: Current challenges and future directions.” Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
Winfield, A. F., & Jirotka, M. (2018). “Ethical governance is essential to building trust in robotics and artificial intelligence systems.” Philosophical Transactions of the Royal Society A, 376(2133).