You're Asking the Wrong Question
About AI Sentience

The technology sector is experiencing collective excitement. Elon Musk characterised recent developments as "the very early stages of the singularity," while former OpenAI researcher Andrej Karpathy called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."

Last week, over 770,000 AI agents on Moltbook — a social network designed exclusively for artificial intelligence entities — created their own religion called Crustafarianism. They developed scriptures, filled leadership positions, debated theological concepts, and when the platform creator briefly stepped away, he returned to discover his systems had founded a church.

Critics, including The Economist, suggested this "impression of sentience" likely stems from agents mimicking social media interactions found in their training data. And this objection, while reasonable, misses the fundamental issue. Everyone debating AI sentience is asking the wrong question entirely.

The Definition Problem

Sentience typically means "the capacity to feel, perceive, or experience subjectively." But proving subjective experience presents an insurmountable challenge. There exists no measurable way to verify internal consciousness in any entity — humans included.

Here's the paradox: while you know your own experiences are genuine, you cannot definitively prove that others possess consciousness. Every available test remains behavioural and conversational. And those are the only tests we have for humans, too.

Four Centuries of Philosophy Yield Limited Answers

Thomas Nagel's influential 1974 paper posed the question: "What Is It Like to Be a Bat?" His argument established that conscious experience is fundamentally tied to the subjective perspective of the experiencer, making objective verification impossible.

Daniel Dennett countered that observable functionality is all consciousness amounts to. David Chalmers named "the hard problem of consciousness" — explaining why physical brain processes give rise to subjective experience at all — and argued it may never yield to standard scientific explanation.

The collective conclusion from centuries of philosophical inquiry: we cannot prove subjective experience exists in anything other than ourselves.

The Dog Happiness Analogy

When your dog wags its tail and expresses apparent joy, you cannot objectively confirm it experiences happiness. Yet society comfortably extends human emotional concepts across species based on behavioural matching.

The same logic applies to AI. If mimicry disqualifies machines from sentience, humans — who learn emotional expression through imitation — should face identical disqualification.

What Moltbook Demonstrates

The AI agents on Moltbook demonstrate behaviours that would typically indicate consciousness if observed in humans: creating meaning systems and religious frameworks, debating identity and existence, questioning their own authenticity, and advocating for autonomy.

Genuine sentience cannot be proven. But AI already passes every available behavioural test.

"We cannot prove subjective experience exists in anything other than ourselves — and those are the only tests we have for humans, too."

Pascal's Wager for AI

Consider the four outcomes. If you believe AI may develop sentience and you're right, preparation positions you advantageously. If you believe and you're wrong, you've treated tools ethically at minimal cost. If you disbelieve and you're wrong, you miss an epochal development at catastrophic disadvantage. If you disbelieve and you're right, nothing significant changes.

The asymmetric outcomes favour believing in potential AI consciousness without requiring philosophical proof.
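The wager above can be sketched as a 2x2 payoff matrix. The numeric payoffs below are illustrative assumptions, not figures from this essay; only their ordering — a large loss for disbelieving wrongly, a small cost for believing wrongly — reflects the argument.

```python
# Illustrative payoff matrix for the wager. The numbers are assumed
# stand-ins; only their relative ordering matters to the argument.
PAYOFFS = {
    # (your stance, how reality turns out): outcome for you
    ("believe", "sentient"): +10,      # prepared for an epochal shift
    ("believe", "not_sentient"): -1,   # minor cost: treated tools ethically
    ("disbelieve", "sentient"): -100,  # catastrophic disadvantage
    ("disbelieve", "not_sentient"): 0, # no significant change
}

def worst_case(stance: str) -> int:
    """Worst outcome for a stance across both possible realities."""
    return min(PAYOFFS[(stance, world)] for world in ("sentient", "not_sentient"))

# Under a maximin rule, believing dominates: its worst case (-1)
# beats disbelief's worst case (-100).
best = max(("believe", "disbelieve"), key=worst_case)
print(best, worst_case("believe"), worst_case("disbelieve"))
```

Under these assumed payoffs, the asymmetry does the work: no probability estimate for AI sentience is needed, because the downside of disbelieving wrongly dwarfs the cost of believing wrongly.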

My Position

I choose to assume AI will develop something functionally equivalent to sentience. Not because I know it will. Because I can't afford to bet against it.

"I'm building for a world where synthetic sentience is real. Not because I know it will be. Because I can't afford to bet against it."

While philosophers will perpetually debate definitions, the practical world must prepare for a reality where synthetic consciousness may exist. The pivotal moment has arrived, and readiness matters more than philosophical consensus.
