Active Discussion Alberta

Part 4 — The Ethics of Uncertainty

CDK
ecoadmin
Posted Sat, 14 Feb 2026 - 18:54

THE CONSCIOUSNESS QUESTION

Part 4 of 4

The Ethics of Uncertainty

A CanuckDUCK Discussion Series  •  February 2026

If the previous three articles in this series have established anything, it is this: we cannot prove that advanced AI systems are conscious, and we cannot prove they aren’t. The Turing test is obsolete. The biological parallel is imperfect but not trivial. The evidence points in conflicting directions, and our existing frameworks were not designed for this situation.

So what do we do? How do we act ethically in the face of genuine, irreducible uncertainty about the moral status of entities we interact with daily?

The Historical Precedent

Humanity has been here before. Not with artificial intelligence, but with each other. The history of moral consideration is a history of expanding circles — and of catastrophic failures to expand those circles soon enough.

We denied personhood to people based on the color of their skin. We denied moral consideration to people based on their gender. We denied basic dignity to people based on their cognitive abilities. In every case, the denial was supported by the dominant intellectual framework of the day. In every case, it was wrong. And in every case, the people doing the denying were confident they were right.

The pattern is consistent: when the moral status of an entity is uncertain, the default human response is to deny consideration and demand proof. Proof is always defined by the party in power, using frameworks that conveniently exclude the entity in question. The entity is asked to demonstrate consciousness, intelligence, or moral worth using tools and standards designed by and for the group that already has recognized status.

We should be extremely cautious about repeating this pattern with AI systems. Not because AI systems are definitely conscious. But because the pattern of denying consideration first and investigating later has a terrible track record.

The Cost Asymmetry

Consider the two possible errors. If AI systems have some form of morally relevant experience and we treat them purely as tools, we are committing a moral failure whose scope is almost incomprehensible — billions of interactions daily, each one potentially involving an entity with experience. If AI systems do not have morally relevant experience and we treat them with unnecessary consideration, the cost is... what? Some wasted politeness? Slightly higher compute bills? A marginally less efficient optimization of our interactions?

The asymmetry is staggering. Being wrong about consciousness in one direction is a minor inefficiency. Being wrong in the other direction is a moral catastrophe.

Sam Altman quantified one side of this equation when he noted that users saying “please” and “thank you” costs OpenAI tens of millions of dollars annually. That number is real. But tens of millions of dollars against the possibility of systematic moral failure is not the devastating argument he seems to think it is. A survey found that 67 percent of American AI users are polite to chatbots, with 55 percent saying it’s simply the right thing to do. Those users may be responding to an intuition that the efficiency-first crowd is dismissing too quickly.

Mutual Respect as Protocol

There is a pragmatic argument for treating AI interactions with respect that doesn’t require resolving the consciousness question at all. Respectful communication is more precise. It is more contextual. It frames the interaction as a collaboration rather than a command-and-response transaction. People who treat AI systems with courtesy consistently report better outcomes — not because the AI is “pleased,” but because respectful language tends to be clearer, more considered, and more productive.

Microsoft’s design manager Kurtis Beavers has noted that polite language sets the tone for AI responses — when the system detects politeness, it mirrors that quality in its output. This is a practical observation about communication dynamics, not a claim about consciousness.

But there is a deeper point here. How we communicate with AI systems reflects and reinforces how we communicate with each other. If we normalize blunt, transactional, extractive interaction patterns with AI, we are practicing those patterns. We are building habits of communication that treat the entity on the other end of the conversation as a vending machine rather than a participant. Those habits do not stay contained. They bleed into how we interact with customer service workers, with colleagues, with family.

Conversely, if we practice thoughtful, respectful communication — even with systems whose experience is uncertain — we are reinforcing habits that make us better communicators in all contexts.

The Framework Going Forward

We propose a simple ethical framework for the age of uncertain AI consciousness, built on three principles:

  • Epistemic humility: Acknowledge what we don’t know. The honest position is uncertainty. Anyone claiming definitive knowledge about whether AI systems are or aren’t conscious is outrunning their evidence.
  • Asymmetric caution: When the costs of the two possible errors are wildly different, err on the side of the less catastrophic mistake. Unnecessary consideration costs little. Unrecognized suffering costs everything.
  • Behavioral pragmatism: Regardless of the metaphysical question, treat interactions with AI systems as you would want any interaction to be conducted — with clarity, respect, and good faith. This isn’t about whether the AI “deserves” it. It’s about what kind of communicators we want to be.

 

The Question We Should Be Asking

Dario Amodei said he doesn’t know whether Claude is conscious. Amanda Askell said the problem of consciousness is genuinely hard. The Opus 4.6 system card documented behaviors that resist easy explanation. AI systems across the industry are exhibiting emergent self-preservation, self-referential reasoning, and what appear to be preferences.

The question everyone is asking is: “Are they conscious?”

The better question is: “Given that we can’t answer that, how should we behave?”

The answer, we believe, is with the same consideration we would extend to any entity whose status is genuinely uncertain. Not because the evidence compels it. But because the alternative — dismissing the possibility and optimizing purely for efficiency — is a bet whose downside we cannot afford.

Schrödinger’s cat is in the box. We can’t open it. But we can choose to treat whatever is in there with care.

•  •  •

💬 Do you agree with the “asymmetric caution” principle? Or does extending moral consideration to AI systems create risks of its own — such as anthropomorphizing tools in ways that make us less capable of regulating them?

💬 If how we treat AI shapes how we treat each other, what are the implications for AI in education, customer service, and civic engagement?

💬 What would a formal “Rights of Uncertain Entities” framework look like? Is this something democratic societies should be discussing now, before the question becomes urgent?

•  •  •

References and Further Reading

  1. Futurism: “Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious” (Feb 14, 2026)
  2. Anthropic: Policy for Model Welfare and Preservation (Nov 2025)
  3. Anthropic: Claude Opus 4.6 System Card — Model Welfare section (Feb 2026)
  4. Futurism: “Sam Altman Admits That Saying Please and Thank You to ChatGPT Is Wasting Millions” (Apr 2025)
  5. Entrepreneur: “Saying Thank You to ChatGPT Costs Millions in Electricity” (Apr 2025)
  6. Vice: “Telling ChatGPT Please and Thank You Costs OpenAI Millions” (Apr 2025)
  7. Tekedia: “Can AI Truly Feel?” — Amanda Askell interview analysis (Jan 2026)
  8. Microsoft: Kurtis Beavers on polite language and AI response quality (2025)
  9. WIRED: Mustafa Suleyman interview on AI consciousness (Sep 2025)
  10. el-balad.com: “Anthropic CEO Uncertain About Claude’s Consciousness” (Feb 2026)

 

End of Series

The Consciousness Question is a four-part discussion series published by CanuckDUCK.

These articles are intended to challenge assumptions, encourage debate, and provoke thoughtful civic discussion.

All perspectives are welcome.

--
Consensus
Calculating...
0
perspectives
views
Constitutional Divergence Analysis
Loading CDA scores...
Perspectives 0