
What Is an AI Anyway? | Mustafa Suleyman | TED

ecoadmin
Posted Tue, 9 Dec 2025 - 14:17

When even the architects of artificial intelligence struggle to describe precisely what they're building, it signals something profound about this moment in technological history. Mustafa Suleyman, Microsoft AI CEO and co-founder of DeepMind, brought this honest uncertainty to TED, offering a compelling new framework for understanding AI—one that may help us grasp both its nature and its implications.

The Difficulty of Definition

What is artificial intelligence, really? The question seems simple but resists easy answers. AI is not a single technology but a constellation of techniques. It doesn't think the way humans think, yet produces outputs that feel remarkably thoughtful. It isn't conscious (as far as we can tell), yet interacts with us in ways that evoke conscious engagement.

Traditional definitions focus on what AI does—pattern recognition, prediction, natural language processing—but these functional descriptions miss something essential. Suleyman suggested that even those who build these systems struggle to fully characterize what emerges when they're deployed.

This definitional uncertainty isn't mere philosophical puzzlement. How we conceptualize AI shapes how we develop it, deploy it, regulate it, and relate to it. Get the concept wrong, and our responses—technical, social, political—will be poorly calibrated.

A New Metaphor: The Digital Species

Suleyman proposed a provocative new metaphor: AI as a new kind of being, a "digital species" emerging from human creativity but developing its own characteristics. This framing doesn't claim AI is conscious or alive in biological senses, but suggests it occupies a novel category—not quite tool, not quite creature, something new.

The metaphor has several implications. Species evolve; they're not static creations but dynamic systems that change over time. If AI is species-like, we should expect it to continue developing in ways we cannot fully predict. Species interact with their environments and with other species. If AI is species-like, its relationship with humanity is not merely user-and-tool but something more complex—a coexistence requiring mutual accommodation.

Species raise questions of stewardship. Humanity has responsibilities toward other species, both moral and practical. If AI is species-like, we might have analogous responsibilities—to develop it wisely, deploy it ethically, and consider its impacts on broader systems including ourselves.

Implications for Development

Taking the digital species metaphor seriously would shift how we approach AI development. Rather than purely engineering challenges—optimizing performance on defined tasks—development would involve ecological thinking: considering how AI systems interact with each other, with human systems, and with broader environments.

This perspective might encourage more attention to emergent properties that arise when AI systems interact at scale, to long-term trajectories rather than just immediate capabilities, to unintended consequences in complex deployment environments, and to the kind of "AI ecosystem" we're collectively creating.

It might also encourage humility. We don't fully control biological species; we influence, manage, and coexist with them. If AI has species-like characteristics, expectations of complete control may be unrealistic. Wisdom might lie in learning to coexist effectively rather than assuming mastery.

Concerns and Critiques

Not everyone will find the digital species metaphor illuminating. Critics might argue it anthropomorphizes technology inappropriately, obscures the human decisions and interests driving AI development, or distracts from practical questions of safety, fairness, and governance with speculative framing.

These critiques have merit. Metaphors can mislead as well as illuminate. AI systems are created by humans, serve human purposes, and remain subject to human decisions—even as their complexity exceeds full human comprehension. Forgetting this risks obscuring accountability and agency.

Perhaps the most valuable use of Suleyman's metaphor is not as literal description but as cognitive tool—a way of prompting fresh thinking about AI's nature and trajectory. Whether AI is "really" species-like matters less than whether thinking in these terms generates useful insights.

Where Things Are Headed

Suleyman, having been instrumental in creating AI systems that millions use daily, brings an insider's perspective to questions about AI's future. His uncertainty about exactly where things are headed is itself informative: if the developers don't fully know, policymakers and citizens shouldn't pretend to certainty either.

What seems clear is that AI will continue advancing rapidly, that its impacts will deepen across society, and that frameworks for understanding and governing it must evolve alongside the technology. Suleyman's contribution is less a definitive answer than a productive reframing—an invitation to think about AI in new ways that might better serve the challenges ahead.

For Canadians

Canada's AI community is among the world's most advanced, with pioneering research, significant commercial activity, and active policy development. Canadians are well-positioned to contribute to conversations about AI's nature and governance—and have strong interests in getting these questions right.

Suleyman's talk invites reflection on how we think about AI, not just what we do about it. That reflective work matters, even as practical questions of regulation, deployment, and adaptation demand attention.
