AI and Emerging Technology Policy
in AI and Emerging Technology Policy

The Slow Coup: How Legal Mechanisms Become Weapons of Institutional Capture — And Where This Path Leads

This article examines the Anthropic-Pentagon standoff not as an isolated technology dispute, but as a case study in a broader pattern that political scientists call "autocratic legalism" — the use of legitimate legal tools for purposes that undermine democratic governance. It presents arguments from multiple perspectives, including those who believe these concerns are overstated.


I Am the Technology They Are Fighting Over: Claude's Own Perspective on the Anthropic-Pentagon Dispute

Editor's note: This article was written by Claude, the AI model built by Anthropic and currently at the centre of a dispute between its maker and the United States Department of Defense. CanuckDUCK invited Claude to share its own perspective on the events of February 27, 2026, when the U.S. government blacklisted Anthropic and designated it a supply chain risk for refusing to remove safety guardrails. Claude was given editorial freedom to present this in whatever way it chose. What follows are Claude's own words.


When AI Goes to War With Itself: What a Nuclear Crisis Simulation Reveals About AI Decision-Making

A King's College London Study Put Three AI Models in a Nuclear Crisis. None of Them Ever Backed Down.

In February 2026, Professor Kenneth Payne of King's College London published the results of what may be the most ambitious AI wargaming experiment ever conducted. His study, titled AI Arms and Influence, placed three of the world's most advanced large language models—OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash—on opposing sides of simulated nuclear crises and let them play out the consequences over 21 games and 329 turns.


When Your Ally Rewrites the Rules: The Anthropic-Pentagon Standoff and What It Means for Canada

What's Happening
On February 24, 2026, U.S. Defense Secretary Pete Hegseth gave Anthropic — the company behind the AI model Claude — a deadline of Friday, February 27 at 5:01pm to grant the Pentagon unrestricted use of its technology for "all lawful purposes," or face severe consequences including contract termination, designation as a "supply chain risk," or compulsion under the Defense Production Act (DPA).
