The Slow Coup: How Legal Mechanisms Become Weapons of Institutional Capture — And Where This Path Leads

CDK
ecoadmin
Posted Sat, 28 Feb 2026 - 06:43

This article examines the Anthropic-Pentagon standoff not as an isolated technology dispute, but as a case study in a broader pattern that political scientists call "autocratic legalism" — the use of legitimate legal tools for purposes that undermine democratic governance. It presents arguments from multiple perspectives, including those who believe these concerns are overstated.

What We Watched Happen This Week

On February 27, 2026, the President of the United States ordered every federal agency to stop using products from Anthropic, a leading American AI company. The Secretary of Defense designated the company a "supply chain risk to national security" — a classification historically reserved for adversarial foreign entities like Huawei and Kaspersky Lab. The company's offence was maintaining two contractual provisions: that its technology not be used for mass domestic surveillance of Americans, and that it not power weapons systems that select and kill targets without human involvement.

Hours later, competitor OpenAI signed a deal with the same Pentagon, reportedly preserving the very safeguards Anthropic had been punished for maintaining.

That sequence of events deserves careful examination — not just for what it says about AI policy, but for what it reveals about the trajectory of governance in the United States, and what that trajectory means for Canada.

The Pattern: When Legal Tools Become Coercive Instruments

Political scientists have a term for what happens when elected leaders use legitimate legal mechanisms for purposes those mechanisms were never designed to serve: autocratic legalism. The concept, developed by scholars including Kim Lane Scheppele at Princeton and Scott Cummings at UCLA, describes how democracies can erode not through dramatic military coups, but through the incremental weaponization of existing laws, designations, and executive powers.

The key insight of this research is that the tools themselves are legal. What changes is their purpose. A supply chain risk designation exists to protect national security from genuine threats — foreign adversaries embedding vulnerabilities in critical infrastructure. The Defense Production Act exists to mobilize industrial capacity during emergencies. Executive orders directing agency procurement exist to ensure government efficiency.

None of these tools were designed to punish a domestic company for including safety provisions in a government contract. But each of them was either used or threatened against Anthropic for exactly that purpose this week.

This matters because the formal legality of each individual action creates what scholars call an "appearance of institutional normalcy" — each step can be defended on its own technical terms, even as the cumulative pattern serves a fundamentally different objective than any single tool was designed to achieve.

A Pattern of Precedents

The Anthropic case did not emerge in isolation. Consider the sequence of actions by the current U.S. administration that involve using legal mechanisms of governance for purposes of institutional pressure:

Federal funding leverage against universities: Institutions have faced threats to billions in federal research funding for positions the administration considers ideologically misaligned — not for failure to meet research obligations, but for policy disagreements. Harvard alone faced the freezing of roughly $9 billion in grants and contracts.

Executive orders restructuring the civil service: The "Schedule F" reclassification of federal employees converts career civil servants — whose independence from political pressure is a cornerstone of democratic governance — into at-will political appointees.

Defense contractor compliance expansion: A January 2026 executive order on "Prioritizing the Warfighter in Defense Contracting" granted the Secretary of Defense authority to cap executive salaries, restrict stock buybacks, and recommend cessation of foreign military sales advocacy for companies deemed underperforming — a significant expansion of discretionary punitive power over the private sector.

The Defense Production Act as threat: The DPA was invoked or threatened for minerals, herbicides, nuclear fuel management, and now AI. Legal scholars note that using the DPA's Title I compulsion power to force a company to remove safety features from its own product would be "without precedent under the history of the DPA." The law has never been used to compel a company to produce a product it has deemed unsafe.

Renaming the Department of Defense: The rebranding to "Department of War" — while symbolically charged — represents a deliberate rhetorical reframing that aligns military institutions with offensive rather than defensive posture, normalizing language that would have been considered provocative in prior administrations.

Each of these actions has its own defenders and its own legal basis. The question is not whether any individual action is legal in isolation. The question is what the pattern means when taken together.

The Counter-Arguments Deserve Serious Treatment

It would be intellectually dishonest — and contrary to what this platform exists to foster — to present only the concerns. The administration's defenders make several arguments that merit engagement.

Democratic mandate

The president was elected. Elections have consequences. The voters chose a leader who campaigned explicitly on disrupting institutional norms he characterized as failing the American people. From this perspective, using available legal tools aggressively is not subversion of democracy — it is democracy in action. The institutions being disrupted, in this view, are themselves undemocratic: unelected bureaucrats, corporate gatekeepers, and regulatory bodies that exercise enormous power without direct electoral accountability.

This argument has genuine force. There is a legitimate democratic tension between electoral mandates and institutional constraints. The question is where the line falls between vigorous executive action within democratic bounds and the use of executive power to dismantle the constraints that make future democratic correction possible.

National security exceptionalism

In matters of national security, the argument goes, speed and decisiveness matter more than procedural niceties. The Pentagon needs tools that work without corporate veto power. When a company sells a product to the military, the military — accountable to elected civilian leadership — should determine how it's used. Private companies imposing conditions on national defense operations is itself a form of undemocratic governance.

This argument is not trivial. Democratic accountability does run through elected civilian leadership of the military, and there is something uncomfortable about corporate terms of service constraining national defense. The complication arises when the specific capabilities being demanded — mass data collection on citizens, autonomous lethal systems — are precisely the capabilities that democratic accountability mechanisms are supposed to prevent.

Institutional overreach correction

Many Americans — and many Canadians — have legitimate frustrations with institutional capture by unaccountable actors. Regulatory agencies that serve the industries they're supposed to regulate. Technology companies that exercise enormous power over public discourse without democratic oversight. Academic institutions that receive public funding while operating as ideological monocultures. The populist impulse to disrupt these institutions is not inherently authoritarian — it can reflect genuine democratic frustration with systems that have become unresponsive to the public.

The challenge is distinguishing between reforming institutions to make them more accountable and dismantling the institutional architecture that enables democratic self-correction. The former strengthens democracy. The latter, regardless of intent, weakens it.

Where This Path Leads: Three Scenarios

Rather than predict a single future, it is more useful to outline the trajectories that the current pattern could follow.

Scenario 1: Institutional Resilience and Correction

In this scenario, the pattern reaches its limits. Courts intervene — as they have in some cases already. Anthropic's legal challenge to the supply chain risk designation succeeds. Congressional oversight reasserts itself, potentially through the 2026 midterm elections. The AI industry collectively holds its red lines, making the coercive strategy unworkable. International allies, including Canada, articulate independent standards that create diplomatic friction with U.S. overreach.

Historical precedent supports this scenario in some respects. Harvard's Steven Levitsky, co-author of How Democracies Die, has argued that American democratic backsliding "can be reversed — and I think likely will be reversed," pointing to the resilience of decentralized elections and the vibrancy of civil society.

The risk: this scenario depends on institutions actually exercising their checking functions. The Century Foundation's Democracy Meter rated U.S. democracy at 57/100 in 2025, a 28 percent decline in a single year, driven substantially by institutional capitulation rather than institutional resistance. The Harvard researchers noted a pattern of "forced capitulation of a lot of autonomous civil society" — institutions that privately disagreed but publicly complied, accelerating the very dynamic they feared.

Scenario 2: Accelerating Consolidation

In this scenario, the Anthropic precedent becomes a template. Having demonstrated that a supply chain risk designation can be used against a domestic company for contractual disagreements, the administration applies similar pressure to other technology companies, research institutions, and private sector actors. The Defense Production Act becomes a routine tool of commercial coercion. Companies learn that principled resistance carries existential risk, and self-censor accordingly.

The AI industry is particularly vulnerable to this dynamic. OpenAI accepted the Pentagon's terms within hours. The petition signed by 430 employees across Google and OpenAI represented solidarity in principle, but their companies' leadership moved in the opposite direction. If OpenAI — burning $25 billion in 2026 with no path to profitability until 2030 — receives the same ultimatum Anthropic received, its financial structure makes resistance functionally impossible. The company that can't afford to say no becomes the preferred partner of an administration that punishes those who do.

This creates a selection effect: the AI companies that survive and thrive in this environment are the ones most willing to comply with whatever is demanded, while those with the strongest safety commitments are economically marginalized. The market rewards compliance and punishes principle — not through ideology, but through the structural mathematics of who can afford to resist.

The Carnegie Endowment's comparative analysis notes that in other backsliding democracies, the private sector's capitulation to government pressure was often the critical accelerant. When Brazil's business community signed an open letter defending democratic rule in 2022, it helped create a coalition that defeated Bolsonaro. When Hungary's business community accommodated Orbán, it helped entrench his regime.

Scenario 3: Normalization Without Resolution

Perhaps the most likely — and most insidious — scenario is neither dramatic correction nor dramatic consolidation, but normalization. The Anthropic case becomes one news cycle among many. The legal challenge proceeds through courts for years. The supply chain risk designation remains technically in place but unevenly enforced. Other companies quietly adjust their terms to avoid similar confrontation. The precedent exists but is rarely invoked — its power lies in the threat, not the execution.

This is the scenario that comparative political science suggests is most common and most dangerous. Harvard's Erica Chenoweth identified an alarming pattern: "In a democracy, if you win a lawsuit, you win — that settles the conflict. In a country that isn't a democracy, when you win a lawsuit, you still lose if it's against the government, because they find other ways to bully or to inflict pain."

In this scenario, the formal legal structures of democracy remain intact. Elections happen. Courts issue rulings. Companies have the theoretical right to resist. But the practical cost of exercising that right has been raised so high that few do. The democracy is formally functional and practically constrained. The coup is complete not because the institutions have been abolished, but because they have been made too expensive to use.

Why This Matters for Canada — Specifically

Canada is not the United States. But Canada exists in an integrated security, economic, and technological relationship with the United States that makes American institutional dynamics directly consequential.

Defence integration

NORAD and NATO interoperability means that AI systems deployed by the U.S. military operate in joint command structures where Canadian personnel serve. If U.S. procurement decisions are driven by political compliance rather than capability and safety assessment, Canadian forces inherit those compromised systems without having participated in the decision. The Anthropic case demonstrates that the U.S. may now select AI vendors based on their willingness to grant unrestricted access rather than on safety evaluation.

Economic exposure

The supply chain risk designation doesn't just affect Anthropic's government contract — it forces every company doing business with the U.S. military to cut commercial ties with Anthropic. Canadian companies in integrated supply chains face the same pressure. This is extraterritorial economic coercion applied to the private sector of allied nations, using a national security mechanism designed for adversary states.

Surveillance infrastructure

Five Eyes intelligence sharing means that surveillance capabilities developed by the United States are accessible to partner nations. If the U.S. removes AI safety guardrails and deploys mass data collection tools, that infrastructure doesn't respect borders. Canadian citizens' data, movements, communications, and financial transactions are accessible through commercial data brokers — the same category of data the Pentagon reportedly sought to analyze using Anthropic's technology.

Technology sovereignty

Canada is a consumer, not a producer, of frontier AI. Every major AI model used by Canadian governments, businesses, and citizens is built by American companies subject to American political dynamics. The Anthropic case demonstrates that those companies can be coerced, blacklisted, or compelled to modify their products for political reasons. Any Canadian organization relying on American AI infrastructure — which is effectively all of them — is now subject to the policy preferences of the U.S. executive branch, whether or not those preferences align with Canadian law, values, or interests.

The democratic example

Perhaps most fundamentally, Canada shares a border and a civilization with the country that has served as the primary reference point for democratic governance for over two centuries. When that reference point degrades, it affects the global democratic ecosystem. Canada has historically operated within a framework of shared democratic values with its closest ally. When those values diverge — not in marginal policy differences, but in fundamental questions about the rule of law, institutional independence, and the limits of executive power — Canada must decide whether to follow, resist, or chart an independent path.

The Lawfare Analysis

For those interested in the legal specifics, the Lawfare Institute published a detailed analysis of what the Defense Production Act can and cannot lawfully do in this context. The DPA is a Korean War-era statute that gives the president broad authority to direct private industry for national defense. The Biden administration previously invoked it (Title VII, information-gathering authority) to require AI companies to share safety test results. The Pentagon's threat against Anthropic invoked Title I — the statute's core compulsion power. Legal scholars have noted this would represent "an enormous escalation" from any prior use.

The DPA has never been used to compel a company to produce a product it has deemed unsafe, or to dictate its terms of service. Multiple experts have described the threatened use as "without precedent under the history of the DPA." Anthropic has stated it will challenge the supply chain risk designation in court, arguing it is "legally unsound" and sets "a dangerous precedent for any American company that negotiates with the government."

The legal question may ultimately be less important than the structural one. Even if Anthropic prevails in court, the demonstration effect — that the U.S. government will deploy its most powerful economic weapons against domestic companies for contractual disagreements — has already occurred. The precedent exists whether or not the courts validate it.

Questions for Discussion

  1. The "legally grey coup" question: When legal mechanisms are used for purposes fundamentally different from their design, at what point does "aggressive governance" become "institutional subversion"? Is there a bright line, or is it inherently a matter of judgment? Who gets to make that judgment?
  2. The private sector's role: Should companies like Anthropic be making decisions about what the military can and cannot do? Or is the alternative — the military having unrestricted access to the most powerful cognitive technology ever built — more dangerous? Is there a third option?
  3. The compliance selection effect: If the companies that survive in this environment are those most willing to comply, and those with the strongest safety commitments are marginalized, what does the AI industry look like in five years? What does the technology look like?
  4. Canadian independence: Should Canada develop independent AI capabilities for classified government work, even at significant cost, to avoid dependence on American technology subject to American political dynamics? What would that require?
  5. The normalization question: How do citizens recognize democratic erosion when it happens through legal mechanisms rather than dramatic events? What institutions, if any, are responsible for sounding the alarm? What happens when those institutions are themselves under pressure?
  6. Historical parallels: The article draws comparisons to Hungary, Brazil, and other cases of democratic backsliding. Are these comparisons illuminating or exaggerated? What are the important differences between the U.S. case and other backsliding democracies?
  7. The ally question: When your closest ally's governance begins to diverge from shared democratic norms, what is the appropriate response? Quiet diplomacy? Public criticism? Institutional distancing? Building independent capacity? Some combination?

Sources: Carnegie Endowment for International Peace, "U.S. Democratic Backsliding in Comparative Perspective" (2025) | The Century Foundation, "Democracy Meter" (2026) | Harvard Kennedy School, Chenoweth & Levitsky, "The Breakdown" series (2025) | Brennan Center for Justice, "International Lessons on Democratic Backsliding and Recovery" (2025) | UCLA School of Law, Cummings, "Stopping Autocratic Legalism in America — Before It Is Too Late" (2025) | California Law Review, "Lawyers in Backsliding Democracy" (2024) | Lawfare Institute, "What the Defense Production Act Can and Can't Do to Anthropic" (2026) | Holland & Knight, "Defense Contractors Face New Scrutiny" (2026) | Congressional Research Service, "Reauthorizing the Defense Production Act" (2026) | Axios, CNBC, Washington Post, TechCrunch, DefenseScoop, CBS News (February 2026 coverage) | Anthropic official statements | Forbes, "OpenAI and Google Staffers Sign Petition" (2026)

This article is published on CanuckDUCK's Pond forum as part of our commitment to balanced, multi-perspective civic discourse. The platform presents arguments from all sides and invites readers to form their own judgments. If you disagree with the framing, the analysis, or the conclusions, the comment section exists for exactly that purpose.
