Every system for collective decision-making can be gamed. Manipulated. Captured. This isn't cynicism—it's the starting point for building systems that actually work.
Most platforms hide their vulnerabilities, hoping obscurity provides protection. We believe the opposite: informed communities are resilient communities. If you understand how democratic systems break, you can help protect them. You can spot manipulation. You can contribute to defense. You can calibrate your trust appropriately.
This series publishes what we know about breaking voting systems because we're trying to build better ones. Transparency about failure modes is how we invite scrutiny, earn trust, and improve.
The Articles
1. The Sybil Problem: Why "One Person, One Vote" Is Harder Than It Sounds
The foundational attack. Create fake identities, vote multiple times, manufacture consensus that represents nothing real. Named after the famous case study of a woman with sixteen personalities, Sybil attacks exploit the gap between physical identity and digital presence.
Key insight: Every defense sits on a spectrum between complete anonymity (maximally inclusive, maximally exploitable) and complete identification (secure but exclusionary). Different decisions warrant different identity assurance levels. There is no neutral choice.
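The core failure is easy to state in code. Here is a toy sketch (the vote counts are invented for illustration): a tally that counts one ballot per identity, in a system that cannot distinguish real identities from throwaway ones.

```python
from collections import Counter

def tally(ballots):
    """One ballot per identity; the system cannot tell real from fake."""
    return Counter(ballots).most_common(1)[0][0]

# An honest community leans 6-4 against a proposal.
honest = ["reject"] * 6 + ["approve"] * 4
print(tally(honest))  # reject

# One attacker registers three throwaway identities and votes with each.
sybils = ["approve"] * 3
print(tally(honest + sybils))  # approve
```

Three fake accounts flip a 6-4 outcome. Every identity-assurance mechanism in the series is, at bottom, an attempt to make that second line expensive.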
2. Money Talks: Vote Buying, Whale Dominance, and Plutocratic Drift
The obvious attack—cash for votes—is less dangerous than the sophisticated versions. Information control, agenda setting, and system design let money shape decisions without touching individual voters.
Key insight: Vote-selling is a symptom, not a cause. Criminalizing sellers punishes poverty. The productive intervention addresses buyers, builds systems where buying doesn't work, and confronts the desperation that makes selling attractive.
3. The Timing Game: When You Vote Matters As Much As How
Early voters reveal information; late voters exploit it. Sniping, cascade effects, strategic abstention—time creates asymmetric advantages that naive systems ignore.
Key insight: Every timing defense trades against something valuable. Hide votes until the end and you lose deliberation. Remove deadlines and you lose decisiveness. There is no configuration without costs.
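The sniping advantage can be made concrete with a toy model (the vote counts and weights are invented for illustration): when the running tally is visible, a strategic voter abstains until the deadline and commits only when they are pivotal, revealing nothing to anyone else.

```python
def open_tally_sniper(early_votes, sniper_weight):
    """With a visible running tally, a sniper waits until the deadline,
    then votes 'no' only if their weight is enough to flip a 'yes' lead.
    Returns the final outcome."""
    yes = early_votes.count("yes")
    no = early_votes.count("no")
    leader = "yes" if yes > no else "no"
    if leader == "yes" and yes - no <= sniper_weight:
        return "no"  # sniper strikes at the deadline and flips it
    return leader   # sniper abstains; outcome unchanged

early = ["yes"] * 5 + ["no"] * 3
print(open_tally_sniper(early, sniper_weight=3))  # no
print(open_tally_sniper(early, sniper_weight=1))  # yes
```

Hiding the tally removes this exploit but also removes the information early voters provide to deliberation, which is exactly the trade-off described above.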
4. Collusion & Cartels: When Voters Work Together Against Everyone Else
Coordination is democracy's lifeblood and its capture mechanism. The line between legitimate coalition and illegitimate cartel isn't always visible. Sometimes it isn't a line at all.
Key insight: The best cartels never get caught. Sophisticated coordination looks like organic consensus. The absence of visible manipulation doesn't mean safety—it might mean competent manipulation.
5. Conviction Voting: Promise & Peril
Time-weighted commitment rewards patience and filters drive-by attacks. But patient attackers can accumulate conviction silently. Lock-in dynamics trap participants in suboptimal positions. The mechanism that solves some problems creates others.
Key insight: Conviction voting works when your threat model is impatient external actors. It fails when adversaries can plan in months rather than minutes, or when contexts change faster than commitment cycles allow.
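One common formulation of conviction voting (the exponential-decay accumulator used in several on-chain implementations; the decay factor and stake numbers here are illustrative assumptions, not the series' specific parameters) makes both the promise and the peril visible:

```python
def conviction_series(stake_per_step, steps, alpha=0.9):
    """Exponential-decay conviction: conviction(t) = alpha * conviction(t-1) + stake.
    With constant stake, conviction converges toward stake / (1 - alpha)."""
    conviction = 0.0
    history = []
    for _ in range(steps):
        conviction = alpha * conviction + stake_per_step
        history.append(conviction)
    return history

# A drive-by attacker with 6x the stake, present for one step:
print(conviction_series(stake_per_step=60, steps=1)[-1])   # 60.0

# A patient participant with modest stake, present for 50 steps,
# approaches the ceiling 10 / (1 - 0.9) = 100:
print(round(conviction_series(stake_per_step=10, steps=50)[-1], 1))
```

The same math cuts both ways: the patience that filters drive-by attacks is available to any attacker willing to park stake quietly and wait.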
6. Quadratic Mechanisms: Cost Curves Against Plutocracy
Quadratic voting and funding compress wealth advantages through mathematical elegance. One hundred times the resources yields ten times the voice, not one hundred. But the math assumes one identity per participant—and collapses entirely when that assumption fails.
Key insight: Quadratic mechanisms are genuinely innovative and genuinely fragile. They require solved identity to function. In anonymous adversarial contexts, they're invitations to exploitation.
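Both halves of that claim, the compression and the collapse, follow from the cost curve itself. A minimal sketch (budget figures are illustrative): casting n votes costs n² credits, so a budget of B credits buys √B votes, and splitting B across k fake identities buys √(kB) votes.

```python
import math

def votes_for_budget(budget):
    """Quadratic cost: n votes cost n**2 credits, so budget B buys sqrt(B) votes."""
    return math.sqrt(budget)

# Compression: 100x the budget buys only 10x the voice.
print(votes_for_budget(100))    # 10.0
print(votes_for_budget(10000))  # 100.0

def sybil_votes(budget, identities):
    """Split one budget across k identities: each casts sqrt(budget/k) votes,
    for a total of k * sqrt(budget/k) = sqrt(k * budget) votes."""
    return identities * math.sqrt(budget / identities)

# With 100 fake identities, the same budget buys 10x the votes.
print(sybil_votes(10000, 1))    # 100.0
print(sybil_votes(10000, 100))  # 1000.0
```

The compression is real only while "one identity per participant" holds; every fake identity claws back exactly the advantage the square root took away.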
7. The Countermeasures Toolkit: Building Defense in Depth
No single mechanism stops all attacks. Layered defense—identity, influence distribution, temporal controls, transparency, human oversight, structural boundaries—creates overlapping coverage where gaps don't align.
Key insight: Security is a stack, not a solution. Every layer is imperfect. The combination can be adequate when individual components aren't.
8. Why Perfect Security Isn't The Goal: Building for Resilience, Not Invulnerability
Perfect security is impossible, and pursuing it is counterproductive. Resilience—surviving and recovering when attacks succeed—matters more than prevention alone.
Key insight: The most resilient systems aren't the ones with the cleverest mechanisms. They're the ones with communities that care enough to maintain them. Human engagement is irreplaceable.
Who Should Read This
- Builders designing governance systems for communities, organizations, or platforms
- Participants in any system where collective decisions matter
- Researchers studying mechanism design, democratic theory, or security
- Citizens who want to understand how democratic infrastructure works—and fails
What This Series Doesn't Cover
We focused on security—how systems get attacked and defended. Equally important questions remain:
- Participation: How do you get people to engage in the first place?
- Deliberation: How do you ensure discussion quality, not just decision security?
- Legitimacy: Why should anyone accept these decisions as binding?
- Implementation: How do you actually build these systems in practice?
These deserve their own treatment. This series provides a foundation; the building continues.
An Invitation
CanuckDUCK publishes this analysis because we're building civic infrastructure for Canadian communities. We don't claim to have solved these problems. We claim to be working on them honestly.
The systems we build will have vulnerabilities. We'll find some before attackers do and some after. We'll fix what we can and adapt to what we can't. We'll be transparent about imperfection because you deserve to know what you're participating in.
If that approach resonates—if you want democratic infrastructure built with open eyes about what it's up against—we invite you to participate.
Criticism welcome. Contributions welcome. The project of democratic infrastructure belongs to everyone willing to work on it.
Series: "Breaking Democracy (So We Can Fix It)"
Published by CanuckDUCK Research Corporation
Licensed for sharing with attribution