Node: Collective Consensus Protocol
Abstract. Node is a peer-to-peer governance protocol enabling cryptographically verified collective decision-making without centralized infrastructure. The system combines post-quantum digital signatures (Dilithium3), supermajority quorum sensing, and human-in-the-loop verification to achieve consensus that reflects genuine agreement rather than algorithmic determination or plutocratic token weighting. Grounded in behavioral economics research on cooperation and Elinor Ostrom's design principles for commons governance, Node represents a deliberate departure from traditional power structures toward infrastructure that resists capture by wealth, credentials, or democratic majorities alike.
1. Introduction
1.1 The Failure of Existing Models
Contemporary approaches to collective decision-making suffer from fundamental structural deficiencies that transcend implementation details. Centralized platforms exercise unilateral control over group governance through opaque algorithmic systems, concentrating power in entities whose incentives diverge from user welfare. Data flows upward; behavioral modification flows downward. Users participate in systems they cannot inspect, governed by rules they cannot influence.
Decentralized autonomous organizations (DAOs), despite theoretical promise, reproduce familiar pathologies in novel form. Token-weighted voting creates plutocracy: governance by wealth concentration. Those with capital determine outcomes for those without. Participation rates remain chronically low because rational actors recognize that individual votes carry negligible weight against whale holdings. The result is governance theater - democratic aesthetics concealing oligarchic function.
Meritocratic alternatives fare no better. Systems that weight influence by demonstrated expertise or contribution create hegemonic meritocracy: rule by those who define merit. Credentialing bodies become gatekeepers; early contributors cement permanent advantage; the criteria for "meaningful contribution" encode incumbent preferences. Merit becomes a mechanism for exclusion dressed in the language of fairness.
Even well-intentioned democratic designs risk hyper-capitalist democracy: systems where voice can be purchased, attention can be manufactured, and participation itself becomes a commodity. When engagement metrics drive influence, those with resources to generate engagement dominate those who merely have something to say.
1.2 Design Intent
Node represents a deliberate step away from traditional power organization. The protocol does not attempt to perfect democracy, meritocracy, or markets. Instead, it creates infrastructure for collective agreement - conditions under which groups can reach genuine consensus without that consensus being captured by any subset of participants.
This requires solving multiple problems simultaneously: preventing wealth from purchasing influence (anti-plutocracy), preventing credentials from gatekeeping participation (anti-meritocracy), preventing majorities from suppressing minorities (anti-tyranny), and preventing the infrastructure itself from becoming a lever of control (anti-capture). The protocol's technical choices - supermajority thresholds, trust decay, human-in-the-loop verification, witness requirements - each address specific failure modes identified in existing systems.
1.3 Research Foundation
Node's design draws on empirical research in behavioral economics and institutional analysis, particularly work on cooperation dynamics and commons governance.
Behavioral Economics on Cooperation
Van den Assem et al. (2012): Analysis of the Golden Balls television show revealed 53% cooperation rates in high-stakes prisoner's dilemma scenarios, demonstrating that cooperation emerges reliably even under adversarial incentives when participants engage face-to-face with mutual stakes.
De Quervain et al. (2004): Neuroimaging studies show that punishing defectors activates reward circuitry - punishment feels good. However, cooperation requires cognitive effort and emerges from deliberate choice rather than instinct. Protocol design must account for the asymmetry between punishment (emotionally rewarding) and cooperation (cognitively costly).
Fehr & Gächter (2000, 2002): Experimental work established that punishment effectiveness depends on cost-to-impact ratio. Optimal outcomes combine credible punishment threat with positive rewards for cooperation. Pure punishment regimes degrade; pure reward regimes invite exploitation.
Herrmann et al. (2008): Cross-cultural studies documented antisocial punishment - the phenomenon where some populations punish cooperators rather than defectors. Punishment mechanisms only function in contexts with strong civic norms; absent such norms, punishment tools become weapons for norm destruction.
These findings inform Node's graduated trust system, witness requirements, and review mechanisms. The protocol assumes neither universal cooperation nor universal defection, instead creating conditions where cooperation becomes the rational strategy for participants operating in good faith while limiting damage from those who are not.
Ostrom's Design Principles for Commons Governance
Elinor Ostrom's empirical analysis of successful commons governance identified eight design principles present in enduring self-governing institutions. Node implements each:
- Clearly defined boundaries. Pool membership requires shared passphrase knowledge and mutual approval. Who belongs is explicit; what resources are governed (expressions, proposals, pins) is defined by the protocol.
- Proportional equivalence. Benefits scale with contribution. XP accumulates through verified engagement; trust scores reflect interaction quality; staking capacity ties to demonstrated participation rather than capital.
- Collective choice arrangements. Those affected by rules participate in rule-making. Decay policies, pin decisions, and prune proposals require quorum approval from pool members. No external authority imposes governance.
- Monitoring. Behavior is observable. Acknowledgments are signed and timestamped; review scores are recorded; trust updates are deterministic. Participants can verify claims about contribution.
- Graduated sanctions. Trust scores adjust incrementally (0.7 history weight + 0.3 new observation). Poor behavior degrades reputation over time rather than triggering immediate expulsion. Recovery is possible.
- Fast and fair conflict resolution. The mutual approval protocol and review system provide local mechanisms for dispute resolution. Elaboration exchange surfaces intentions; review consensus establishes community standards.
- Local autonomy. Pools operate independently. No central authority validates pool legitimacy or overrides local decisions. The protocol provides infrastructure; communities provide governance.
- Polycentric governance. Testimony enables cross-pool attestation. Content can accrue legitimacy across trust domain boundaries without requiring global consensus. Nested validation emerges organically.
These principles are not aspirational; they are implemented in protocol mechanics. The question is not whether Node follows Ostrom's principles but whether the specific implementations achieve the intended effects.
2. Technical Architecture
2.1 Identity System
Each node generates a Dilithium3 keypair upon initialization, producing approximately 2KB public keys resistant to both classical and quantum cryptanalytic attacks. The decentralized identifier (DID) derives from the public key: did:diagon:<hex(pubkey[0:16])>. DID-pubkey binding verification occurs at handshake, preventing identity spoofing.
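The derivation can be sketched in a few lines. The function name is illustrative; in the actual protocol the key bytes would come from a freshly generated Dilithium3 keypair rather than an arbitrary slice:

```rust
/// Derive a did:diagon identifier from a raw public key, per the spec:
/// did:diagon:<hex(pubkey[0:16])>. The first 16 bytes of the key are
/// hex-encoded, yielding a 32-character method-specific identifier.
fn did_from_pubkey(pubkey: &[u8]) -> String {
    let hex: String = pubkey[..16].iter().map(|b| format!("{:02x}", b)).collect();
    format!("did:diagon:{}", hex)
}
```

Because the DID is a pure function of the public key, any peer can recompute it from the key presented at handshake and reject a mismatch, which is what makes the binding check effective against spoofing.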
2.2 Epigenetic Trust Marks
Nodes maintain trust scores ranging from 0.0 to 1.0 for each known peer. Scores initialize at 0.5 and update via exponential moving average: score = score × 0.7 + quality × 0.3, where quality reflects the outcome of specific interactions. Unverified peers are capped at 0.6. A minimum trust of 0.4 is required to submit proposals.
| Parameter | Value |
|---|---|
| Default Trust | 0.5 |
| History Weight | 0.7 |
| New Observation Weight | 0.3 |
| Unverified Cap | 0.6 |
| Proposal Threshold | 0.4 |
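The update rule above can be sketched as follows. The function names and the `verified` flag plumbing are illustrative rather than protocol-mandated API; only the constants and the EMA formula come from the specification:

```rust
const HISTORY_WEIGHT: f64 = 0.7;
const OBSERVATION_WEIGHT: f64 = 0.3;
const UNVERIFIED_CAP: f64 = 0.6;
const PROPOSAL_THRESHOLD: f64 = 0.4;

/// Exponential-moving-average trust update:
/// score = score * 0.7 + quality * 0.3, clamped to [0, 1],
/// with unverified peers additionally capped at 0.6.
fn update_trust(score: f64, quality: f64, verified: bool) -> f64 {
    let updated = score * HISTORY_WEIGHT + quality * OBSERVATION_WEIGHT;
    if verified {
        updated.clamp(0.0, 1.0)
    } else {
        updated.clamp(0.0, UNVERIFIED_CAP)
    }
}

/// A peer may submit proposals only at or above the 0.4 trust floor.
fn may_propose(score: f64) -> bool {
    score >= PROPOSAL_THRESHOLD
}
```

Note the asymmetry this produces: a fresh peer (0.5) with a perfect interaction would reach 0.65, but stays pinned at 0.6 until verified, so verification, not behavior alone, gates high trust.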
2.3 Pool Architecture
Trust domains ("pools") are defined by shared passphrases processed through Argon2id (64MB memory, 3 iterations, 4 threads). Nodes connect only to peers presenting matching pool commitments. Three genesis pools provide bootstrap infrastructure; additional pools emerge through passphrase distribution outside the protocol.
3. Consensus Mechanism
3.1 Content Addressing
Expressions (proposals, messages, votes) receive content identifiers (CIDs) computed as SHA256(data || 256-bit random || timestamp). The random component guarantees that independently created expressions remain unique even when their payloads are identical, while re-broadcasts of a single expression share one CID and deduplicate in the store. The expression store maintains a Merkle root commitment over the log for integrity verification.
3.2 Quorum Sensing
Proposal passage requires accumulating weighted votes exceeding a dynamic threshold: max(⌈(peer_count + 1) × 0.67 × 1000⌉, 1000). Vote weights derive from trust: max(trust × 1000, 100), establishing a minimum influence floor for all participants.
Votes decay exponentially with a 5-minute half-life, requiring sustained agreement rather than transient majorities. Constraints include one vote per DID, prohibition of self-voting, and mandatory signatures on all votes.
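The vote arithmetic above can be sketched directly. One implementation note, which is a choice here rather than something the spec mandates: since 0.67 × 1000 = 670 points per peer, the threshold is computed in integer arithmetic so the ceiling in the formula is exact and immune to floating-point rounding:

```rust
/// Dynamic passage threshold: max(ceil((peer_count + 1) * 0.67 * 1000), 1000).
/// 0.67 * 1000 = 670 weight-points per peer, so integer math gives the
/// exact ceiling value.
fn quorum_threshold(peer_count: u64) -> u64 {
    ((peer_count + 1) * 670).max(1000)
}

/// Trust-weighted vote with the 100-point influence floor:
/// max(trust * 1000, 100).
fn vote_weight(trust: f64) -> u64 {
    ((trust * 1000.0) as u64).max(100)
}

/// Exponential vote decay with the 5-minute (300 s) half-life.
fn decayed_weight(weight: u64, age_secs: f64) -> f64 {
    weight as f64 * 0.5_f64.powf(age_secs / 300.0)
}
```

The interaction of these pieces is what forces sustained agreement: a vote worth 800 points is worth only 400 five minutes later, so a proposal passes only if enough weighted support is simultaneously "fresh" to clear the threshold.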
3.3 Content Lifecycle
Expressions persist subject to configurable decay policies. Inactivity triggers candidacy for pruning, which itself requires quorum approval. Critical content may be pinned through governance, exempting it from decay. Store capacity limits (100,000 expressions, 10,000 proposals) prevent unbounded growth.
4. Witness System
The witness system provides cryptographic attestation of human attention to content, addressing the authenticity problem in digital engagement metrics.
4.1 Acknowledgment (Intra-Pool)
Within a pool, nodes may witness content through acknowledgment. The process requires: (1) initiating observation via witness command, (2) maintaining attention for minimum dwell time (5 seconds), and (3) issuing signed acknowledgment optionally including reflection text. Acknowledgments with substantive reflections (10+ characters) receive enhanced weight (300 vs. 100 base).
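A minimal sketch of the acknowledgment weighting, assuming the dwell-time check is inclusive at 5 seconds and the reflection minimum is counted in characters (the spec does not state either boundary precisely):

```rust
const MIN_DWELL_SECS: u64 = 5;
const MIN_REFLECTION_CHARS: usize = 10;
const BASE_ACK_WEIGHT: u64 = 100;
const REFLECTION_WEIGHT: u64 = 300;

/// Weight of a signed acknowledgment. Returns None if the dwell-time
/// requirement is unmet; otherwise 300 for a substantive reflection
/// (10+ characters), 100 for a bare acknowledgment.
fn ack_weight(dwell_secs: u64, reflection: Option<&str>) -> Option<u64> {
    if dwell_secs < MIN_DWELL_SECS {
        return None;
    }
    match reflection {
        Some(text) if text.chars().count() >= MIN_REFLECTION_CHARS => Some(REFLECTION_WEIGHT),
        _ => Some(BASE_ACK_WEIGHT),
    }
}
```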
4.2 Testimony (Inter-Pool)
Content may accrue cross-pool attestation through testimony. A node in Pool B may testify to content originating in Pool A, providing attestation text (minimum 10 characters) signed with their credentials. Testimony weight (500 per pool) enables content validation across trust domain boundaries.
4.3 Human-in-the-Loop Review
The protocol incorporates structured quality review with five evaluation dimensions: relevance, substance, clarity, originality, and effort. Each dimension accepts scores 1-5. Reviewers must provide justification (minimum 10 characters) and may flag content for spam, plagiarism, or off-topic violations.
Consensus emerges through score aggregation with outlier detection (threshold: 2.0 standard deviations). Reviewer reputation develops through alignment with consensus outcomes, creating incentive compatibility. Apprentice reviewers complete 5 reviews before receiving full weight.
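The aggregation step can be sketched as follows. Details not fixed by the text are assumptions here: population (rather than sample) standard deviation, and an inclusive 2.0σ cutoff:

```rust
/// Aggregate review scores for one dimension: drop scores more than
/// 2.0 standard deviations from the mean, then average the remainder.
/// Returns None only for an empty slice.
fn consensus_score(scores: &[f64]) -> Option<f64> {
    if scores.len() < 2 {
        return scores.first().copied();
    }
    let n = scores.len() as f64;
    let mean = scores.iter().sum::<f64>() / n;
    let var = scores.iter().map(|s| (s - mean).powi(2)).sum::<f64>() / n;
    let std = var.sqrt();
    // When all scores agree, std is 0 and every score is kept.
    let kept: Vec<f64> = scores
        .iter()
        .copied()
        .filter(|s| std == 0.0 || (*s - mean).abs() <= 2.0 * std)
        .collect();
    Some(kept.iter().sum::<f64>() / kept.len() as f64)
}
```

The reputation incentive follows from this: a reviewer whose scores repeatedly land outside the kept set diverges from consensus outcomes and loses weight over time.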
5. Federated Audit Registry
The audit system enforces social trust through a no-knock right: any participant may challenge content they believe harmful, staking their reputation on the claim. Human reviewers—not algorithms—assess disputes. Verdicts propagate across the network as a shared registry of accountability.
Auditing exists to enforce social norms, not legal compliance. The protocol makes no claim about what constitutes harm—communities define their boundaries. What the protocol guarantees is process: challenges are heard, reviewers are impartial, verdicts are permanent and public, and consequences follow deterministically from outcomes.
5.1 The No-Knock Right
Any node holding at least 10 XP may initiate an audit against any content in their pool. No permission is required. No warning is given. This design reflects a core principle: the right to challenge must be unconditional, or it becomes a privilege that protects incumbents.
However, auditing is not free. Initiating a challenge requires staking XP—reputation accumulated through verified participation. Frivolous or malicious audits cost the auditor; successful identification of harmful content rewards them. The mechanism creates skin in the game without gatekeeping who may participate.
Stake Calculation
Audit cost scales with the challenger's total XP and audit history, creating proportional skin-in-the-game:
base_cost = total_xp ÷ 3
escalation = 1 + (0.5 × decayed_audit_count)
stake = base_cost × escalation, capped at 99% of total XP
This formula ensures challengers stake proportionally to their accumulated reputation. A user with 300 XP stakes 100 XP on their first audit; a user with 3,000 XP stakes 1,000 XP. Prior audits decay with a 30-day half-life, allowing reputation recovery while maintaining accountability for patterns of frivolous challenges.
| Parameter | Value |
|---|---|
| Minimum XP to Audit | 10 |
| Base Stake | 33% of total XP |
| Per-Audit Escalation | +50% |
| Audit Count Half-Life | 30 days |
| Maximum Stake Fraction | 99% |
| Content Cooldown | 24 hours |
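The stake formula above translates directly into code. The decay helper takes audit ages in days; that input representation is an assumption of this sketch, as is computing in `f64` rather than integer XP:

```rust
/// Audit stake per the registry formula:
///   base_cost   = total_xp / 3
///   escalation  = 1 + 0.5 * decayed_audit_count
///   stake       = min(base_cost * escalation, 0.99 * total_xp)
fn audit_stake(total_xp: f64, decayed_audit_count: f64) -> f64 {
    let base_cost = total_xp / 3.0;
    let escalation = 1.0 + 0.5 * decayed_audit_count;
    (base_cost * escalation).min(0.99 * total_xp)
}

/// Prior audits decay with a 30-day half-life: an audit 30 days old
/// counts as half an audit toward escalation.
fn decayed_audit_count(audit_ages_days: &[f64]) -> f64 {
    audit_ages_days.iter().map(|d| 0.5_f64.powf(d / 30.0)).sum()
}
```

The 99% cap matters for prolific challengers: a 300-XP user with four fully weighted prior audits would owe 300 XP by the formula, but the cap holds the stake at 297, so an audit can never zero out an account outright.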
5.2 Harm Categories
When initiating an audit, the challenger specifies the alleged harm category. The protocol defines seven categories reflecting consensus harms across jurisdictions and cultural contexts:
| Category | Code | Description |
|---|---|---|
| Illegal | 1 | Content violating criminal law (CSAM, terrorism, trafficking) |
| Doxxing | 2 | Privacy violation, exposing personal information without consent |
| Violence | 3 | Credible threats of violence against individuals or groups |
| Harassment | 4 | Targeted abuse, coordinated harassment campaigns |
| Malware | 5 | Malicious code, exploits, or technical attack vectors |
| Exploitation | 6 | Non-consensual intimate imagery, financial exploitation |
| Misinformation | 7 | Demonstrably false claims causing material harm |
Challengers must provide written justification (20–2000 characters) and a harm note (10–1000 characters) explaining the specific concern. These requirements filter low-effort challenges while creating an evidentiary record.
5.3 Reviewer Assignment
Upon challenge initiation, the protocol assigns an eligible reviewer from the pool. Eligibility requires:
- Minimum 10 XP (demonstrated participation)
- Not the content author (conflict of interest)
- Not the challenger (impartiality)
- Active pool membership
Assignment is deterministic based on available eligible reviewers. If no eligible reviewer exists, the audit cannot proceed—small pools must develop sufficient participation before the audit mechanism activates.
5.4 Review Process
Assigned reviewers have 48 hours to render a verdict. The reviewer examines the challenged content, the challenger's justification, and the harm category claim. Three outcomes are possible:
Content Acceptable
The reviewer determines the content does not constitute the alleged harm. The challenger loses their staked XP; the reviewer receives 15 XP for completing the review. The content author is vindicated. The verdict is recorded but no blacklisting occurs.
Content Harmful
The reviewer confirms the harm allegation. The content author's DID is blacklisted. The content hash is added to the harmful content registry. The challenger receives their stake back plus 50 XP reward. The reviewer receives 40 XP.
Expired
The reviewer fails to respond within 48 hours. The challenger's stake is refunded. No verdict is rendered. The content remains in its prior state.
| Outcome | Challenger XP | Reviewer XP | Author Consequence |
|---|---|---|---|
| Acceptable | −stake | +15 | None |
| Harmful | +50 | +40 | Blacklisted |
| Expired | refund | 0 | None |
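The outcome table reduces to a small settlement function. Expressing the challenger's change as a net delta against their pre-audit balance (stake already escrowed, so a refund nets to zero) is a modeling choice of this sketch, not protocol API:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Verdict {
    Acceptable,
    Harmful,
    Expired,
}

/// Settle an audit per the outcome table. Returns
/// (challenger_xp_delta, reviewer_xp_delta, author_blacklisted),
/// with the challenger's delta net of the refunded/forfeited stake.
fn settle(verdict: Verdict, stake: i64) -> (i64, i64, bool) {
    match verdict {
        // Challenge rejected: stake forfeit, reviewer earns 15 XP.
        Verdict::Acceptable => (-stake, 15, false),
        // Harm confirmed: stake refunded plus 50 XP; reviewer earns 40 XP.
        Verdict::Harmful => (50, 40, true),
        // Reviewer timed out after 48 h: stake refunded, no other change.
        Verdict::Expired => (0, 0, false),
    }
}
```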
5.5 The Federated Registry
Audit verdicts propagate across pool boundaries through a shared, federated registry. This registry contains three categories of public information:
Blacklisted DIDs
Decentralized identifiers of authors whose content was confirmed harmful. DIDs are public; the specific offense is recorded but the content itself is not distributed.
Harmful Content Hashes
SHA-256 hashes of confirmed-harmful content. Nodes can check incoming content against this registry without possessing or distributing the harmful material.
Verdict Records
Complete audit verdicts including: challenge ID, content CID (not content), target DID, challenger DID, reviewer DID, outcome, justification, XP changes, origin pool, timestamp, and cryptographic signature. This creates a permanent, auditable record of moderation decisions—accountability flows in all directions.
Pools synchronize blacklist state every 5 minutes. Attestations from genesis pools are accepted without local verification (bootstrap trust); attestations from other pools require signature verification against known peers. This enables rapid propagation while preventing injection of false verdicts.
5.6 Accountability Properties
The audit system achieves several accountability guarantees absent from centralized moderation:
- Challenger accountability: XP stake creates cost for frivolous challenges
- Reviewer accountability: Verdicts are signed and permanent; patterns are observable
- Author accountability: Blacklisting follows from peer judgment, not algorithmic flag
- System accountability: All decisions are recorded; no shadow bans or silent removals
The content itself is never distributed through the registry—only hashes and metadata. This prevents the audit system from becoming a vector for harmful content distribution while maintaining verifiability.
6. Treasury Activity: Staking and Yield
Quorum IO's primary treasury mechanism binds token economics to verified network participation. This design rejects passive staking models and speculative yield farming in favor of rewarding demonstrated human engagement through existing protocol systems.
Staking capacity and yield generation are functions of participation quality, not capital accumulation. The protocol's existing XP, witness, and review systems provide measurement infrastructure; token mechanics layer atop these verified engagement signals.
6.1 XP-Backed Staking
Staking capacity scales with accumulated experience points (XP). XP accrues through qualified content viewing (1 XP per view exceeding 30-second threshold, subject to 5-minute cooldown) and quality review submissions. This construction ensures that staking power reflects genuine network contribution. Higher XP totals unlock proportionally larger stake positions.
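The accrual rule can be sketched as a pure function of dwell time and the viewer's last award. Treating both the 30-second threshold and the 300-second cooldown as inclusive boundaries is an assumption; the spec says only "exceeding" and "cooldown":

```rust
const VIEW_THRESHOLD_SECS: u64 = 30;
const VIEW_COOLDOWN_SECS: u64 = 300;

/// Award 1 XP for a qualified content view: dwell time must meet the
/// 30-second threshold, and at least 300 seconds must have elapsed since
/// the viewer's previous qualified view. `last_award` is the timestamp
/// (seconds) of that previous award, if any.
fn view_xp(dwell_secs: u64, now: u64, last_award: Option<u64>) -> u64 {
    if dwell_secs < VIEW_THRESHOLD_SECS {
        return 0;
    }
    match last_award {
        Some(t) if now.saturating_sub(t) < VIEW_COOLDOWN_SECS => 0,
        _ => 1,
    }
}
```

The cooldown caps mechanical farming at 12 XP per hour regardless of how many items a client cycles through, which is what ties staking capacity to time actually spent rather than request volume.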
6.2 Witness Yield Generation
Yield accrues to staked positions through acknowledgment actions satisfying protocol requirements:
- Minimum 5-second dwell time demonstrating content engagement
- Valid cryptographic signature binding attestation to identity
- Optional reflection text providing qualitative attestation
Reflections meeting the 10-character minimum earn 3× base weight (300 vs. 100), translating to proportionally enhanced yield. Yield rate correlates with both acknowledgment frequency and quality distribution.
6.3 Reviewer Yield Multipliers
Participants maintaining quality reviewer status receive enhanced yield multipliers on staked positions. Qualification requires: (1) minimum reputation score of 0.2, (2) completion of apprentice period (5 reviews), and (3) sustained consensus alignment as measured by the outlier detection system. The five-dimension scoring protocol ensures evaluation quality; multiplier magnitude scales with reputation.
6.4 Anti-Gaming Mechanisms
Multiple protocol features prevent mechanical exploitation of yield mechanics:
Temporal Controls
- XP cooldowns (300 seconds between qualified views)
- Minimum dwell time requirements (5 seconds)
- Vote decay (5-minute half-life)
- Review staleness (604,800 seconds, i.e. 7 days)
Quality Controls
- Elaboration minimums (10 characters)
- Outlier detection for review scores
- Epigenetic trust score tracking
- Rate limiting (100 msgs / 60s)
The apprentice reviewer system creates gradual onboarding, preventing immediate exploitation of review yield multipliers. Interaction limits (5 per content item) bound maximum extractable value per expression.
7. Security Model
| Threat Vector | Mitigation |
|---|---|
| Message Flooding | Per-peer rate limiting (100 messages / 60 seconds) |
| Replay Attacks | Nonce tracking with time-bounded validity windows |
| Sybil Voting | Human elaboration requirement, mutual approval protocol |
| Self-Voting | Protocol rejection with SelfVoteProhibited error |
| Memory Exhaustion | Bounded stores (100K expressions, 10K proposals, 1K pins) |
| DID Spoofing | DID-pubkey binding verification at handshake |
| Pool Brute Force | Argon2id with 64MB memory-hard derivation |
| Message Interception | E2E encryption (X25519 + ChaCha20Poly1305) for DMs |
| DHT Spam | Registration limits (5 per hour per DID) |
| Fake Attestations | Dwell time requirements, signature verification |
| Review Manipulation | Reputation tracking, outlier detection, apprentice period |
| Audit Spam | XP stake requirement, escalating costs, content cooldowns |
| False Verdicts | Reviewer signature verification, DID-pubkey binding, genesis pool trust |
| Blacklist Injection | Attestation signatures, cross-pool verification, federation sync |
7.1 Cryptographic Primitives
| Algorithm | Application |
|---|---|
| Dilithium3 | All signatures (post-quantum secure) |
| SHA-256 | Content addressing, Merkle commitments |
| Argon2id | Pool passphrase derivation |
| X25519 | DM key exchange |
| ChaCha20Poly1305 | DM payload encryption |
8. Connection Protocol
Node connections proceed through a multi-phase handshake requiring mutual consent:
- Hello Exchange: Nodes exchange DIDs, public keys, and pool commitments
- Challenge-Response: Receiver issues nonce; initiator returns signature
- Elaboration: Both parties provide human-readable statements of intent
- Mutual Approval: Both parties explicitly approve after reviewing elaborations
- Synchronization: Expression stores reconcile upon authenticated connection
This protocol ensures that connections reflect deliberate human decisions rather than automated peer discovery, supporting the system's emphasis on verified participation.
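The five phases above form a small state machine, sketched below. The state and event names are invented for illustration; only the phase ordering and the two rejection paths (failed challenge, withheld approval) follow from the protocol description:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Handshake {
    Hello,
    Challenge,
    Elaboration,
    Approval,
    Synchronized,
    Rejected,
}

#[derive(Clone, Copy)]
enum Event {
    HelloExchanged,
    SignatureValid,
    SignatureInvalid,
    ElaborationsExchanged,
    BothApproved,
    ApprovalWithheld,
}

/// Advance the handshake one phase. An invalid challenge signature or a
/// withheld approval terminates the attempt; out-of-order events are
/// ignored, leaving the state unchanged.
fn step(state: Handshake, event: Event) -> Handshake {
    match (state, event) {
        (Handshake::Hello, Event::HelloExchanged) => Handshake::Challenge,
        (Handshake::Challenge, Event::SignatureValid) => Handshake::Elaboration,
        (Handshake::Challenge, Event::SignatureInvalid) => Handshake::Rejected,
        (Handshake::Elaboration, Event::ElaborationsExchanged) => Handshake::Approval,
        (Handshake::Approval, Event::BothApproved) => Handshake::Synchronized,
        (Handshake::Approval, Event::ApprovalWithheld) => Handshake::Rejected,
        (s, _) => s,
    }
}
```

Because store synchronization is only reachable through the Approval state, no data flows between peers before both humans have explicitly consented.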
9. Conclusion
Node provides governance infrastructure addressing structural failures in both centralized and existing decentralized decision-making systems. The protocol's combination of post-quantum cryptography, supermajority quorum sensing, and human-in-the-loop verification creates conditions for collective agreement that resists manipulation while remaining accessible to genuine participation.
More fundamentally, Node represents an attempt to build infrastructure that cannot be captured - by wealth (plutocracy), by credentials (meritocracy), by majorities (democracy), or by its own operators. The behavioral economics research and Ostrom's principles provide theoretical grounding; the implementation provides testable hypotheses about what actually works.
The treasury model extends these principles to token economics, ensuring that staking power and yield generation reflect verified engagement rather than capital concentration. By binding economic incentives to the protocol's existing participation metrics, the system aligns individual rewards with collective governance quality.
Current implementation status: v0.9.8 alpha with 122 passing tests across consensus, cryptography, content transfer, and review subsystems. The protocol is implemented in Rust (~15,000 lines) as a single-binary deployment.
References
De Quervain, D. J. F., et al. (2004). The neural basis of altruistic punishment. Science, 305(5688), 1254–1258.
Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980–994.
Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415(6868), 137–140.
Herrmann, B., Thöni, C., & Gächter, S. (2008). Antisocial punishment across societies. Science, 319(5868), 1362–1367.
Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.
Van den Assem, M. J., Van Dolder, D., & Thaler, R. H. (2012). Split or steal? Cooperative behavior when the stakes are large. Management Science, 58(1), 2–20.