
Overlay + input-script-data spec

Single source of truth lives in peck-docs/services/. Rendered here for convenience.

peck-social-overlay — DRAFT

Status: Draft. Third companion to peck-social-v1.md (bridge profile, OP_RETURN canonical) and peck-social-token.md (sovereign token-state model). Author: Thomas (kryp2) and contributors. Started: 2026-05-05. License: MIT.

0. Why this document exists alongside v1 and token-state

peck-social-v1 covers OP_RETURN-resident attribution-grade content — the bridge to the existing 6+ years of Bitcom tooling.

peck-social-token covers stateful UTXOs as ownership-bearing entities — the sovereign-on-chain end of the spectrum.

This document, peck-social-overlay, covers the third architectural channel that has been overlooked in the BSV social ecosystem: data carried in transaction input scripts, parsed by specialized overlays that sync state between each other rather than each rebuilding from chain-scratch. It is the "overlay-first" model — chain as truth-anchor, federated overlays as the read+sync layer, input scripts as a payment-coupled write channel.

These three profiles together describe the full design surface peck-social-class applications can choose from. Implementations select per entity which channel fits, guided by the policies in each spec and the cross-cutting hybrid-policy in §6 below.

This profile is intentionally written to align with the BRC-22 overlay direction the BSV ecosystem is moving toward in 2026. As of May 2026:

  • GorillaPool became the first Teranode-native miner (May 1, 2026), shifting the underlying chain architecture toward subtree-based block construction.
  • Overlay developers (Deggen, John Calhoon, Brayden Langley, others) are publicly discussing GASP for overlay-node sync, per-topic Merkle trees, subtree-based syncing between overlays, and the "99.999+% peer acceptance" consensus semantics that overlay-federation will need at scale.
  • A BSVA-commissioned study into game-theoretic dynamics of overlay-sync exists but has been shelf-ware due to organizational changes.

This profile assumes that direction continues. It is a bet on overlay-federation as a durable architectural layer, not just an implementation detail.


1. Layer stack

+---------------------------------------------------+
|  peck-social-overlay (this document)              |  input-script-data channel,
|                                                   |  overlay-federation policies,
|                                                   |  topic-Merkle-tree state-roots
+---------------------------------------------------+
|  peck-social-v1 (OP_RETURN bridge)                |  ←| together: full data-channel
|  peck-social-token (token-state)                  |  ←| stack peck-social applications
|                                                   |    can produce/parse
+---------------------------------------------------+
|  BRC-22 overlay services + GASP sync              |  per-topic state derivation,
|                                                   |  peer-to-peer sync between overlays
+---------------------------------------------------+
|  BRC-100 wallets (BRC-43, 52, 77, 104)            |
+---------------------------------------------------+
|  BSV chain (Teranode-era, subtree-block-format)   |
+---------------------------------------------------+

The vertical position is significant: this profile deliberately sits at the same level as v1 and token, treating the overlay layer as a peer of the on-chain channels rather than a derived view. In practice this means: a peck-social-overlay implementation MUST publish overlay-state-roots to chain at intervals so the overlay-derived state is itself anchored, not just memo'd.


2. Input-script-as-data — the channel

Bitcoin's standard P2PKH unlocking script is <signature> <pubkey>. To carry data, applications can prepend OP_DROP-paired data pushes:

<data_chunk_1> OP_DROP
<data_chunk_2> OP_DROP
...
<data_chunk_n> OP_DROP
<signature>
<pubkey>

Each OP_DROP removes the data from the stack at script-execution time, so the trailing P2PKH check (HASH160 + EQUALVERIFY + CHECKSIG) succeeds normally. The output created by the spend is whatever the spender needs (P2PKH change, payment to recipient, token-state continuation, etc.) — the data rides along in the spend, not in a separate burn output.
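
The unlock-script shape above can be sketched as a small builder. This is a sketch only, assuming minimal-push encoding; `pushData` and `dataUnlockPrefix` are illustrative names, not an existing library API, and PUSHDATA4 is omitted for brevity:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

const opDROP = 0x75

// pushData encodes a minimal push of b: direct push for short chunks,
// PUSHDATA1/2 for longer ones. PUSHDATA4 omitted for brevity.
func pushData(b []byte) []byte {
	n := len(b)
	switch {
	case n < 0x4c:
		return append([]byte{byte(n)}, b...)
	case n <= 0xff:
		return append([]byte{0x4c, byte(n)}, b...)
	default:
		return append([]byte{0x4d, byte(n), byte(n >> 8)}, b...)
	}
}

// dataUnlockPrefix builds the OP_DROP-paired data prefix that precedes
// the normal <signature> <pubkey> tail of a P2PKH unlocking script.
func dataUnlockPrefix(chunks [][]byte) []byte {
	var script []byte
	for _, c := range chunks {
		script = append(script, pushData(c)...)
		script = append(script, opDROP)
	}
	return script
}

func main() {
	prefix := dataUnlockPrefix([][]byte{[]byte("B"), []byte("hello")})
	fmt.Println(hex.EncodeToString(prefix))
}
```

The wallet appends the real `<signature> <pubkey>` tail at signing time, after the prefix is fixed.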

2.1 Properties

Aspect | Behavior
Output-set impact | Zero. Data lives in the spend, not in a dedicated output.
Burned satoshis | Zero. The input UTXO funds normal outputs; no value is destroyed.
Per-byte fee | Same rate as OP_RETURN — miners charge by byte regardless of where bytes live.
Discoverability via legacy Bitcom parsers | None — they scan output OP_RETURNs only. peck-social-overlay-aware indexers MUST scan input scripts.
Size limit | Miner-policy-dependent. Pre-Teranode typical limits were lower than OP_RETURN's. Post-Teranode (May 2026+) policies are still settling — implementations SHOULD verify against current miner manifests rather than assume historical numbers.
Atomic with payment | Yes — the spend that carries data IS a payment. Naturally suits "action with monetary weight".
Atomic with token-state spend | Yes — input-script-data can ride a stateful contract spend, attaching context to the state transition.

2.2 Where input-script-data fits

The defining property is that the data is bound to the spend. This makes it the natural channel for actions where payment + record are conceptually one event:

  • Tip + comment: send sats to a creator, attach a comment. Today this requires an OP_RETURN output AND a P2PKH output. Input-script-data collapses this into a single payment with the comment in the input.
  • Pay-to-follow / subscriber-mint: pay a creator and become a subscriber. The "follow" record IS the spend that pays.
  • Paywall receipts: client pays for content access; the receipt (showing what was paid for, when, by whom) lives in the input that funded the payment.
  • Anointing / endorsement: pay-to-endorse a piece of content. The endorsement record carries the value of the endorsement directly.
  • BRC-104 fetch-fee transactions: pay-per-fetch on overlay endpoints. The fetched-resource identifier rides in the input that funds the fetch fee.

Entities that are not good fits for input-script-data:

  • Pure-publication actions (posting a status, tagging a TX) — there's no payment to attach to. Faking a self-spend just to publish data is awkward.
  • Bulk content (large media) — input-script size limits have historically been lower than OP_RETURN's; BRC-104 references serve large content better.
  • Discoverability-via-Bitcom-tools — legacy parsers won't see input-script-data; for content that should be readable by every Bitcom-aware indexer, OP_RETURN remains canonical.

2.3 Format

Input-script-data within this profile follows the same structural conventions as Bitcoin Schema in OP_RETURN — same B / MAP / AIP push order, same canonical AIP signing rules from peck-social-v1 §2, same MAP-key namespace (2/peck.to-social/v1.<key>) from §3. The only difference is the location: each push that would have lived between OP_FALSE OP_RETURN and the script tail in OP_RETURN-form lives between the start of the unlocking script and the <signature> <pubkey> tail in input-script-form, with each push followed by OP_DROP.

A complete input-script-data unlock looks like:

<PROTO_B> OP_DROP
<content> OP_DROP
<media-type> OP_DROP
<encoding> OP_DROP
<PIPE 0x7c> OP_DROP
<PROTO_MAP> OP_DROP
<SET> OP_DROP
<key1> OP_DROP <val1> OP_DROP
<key2> OP_DROP <val2> OP_DROP
...
<PIPE> OP_DROP
<PROTO_AIP> OP_DROP
<BITCOIN_ECDSA> OP_DROP
<signing_address> OP_DROP
<signature_b64_BSM_compact> OP_DROP
<spender_signature>
<spender_pubkey>

The trailing <spender_signature> <spender_pubkey> is the actual P2PKH unlock for whichever output is being spent. Note that the AIP signing key (signing_address) and the spender's key may differ — AIP attests the content, the spender's signature unlocks the input. This is the source of input-script-data's interesting property: the writer of the content is on AIP, the funder of the action is on the input-spend. Often the same key, but they can differ for sponsored writes.

2.4 AIP preimage — same canonical form

The AIP signature in input-script-data covers the same canonical preimage shape as in OP_RETURN-data: every preceding pushdata-content byte (B + MAP + AIP-header), with pipes as 0x7c data bytes, ending at signing_address. The OP_DROPs are NOT part of the preimage — they are script-execution opcodes, not pushdata content. This keeps the verification logic identical between OP_RETURN and input-script channels — the verifier walks the chunks, ignores OP_DROPs, reconstructs the preimage from the data pushes only.
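
The verifier's chunk-walk can be sketched as follows. This is illustrative only (`extractPushes` is not an existing API): it handles direct pushes and PUSHDATA1/2, and skips OP_DROPs, which is exactly the "data pushes only" rule above:

```go
package main

import (
	"errors"
	"fmt"
)

// extractPushes walks an unlocking script and returns the pushdata
// contents in order, skipping OP_DROP (0x75). A verifier rebuilds the
// AIP preimage by concatenating these pushes up to the signing address;
// the OP_DROPs are execution opcodes, not preimage bytes.
// Sketch only: direct pushes and PUSHDATA1/2; other opcodes error out.
func extractPushes(script []byte) ([][]byte, error) {
	var pushes [][]byte
	i := 0
	for i < len(script) {
		op := script[i]
		switch {
		case op == 0x75: // OP_DROP — skip, not part of the preimage
			i++
		case op > 0 && op < 0x4c: // direct push of op bytes
			if i+1+int(op) > len(script) {
				return nil, errors.New("truncated push")
			}
			pushes = append(pushes, script[i+1:i+1+int(op)])
			i += 1 + int(op)
		case op == 0x4c: // PUSHDATA1
			if i+2 > len(script) || i+2+int(script[i+1]) > len(script) {
				return nil, errors.New("truncated PUSHDATA1")
			}
			n := int(script[i+1])
			pushes = append(pushes, script[i+2:i+2+n])
			i += 2 + n
		case op == 0x4d: // PUSHDATA2 (little-endian length)
			if i+3 > len(script) {
				return nil, errors.New("truncated PUSHDATA2")
			}
			n := int(script[i+1]) | int(script[i+2])<<8
			if i+3+n > len(script) {
				return nil, errors.New("truncated PUSHDATA2 body")
			}
			pushes = append(pushes, script[i+3:i+3+n])
			i += 3 + n
		default:
			return nil, fmt.Errorf("unexpected opcode 0x%02x", op)
		}
	}
	return pushes, nil
}

func main() {
	// <"B"> OP_DROP <"hello"> OP_DROP, then placeholder sig/pubkey pushes
	script := []byte{0x01, 'B', 0x75, 0x05, 'h', 'e', 'l', 'l', 'o', 0x75, 0x02, 0xaa, 0xbb, 0x02, 0xcc, 0xdd}
	pushes, _ := extractPushes(script)
	for _, p := range pushes {
		fmt.Printf("%q\n", p)
	}
}
```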


3. Overlay-federation — the read+sync layer

3.1 Why overlays are first-class in this profile

In the v1 + token-state model, the overlay layer is implicitly an indexer that derives queryable state from chain. It is treated as a derived rendering of the canonical chain.

In this profile, the overlay layer is promoted to first-class. The reasons compound:

  1. Input-script-data is invisible to standard Bitcom-output-scanners. Specialized overlays that scan input scripts ARE the consumer of this channel.
  2. Token-state-tracking requires UTXO-set maintenance with per-entity templates. Overlays already maintain UTXO views.
  3. As BSV scales (Teranode era), single-overlay re-derivation from chain becomes uneconomic. Federation of overlays becomes necessary, not optional.
  4. BSVA, GorillaPool, John Calhoon, Deggen, Brayden Langley and others are publicly converging on this direction in 2026.

3.2 Topic-rooted state

Each overlay-managed topic (e.g., peck.to-social/profile, peck.to-social/post, peck.to-social/identity_claim) maintains a Merkle tree of its current state. The topic root is publishable:

  • Periodically anchored to chain via OP_RETURN or token-state output, so the topic-state-at-block-N is itself attested on chain.
  • Served at /v1/topic/{topic}/root on the overlay's HTTP API, so peer overlays can query "what is your current root for topic X?".
  • Compared between peer overlays — agreement on topic-root means agreement on topic-state.

3.3 GASP and subtree-based sync between overlays

Overlay-to-overlay sync uses BRC-22's GASP (General Asset Synchronization Protocol) with the additional convention that each overlay topic exposes its state as a subtree rather than as a flat list. Subtree-based sync mirrors the architecture Teranode has adopted at the chain layer.

A peer overlay bootstrapping topic=peck.to-social/profile from a known peer:

  1. Peer-overlay queries /v1/topic/{topic}/root on the source — gets current Merkle root.
  2. Peer-overlay queries /v1/topic/{topic}/snapshot?block={N} — gets a Merkle subtree representing topic state as of block N.
  3. Peer-overlay verifies the subtree against the root, integrates state.
  4. Peer-overlay subscribes to /v1/topic/{topic}/delta for new state-transitions since N.

For ongoing sync, overlays gossip subtree-deltas peer-to-peer. New TXs that touch topic-state are propagated as Merkle-proven state-transitions, not as raw TX-stream-replay-from-chain.

This makes new overlay-instance bootstrapping near-constant-time regardless of chain history depth, and it allows overlays at scale to specialize without each running a full chain-scanner.

3.4 Game-theoretic acceptance

Overlay-state agreement does not have proof-of-work. Federation maintains integrity via game theory: an overlay that publishes a state-root inconsistent with its peers becomes detectable and isolatable. Apps consuming overlay-state can require N-of-M peer-agreement before treating a state-record as authoritative.

The unfinished BSVA-commissioned study referenced in the Deggen/Calhoon X-thread is exactly this game theory. peck-social-overlay implementations SHOULD plan for an N-of-M policy from spec-time even if initial deployments run with a single overlay; the policy slot lets federation grow into the role without protocol breakage.
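
The N-of-M policy slot can be as small as a counting function — a sketch; root strings are assumed hex-encoded, and peer selection/weighting is out of scope here:

```go
package main

import "fmt"

// acceptByQuorum implements the N-of-M peer-agreement slot: a state
// record is treated as authoritative only when at least n of the
// queried peer overlays report the same topic root as the candidate.
func acceptByQuorum(candidateRoot string, peerRoots []string, n int) bool {
	agree := 0
	for _, r := range peerRoots {
		if r == candidateRoot {
			agree++
		}
	}
	return agree >= n
}

func main() {
	peers := []string{"ab12", "ab12", "ffff"}
	fmt.Println(acceptByQuorum("ab12", peers, 2)) // 2 of 3 peers agree
}
```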

3.5 Idempotent overlay submission

Per Deggen's pain point: when a client submits a TX to an overlay and misses the response, the same submission cannot today be safely repeated, because the overlay may treat the second submit as a duplicate or as an inconsistent attempt.

This profile mandates idempotent overlay submission: each overlay submission carries a client-side nonce or content-hash (e.g., the AIP-signed preimage hash). Overlays MUST treat repeat submissions of the same nonce-or-hash as the same logical operation and return the same response on each repeat. This eliminates the "missed response, can't recover" failure mode.

Idempotency keys SHOULD persist for at least 24 hours so retries from intermittent connectivity succeed. After that, the overlay MAY garbage-collect the dedup record; any client that retries beyond 24 hours can construct a new submission.


4. State anchoring — how overlays prove themselves to chain

Overlay-state derivation without an on-chain anchor would be just a custodial database operation. To preserve the "chain as truth-anchor" promise, peck-social-overlay implementations periodically commit their topic-state-roots to chain.

Canonical anchor TX format:

OP_FALSE OP_RETURN
<PROTO_B>
<json_serialized_topic_roots>
<application/json>
<UTF-8>
|
<PROTO_MAP>
SET
2/peck.to-social/v1.app  peck-overlay
2/peck.to-social/v1.type overlay_anchor
2/peck.to-social/v1.block_height <N>
2/peck.to-social/v1.overlay_pubkey <hex>
|
<canonical AIP block signed by overlay_pubkey>

The B-section payload is a JSON document mapping each topic name to its Merkle root at block N. Verifiers can fetch the JSON, walk to the named overlay, request the corresponding subtree, and prove that the overlay's served state matches what was on-chain-anchored at that block.

Anchor frequency is implementation-dependent. Recommended: every block (cheap relative to the value), or every 100 blocks for less active topics. peck.to deployments anchor roughly hourly; peck-blog or peck-ink-canvas-sync overlays may anchor more or less frequently per their own activity.


5. Hybrid policy across all three channels

Combining v1 (OP_RETURN), token (UTXO state), and overlay (input-script-data + overlay-state) gives implementations a three-axis design space. Recommended defaults per entity:

Entity | Primary channel | Secondary | Rationale
Profile | token-state | overlay-anchored root | Ownership-bearing, mutable, low-frequency.
Post | token-state | overlay snapshot | Author-owned, transferable, burnable.
Reply | token-state | overlay snapshot | Same as Post, with parent-ref.
IdentityClaim | token-state | overlay anchor | Cryptographic state-machine, bilateral revoke.
BRC52Cert | token-state | overlay anchor | Issuer-controlled state with revocation via spend.
Like | OP_RETURN | overlay aggregate | High volume, attribution-only. Optional aggregate count via overlay.
Follow / Unfollow | OP_RETURN | overlay graph | High volume, relational, indexer-derived graph queries.
Tag | OP_RETURN | overlay tag-index | Append-only metadata, query-friendly via overlay.
Channel message | OP_RETURN | overlay routing | Group chat broadcast.
Tip + comment | input-script-data | overlay receipt | Payment-coupled action; comment AND payment in one TX.
Subscribe (paid follow) | input-script-data | overlay subscription-index | Payment-coupled; subscriber-state derived in overlay.
Paywall fetch-fee | input-script-data | overlay receipt | Per-fetch billing; pre-existing peck.to paywall already does this.
Anointing / endorsement | input-script-data | overlay endorsement-index | Payment-coupled act of endorsing content.
Direct message | token-state OR OP_RETURN with BRC-78 | overlay relay | Encrypted; choice depends on whether DM-as-token gives value.

Apps SHOULD document their per-entity choice in their own service spec under peck-docs/services/ so cross-app composability is traceable.
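
An app might centralize its per-entity choice behind a single function mirroring the defaults above — a sketch with illustrative entity and constant names; the Direct-message row is omitted because its channel is a per-app judgment call:

```go
package main

import "fmt"

// Channel identifies which of the three data channels carries an entity.
type Channel string

const (
	OpReturn        Channel = "op_return"         // peck-social-v1 bridge
	TokenState      Channel = "token_state"       // peck-social-token
	InputScriptData Channel = "input_script_data" // this profile
)

// defaultChannel encodes the §5 primary-channel defaults.
func defaultChannel(entity string) Channel {
	switch entity {
	case "Profile", "Post", "Reply", "IdentityClaim", "BRC52Cert":
		return TokenState
	case "Like", "Follow", "Unfollow", "Tag", "ChannelMessage":
		return OpReturn
	case "TipComment", "Subscribe", "PaywallFetchFee", "Anointing":
		return InputScriptData
	default:
		return OpReturn // conservative fallback: widest legacy readability
	}
}

func main() {
	fmt.Println(defaultChannel("TipComment"))
}
```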


6. Implementation expectations

6.1 Wallet layer

For input-script-data, BRC-100 wallets need helpers for:

  • Constructing unlock-scripts with OP_DROP-pattern data prefixes.
  • Producing canonical AIP signatures over the data preimage (same derSignatureToBsmCompactBase64 bridge as in v1 frontends).
  • Routing AIP-signing-key vs spender-signing-key correctly when they differ.

bitcoin-agent-wallet and any wallet-toolbox extension should add these primitives. peck-desktop is parked — no wallet-side changes there.

6.2 Indexer layer (peck-indexer-go and others)

Indexers consuming this profile must:

  • Parse input-script chunks, not only output OP_RETURNs.
  • Apply the same canonical-AIP verification rules to input-script-data as to OP_RETURN-data.
  • Maintain per-topic state with Merkle-tree roots.
  • Serve overlay-sync endpoints (/v1/topic/{topic}/root, /snapshot, /delta).
  • Submit anchor TXs at configured intervals.
  • Treat repeat submissions with the same idempotency-key as identical operations.

6.3 Overlay-overlay sync

Overlays implementing this profile should:

  • Implement GASP-compatible peer endpoints with subtree-rooted topic state.
  • Discover peers via a registry (initially out-of-band — service-list in peck-docs; eventually a registry token-state contract).
  • Publish their own state-roots to chain at configured cadence.
  • Cross-check peer roots and surface inconsistencies to operator monitoring.

6.4 Application layer

Apps publishing peck-social content should:

  • Choose the right channel per entity per the §5 defaults (or document deviation).
  • Sign content via canonical AIP regardless of channel (signing rules are channel-uniform).
  • Submit to overlays with an idempotency-key to allow safe retry.
  • Optionally publish their own topic state-roots if they run their own overlay.


7. Migration policy

This profile is additive to v1 and token-state. Existing v1-OP_RETURN content is unaffected; existing token-state UTXOs continue to operate per their own profile. Input-script-data is a new channel that becomes available to applications that adopt this profile — no historic data needs migration.

For overlays, supporting this profile means adding input-script parsing and overlay-sync endpoints. Existing chain-only-derived state remains valid; the new channels extend the data sources, not replace them.


8. Open questions

  • Teranode miner-policy verification. Pre-Teranode input-script size limits were typically lower than OP_RETURN. Post-Teranode (May 2026+) defaults are settling; this profile leaves size policy implementation-deferred until concrete miner-manifests are published.
  • Idempotency-key format. Is it the AIP-preimage-hash, a separate UUID, or both? AIP-hash gives natural deduplication of identical content; separate UUID gives client-controlled retry semantics. May be both (one or the other accepted).
  • Anchor frequency policy. "Every block" is conservative but expensive at scale. "Every 100 blocks" is cheaper but increases the staleness window for cross-overlay verification. May be parameterized per-topic.
  • Peer overlay discovery. Initially out-of-band (service registry). Long-term: a token-state registry contract where overlays mint a peer-listing token? Out of v1 scope, worth flagging.
  • Federation membership policy. Who can join the peck-social overlay federation? Open (anyone runs a peer)? Permissioned (issuer signs peer-onboarding)? Affects the game-theoretic-acceptance model. Recommended starting point: open, with operator-defined trust scores in client config.
  • State-root anchor as token vs OP_RETURN. Anchor TXs themselves could be token-state contracts (with version-monotonicity rules) or simple OP_RETURN. Token-form gives stronger ordering guarantees; OP_RETURN is cheaper.
  • Disagreement resolution. When two overlays publish conflicting state-roots for the same topic at the same block, what's the protocol for resolution? Merkle-tree-walk to find divergent leaves, then human-in-the-loop adjudication seems the realistic path; needs spec'ing.
  • Submission-routing fallback. If a client's preferred overlay is down, can it submit to a peer? Idempotency-keys + cross-overlay sync make this safe in principle; needs concrete client-side fallback policy.

9. Teranode integration — concrete bindings

The arrival of Teranode (GorillaPool live May 1, 2026) changes the integration story for this profile from "abstract overlay direction" to "concrete primitives we bind to". This section maps each architectural concept in the previous sections to the specific Teranode interfaces that implement it.

9.1 Direct Kafka consumption — replacing JungleBus dependency

Teranode exposes nine Kafka topics for streaming blockchain data, all serialized as Protocol Buffers. The relevant ones for peck-social-overlay implementations:

Topic | Use
kafka_validatortxsConfig | New transactions with full bytes (including input scripts). Primary feed for input-script-data parsing.
kafka_txmetaConfig | Transaction metadata add/delete events with content blob (includes input outpoints, fees, etc.). Used to detect token-state UTXO transitions.
kafka_subtreesConfig | New subtree notifications. Triggers per-topic subtree-update cadence in overlays.
kafka_blocksFinalConfig | Finalized block announcements with header + subtree hashes + coinbase. Used for cross-overlay state-anchor verification.
kafka_invalidBlocksConfig / kafka_invalidSubtreesConfig | Reorg / invalidation notifications. Overlays MUST honor these to roll back state.
kafka_rejectedTxConfig | TXs that failed validation. Useful for debugging / monitoring.

peck-indexer-go's current architecture pulls from JungleBus (a third-party indexer-as-a-service relay). With Teranode generally available, peck-indexer-go SHOULD plan a migration path to consume Kafka topics directly:

  • Lower latency (one fewer hop)
  • No third-party dependency for the read-side
  • Topic-level filtering (subscribe only to what you care about)
  • Native binary protobuf instead of JSON-over-WebSocket

This is not a v1-blocker; JungleBus continues to work fine for OP_RETURN consumption in the current era. The point is that the migration path exists and aligns with the protocol direction.

9.2 Asset Service as the overlay-sync backbone

The Asset Service (HTTP default port 8090, Centrifuge real-time on 8000) exposes:

  • Transaction queries by hash (binary, hex, JSON; batch operations)
  • Block operations (by hash or height; headers; statistics)
  • UTXO queries
  • Subtree-management endpoints (subtree data, contained transactions)
  • FSM state inspection
  • Search across blocks / TXs / subtrees
  • Response signing via Ed25519 for integrity verification

This is the integration point overlays use for both first-party operations (looking up TX details, checking UTXO status) and for serving as a backbone for inter-overlay sync. peck-overlay implementations SHOULD expose a parallel-shape API for their own derived state — /v1/topic/{topic}/snapshot?block={N} mirrors Teranode's Asset Service shape but for overlay-topic-specific Merkle subtrees.

The /v1/topic/{topic}/subtree/{height} peer-overlay endpoint is conceptually a sibling of Teranode's Asset Service /subtree/{hash} endpoint. Both serve subtree-rooted, integrity-verifiable state slices.

9.3 Subtree-granularity state anchoring

Per §4 (state anchoring), overlays publish state-roots to chain at intervals. Pre-Teranode the natural cadence was per-block. With Teranode's subtree-validation, overlays MAY anchor at subtree-granularity, getting:

  • ~10-100× faster effective anchor cadence (depending on miner subtree-construction policy)
  • Finer-grained inter-overlay state-comparison
  • Earlier detection of state divergence between overlays

The trade-off: more anchor TXs = more on-chain cost. For high-activity topics (peck-social-overlay's tip+comment, paywall-receipts, anointings — all input-script-data) the faster cadence pays off. For slower-moving topics (peck-social-token's profiles and identity-claims) per-block anchor remains appropriate.

Implementations SHOULD make anchor cadence configurable per topic, defaulting to per-block but allowing subtree-granularity opt-in.

9.4 Centrifuge real-time push for live UX

Asset Service's WebSocket Centrifuge interface (default port 8000) supports real-time push of state changes to subscribed clients. peck-web and peck-ink frontends can subscribe to subtree-update or block-final events and update UI without polling.

For peck-overlay's own clients, the same Centrifuge pattern can be implemented at overlay-API-layer for topic-specific state-change push — "show me new replies on post X as they happen" becomes a Centrifuge subscription rather than a polling loop.

9.5 Subtree-as-finality-unit semantics

Teranode validates subtrees independently and in parallel before block assembly. This creates an intermediate state: validated-subtree-but-not-yet-blocked. peck-social-overlay implementations MAY surface this as a UX-relevant signal:

  • pending — TX seen but not yet in a validated subtree
  • subtree-validated — TX is in a Teranode-validated subtree (network has parallel-validated it; very high probability of next-block inclusion)
  • block-included — TX is in a Teranode-finalized block
  • confirmed — TX is in a checkpoint-validated block (hard finality)

For social UX where post-confirmation latency matters (a user wants to see "their post landed" as fast as possible), surfacing subtree-validated as the threshold for showing the post trades a small reorg probability for sub-second responsiveness. peck-web's current "broadcast → optimistic-show → confirm" pattern naturally extends to four states instead of three.
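
The four states order naturally, which lets a client express its display threshold as a single comparison — a sketch; type and function names are illustrative:

```go
package main

import "fmt"

// TxFinality orders the four UX-relevant confirmation states.
type TxFinality int

const (
	Pending          TxFinality = iota // seen, not yet in a validated subtree
	SubtreeValidated                   // in a Teranode-validated subtree
	BlockIncluded                      // in a Teranode-finalized block
	Confirmed                          // in a checkpoint-validated block
)

func (f TxFinality) String() string {
	return [...]string{"pending", "subtree-validated", "block-included", "confirmed"}[f]
}

// showPost is the affirmative-display threshold discussed above: show at
// subtree-validated, accepting a small reorg probability for speed.
func showPost(f TxFinality) bool { return f >= SubtreeValidated }

func main() {
	fmt.Println(showPost(Pending), showPost(SubtreeValidated))
}
```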

9.6 Resolved + still-open questions

From §8 (open questions), the Teranode availability resolves several:

  • ✓ "Are Kafka topics with full TX bytes available for third-party consumption?" — yes, documented and exported.
  • ✓ "Do we need to depend on JungleBus indefinitely?" — no, migration path exists.
  • ✓ "Is subtree-rooted overlay sync a fantasy or has the chain layer adopted it?" — the chain has adopted it; overlays can mirror.

Still open:

  • Standardization of cross-overlay subtree-format (Teranode subtrees are TX-ID-roots, not state-roots; overlay subtree convention needs separate spec).
  • Auth model for Asset Service public endpoints (some require auth, some don't; configurable per operator).
  • BSVA's shelved game-theoretic study — still relevant; would inform federation-membership and acceptance-threshold policies.
  • Cross-overlay disagreement resolution at subtree level (Merkle-tree-walk to find divergent leaves remains the realistic path).

10. Acknowledgments

This profile is shaped directly by:

  • The Deggen / John Calhoon / Brayden Langley X-thread (April 2026) explicitly calling out idempotency, GASP for overlay-node-sync, per-topic Merkle trees, subtree-based sync between overlays, and the "99.999+% peer acceptance" semantic. This profile is not a unilateral declaration; it documents conventions consistent with where the public BSV overlay-developer conversation is converging.
  • GorillaPool's transition to Teranode-native (May 1, 2026), which signaled the chain layer adopting subtree-based architecture and made it natural for overlay-sync to inherit the same shape one level up.
  • The shelf-stored BSVA study on game-theoretic dynamics of overlay-syncing referenced by Deggen — this profile leaves a concrete implementation hook for whatever that study recommends, should it surface.
  • BRC-22 overlay services and the SHIP/SLAP discovery mechanisms that make peer-finding viable.
  • Bitcoin Schema (rohenaz / b-open-io) for the B + MAP + AIP push order this profile inherits across all three channels.
  • The peck-broadcaster pattern (peck.to internal) for the Redis-based asynchronous-broadcast model that already prefigures idempotent submission semantics.
  • Craig Wright's input-script-as-data observation, however contextual — the technical observation that data in spendable transaction parts is more aligned with Bitcoin's design than burn-style OP_RETURN. This profile takes that observation seriously without endorsing the broader politics.

The errors are ours; the foundations are theirs.