
The Three-Ledger Methodology

An implementation methodology for industrial operational data substrates: append-only ledgers, methodology-versioned derivations, and attested exports.

1. Executive summary

This methodology defines an industrial operational data substrate built on three commitments. The first commitment is append-only ledgers: every observation, every derived value, and every administrative act is written once and never rewritten; corrections are new entries that reference the entries they correct. The second commitment is methodology-versioned derivations: derived values (allocations, accruals, reconciliations, KPI rollups) are computed under an explicitly versioned methodology specification, and a new version of that specification produces a parallel track of derivations alongside the old one — the historical track is never overwritten. The third commitment is attested exports: every export of operational or derived data carries a verification artifact that the recipient can validate independently of the platform that produced it.

The substrate runs in a two-tier topology. A field tier sits on the operating site and consists of an intelligent on-site software agent running on a dedicated hardware kit; that agent decodes site protocols, preserves provenance at the moment of capture, and continues operating during cloud disconnection. A cloud tier holds the ledgers, the cryptographic notary, the long-term archive, the exchange surface for downstream consumers, the identity registry, and the conflict-resolution machinery. The two tiers communicate over a transport-agnostic application protocol carried on whatever physical link the site provides.

The substrate keeps three ledgers, kept distinct. The operational ledger records what was observed: sensor readings and operational events that arrive from outside the system. The derivations ledger records what was concluded: values computed inside the system under a methodology specification. The audit ledger records what was done to the system itself: configuration changes, authorizations, methodology transitions, exports. The disciplines that govern those three ledgers are categorically distinct, and conflating them — as most platforms do — is the root cause of the four pathologies described in section 2.

2. The problem

Industrial operational data platforms exhibit, with depressing regularity, four pathologies that this methodology is engineered to eliminate.

2.1 Immutable provenance, mutable methodology

Operators correctly insist that the raw observation record is sacred: no one rewrites a sensor's voltage reading after the fact. But the derived values — allocations across commingled streams, accruals across reporting periods, KPIs that drive commercial settlement — are computed under business rules that change. When the rules change, most platforms either silently recompute history under the new rules (destroying the prior interpretation) or freeze history under the old rules and apply the new rules going forward (creating a discontinuity at the cutover that no one can audit). Neither is acceptable. Provenance and methodology are both first-class, and both require versioning, but the disciplines differ: observations are versioned by arrival (you cannot un-observe what arrived), while methodologies are versioned by registration (you can, and routinely will, change how you interpret what arrived).

2.2 Inadequate multi-tenant compliance isolation

Industrial customers operate under disjoint regulatory regimes. A national hydrocarbons regulator, a state environmental authority, an offshore safety body, and a private commercial counterparty have different evidence requirements and different egress constraints. A platform that treats all tenants as rows in a shared database — discriminated only by tenant_id — cannot satisfy a customer whose contract or regulator requires a dedicated database, and cannot satisfy a customer whose regulator requires that data never leaves a specific physical jurisdiction. Compliance isolation is not a feature flag; it is a deployment topology, and the methodology must support multiple tiers of it from day one.

2.3 Silent reconciliation of conflicting readings

In every real industrial dataset, two instruments occasionally observe the same physical phenomenon and disagree. A memory gauge and a surface readout report different bottomhole pressures. An MWD pulse and a wireline log report different formation depths. A custody-transfer meter and a tank gauge report different volumes. The engineering temptation is to average the two readings, prefer the more "trusted" instrument, or quietly drop the outlier as a transient. Each of those moves silently erases a finding — the disagreement itself — that is operationally consequential and, in regulated contexts, legally consequential. A correct methodology records both readings, links them, flags the disagreement, and treats the resolution as a separate auditable event.

2.4 Transient-architecture vocabularies

Most platforms accumulate vocabulary as they grow. Components are named for the technology in vogue when they were written ("the kafka layer", "the snowflake mart", "the redis hot store"), or for the team that owned them ("the platform-eng service"), or for the bug they were originally introduced to work around. This vocabulary is decorative — it carries no architectural commitment — and so commitments leak away. Engineers cannot remember which component was supposed to be append-only, which was supposed to be the source of truth, which was supposed to require an authorized signature to write to. A coherent vocabulary, drawn from a single source domain whose institutions already encode commitments analogous to the architecture's, is itself a form of compliance enforcement. Section 11 returns to this.

3. The methodology — three commitments

The methodology rests on three commitments. They are mutually reinforcing; weakening any one of them weakens the other two.

3.1 Commitment 1: append-only ledgers

Every fact-bearing record in the substrate — observations, derivations, and audit entries alike — is written exactly once. There is no UPDATE and no DELETE in the data model of the ledgers. Corrections are expressed as new entries that reference the entry being corrected; deletions are expressed as new entries that mark the prior entry as superseded. The architecture enforces this at the schema layer (no update/delete grants on ledger tables), at the application layer (the write path emits only inserts), and at the audit layer (any administrative override of these constraints is itself an audit entry, scoped and expiring).
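The insert-only discipline can be sketched in a few lines. This is an illustrative model, not the substrate's actual write path; AppendOnlyLedger, append, and current_view are hypothetical names:

```python
import uuid

class AppendOnlyLedger:
    """Insert-only ledger: corrections are new entries that reference prior ones."""

    def __init__(self):
        self._entries = {}  # record_id -> entry; entries are never mutated or removed

    def append(self, payload, corrects=None):
        # A correction must reference an entry that actually exists.
        if corrects is not None and corrects not in self._entries:
            raise ValueError(f"unknown entry: {corrects}")
        record_id = str(uuid.uuid4())
        entry = {"record_id": record_id, "payload": payload, "corrects": corrects}
        self._entries[record_id] = entry  # insert only; no UPDATE, no DELETE
        return record_id

    def current_view(self, record_id):
        # The latest correction wins for reads; the corrected entry is preserved.
        latest = self._entries[record_id]
        for e in self._entries.values():
            if e["corrects"] == record_id:
                latest = self.current_view(e["record_id"])
        return latest

led = AppendOnlyLedger()
original = led.append({"value": 18437.2})
correction = led.append({"value": 18437.9}, corrects=original)
# Reads resolve to the correction; the original record survives untouched.
assert led.current_view(original)["payload"]["value"] == 18437.9
```

The same shape applies to all three ledgers; what differs between them is who is allowed to call the write path, not whether the path can mutate.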

3.2 Commitment 2: methodology-versioned derivations

Every derived value is computed under a methodology specification that is itself a versioned artifact in the substrate. A new version of a methodology specification does not retroactively rewrite past derivations; it produces a parallel track of derivations alongside the old one, beginning from a registration timestamp. Consumers of the derivations ledger select the methodology version explicitly. The architecture enforces this by binding every derivation record to the methodology specification version under which it was produced, and by treating the registration of a new version as a state transition recorded in the audit ledger (section 8).

3.3 Commitment 3: attested exports

Every export of data from the substrate — to a regulator, to a commercial counterparty, to a downstream analytics system, to the customer's own data lake — carries a verification artifact that the recipient can validate without trusting the platform. The verification artifact contains the cryptographic seal chain over the exported records, the methodology specification version under which any derived records were computed, the identity of the authorizing principal, and (in higher modes) a Merkle anchor on a public chain that proves the export's contents were sealed before some external timestamp. The architecture enforces this by routing every export through the cryptographic notary; an export emitted on any other path is a defect.

4. Three ledgers, kept distinct

The substrate maintains three ledgers, and they are not the same ledger viewed three ways. They are kept distinct because the disciplines that govern them are categorically distinct.

4.1 The operational ledger

The operational ledger records what was observed. Its records arrive from outside the system: sensors, controllers, gauges, instruments, meters, manual entries, third-party data feeds. The operational ledger is immutable on arrival; the substrate has no authority to alter what was observed. Corrections to observation records (a recalibrated sensor, a re-keyed manual entry, a vendor's late-correcting data feed) are themselves new observation records that reference the prior record they correct. The operational ledger is the substrate's evidentiary floor: every derivation, every export, every adjudication ultimately resolves back to entries in this ledger.

4.2 The derivations ledger

The derivations ledger records what was concluded. Its records are born inside the system; they did not arrive, they were computed. Each record is computed under a versioned methodology specification, against one or more observation records, and is sealed at a period close. When the methodology specification is revised, the derivations under the new version are written as a parallel track alongside the prior version's derivations; the prior records are not rewritten and are not deleted. The derivations ledger is therefore not a single time series of derived values but a forest of parallel time series, one per methodology version, each fully recoverable.

4.3 The audit ledger

The audit ledger records what was done to the system itself. Configuration changes, role grants and revocations, methodology specification registrations and deactivations, period closes, exports, identity-credential issuance and revocation, egress policy edits, notary mode transitions, signing-key rotations, and override authorizations all appear here. The audit ledger is the place from which administrative state is derived — there is no separate "current configuration" table that drifts from the audit log; the configuration is what the audit log says it is at any given timestamp. Section 8 develops this point at length for methodology versioning.

4.4 Why three, not one

A single ledger, with a record_kind column, is the naive answer. It fails for three reasons. First, the retention policies differ: observations may need to be kept indefinitely, derivations may need to be kept for the duration of a regulatory regime, and audit entries may need to be kept for the duration of any liability arising from administrative acts. Second, the isolation requirements differ: observations may need to leave the substrate in bulk to a customer's data lake, derivations may be exported only to a regulator or counterparty, and audit entries may never leave the substrate at all. Third, the write paths differ: observations are written by the field tier or by ingestion adapters, derivations are written by the derivation engine after a period close, and audit entries are written by the platform itself in response to administrative acts. Conflating the three obscures these differences and produces, eventually, all four pathologies of section 2.

5. Two-tier topology

The substrate is deployed as two cooperating tiers.

5.1 The field tier

The field tier sits on the operating site. Its core is an intelligent on-site software agent, running on a dedicated hardware kit specified by RCI for the deployment. The agent decodes site-specific protocols (Modbus, OPC-UA, vendor-proprietary serial framings, ASCII-CSV files dropped by site systems, WITSML feeds, and so on) into the substrate's canonical observation record envelope. The agent stamps each observation with provenance at the moment of capture: the source-asserted timestamp, the receive timestamp at the agent, the original units of measurement, the adapter and adapter version that decoded the record, the ingest node identity, and a content hash. The agent operates without depending on cloud-side health probes; if the cloud tier is unreachable, the agent continues to capture, persist locally, and forward when connectivity returns. The field tier is dedicated hardware, not a container deployed on a customer SCADA host; the hardware kit isolates the substrate's evidentiary chain from anything else running at the site.

5.2 The cloud tier

The cloud tier holds the three ledgers, the cryptographic notary, the long-term archive, the exchange surface that exposes data to downstream consumers under contract, the identity registry that issues and revokes credentials for principals (humans and machines), the fleet-management agent that supervises field-tier deployments, and the conflict-resolution machinery (section 9). The cloud tier may be deployed multi-tenant or single-tenant per customer (section 10). The cloud tier is where derivations are computed, period closes are sealed, exports are issued, and audit entries are accumulated.

5.3 The wire transport

The transport between field and cloud is transport-agnostic at the application layer. The application protocol carries observation records and control messages; it neither knows nor cares whether the underlying physical link is Starlink, the customer's WAN, cellular, microwave, or a leased line. Transport selection is a deployment-time decision driven by the site's connectivity options. Constraints on what may flow over the link are enforced at the firewall layer with signed policy: the firewall configuration is itself a record in the audit ledger, signed by an authorizing principal, and changes to that configuration are administrative acts. Transport encryption is mandatory; transport selection is not architecturally privileged.

6. The atomic units

Three record types form the substrate's atomic vocabulary.

6.1 The observation record

An observation record is one sensor reading or one operational event. Its envelope carries dual timestamps (the source-asserted timestamp from the originating instrument or system, and the receive timestamp at the field-tier agent), the original units of measurement (no destructive conversion at write — if the sensor reports kPa, the record stores kPa, and any conversion to psi is a derivation), and full provenance (the adapter that decoded the record, that adapter's version, the ingest node identity, and a content hash over the canonical serialization).

{
  "record_id": "01HZ9X3K4M2A...",
  "record_kind": "observation",
  "tenant_id": "tenant-acme",
  "site_id": "site-north-04",
  "stream_id": "wellhead-pressure-sensor-7",
  "timestamp_source": "2026-05-04T13:42:17.250Z",
  "timestamp_receive": "2026-05-04T13:42:17.418Z",
  "value": 18437.2,
  "uom_original": "kPa",
  "provenance": {
    "adapter": "modbus-tcp-decoder",
    "adapter_version": "2.4.1",
    "ingest_node": "field-agent-north-04-a",
    "content_hash": "sha256:9b1c4f..."
  },
  "seal": {
    "mode": "A",
    "signature": "ed25519:7f3e..."
  }
}
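The content_hash field above is computed over a canonical serialization. A minimal sketch, assuming canonicalization means JSON with sorted keys and compact separators (the substrate's actual canonicalization rules are not specified in this document):

```python
import hashlib
import json

def content_hash(record_body: dict) -> str:
    """Hash a canonical serialization: sorted keys, no insignificant whitespace."""
    canonical = json.dumps(record_body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

body = {"stream_id": "wellhead-pressure-sensor-7", "value": 18437.2, "uom_original": "kPa"}
reordered = {"uom_original": "kPa", "value": 18437.2, "stream_id": "wellhead-pressure-sensor-7"}
# Key order does not affect the hash; canonicalization makes it deterministic.
assert content_hash(body) == content_hash(reordered)
assert content_hash(body).startswith("sha256:")
```

Whatever canonicalization a deployment chooses, the field-tier agent and every later verifier must apply the identical rules, or verification fails spuriously.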

6.2 The derivation record

A derivation record is a value computed from one or more observation records (and, recursively, from other derivation records) under a methodology specification version. Its envelope carries the methodology specification version under which it was computed, the set of records it derives from (referenced by record id, not embedded), the period close event at which it was sealed, and the cryptographic seal that links it into the chain. A derivation record is never produced ad hoc; it is produced as part of a period close, and the period close is itself an event recorded in the audit ledger.

6.3 The audit entry

An audit entry is a record of an act upon the system itself. Its envelope carries the actor (the principal that performed the act), the act (configuration change, role grant, methodology registration, export, and so on), the target (the configuration item, role, methodology lineage, export specification), the rationale (a human-readable justification, required), and the seal. Default-deny coverage applies: every event in the substrate is an audit entry by default, and any exclusion from audit coverage is itself an audit entry that registers the exclusion (section 10).

7. Cryptographic notarization

A per-record cryptographic signature links the substrate's records into a tamper-evident chain. The signature scheme is implementation-specific (the substrate uses Ed25519 over a canonical serialization at the time of writing); what matters methodologically is that every record carries a signature whose verification depends on signatures over prior records, so that any retroactive alteration of a record invalidates the chain from that point forward.
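The chaining property can be illustrated with a plain SHA-256 hash chain; a real notary signs with Ed25519, but the hash chain shows why a retroactive alteration invalidates everything downstream. All names here are illustrative:

```python
import hashlib
import json

def seal(record: dict, prev_seal: str) -> str:
    """Each seal commits to the record AND the previous seal, forming a chain."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256((prev_seal + canonical).encode("utf-8")).hexdigest()

def verify_chain(records, seals, genesis="") -> bool:
    prev = genesis
    for record, s in zip(records, seals):
        if seal(record, prev) != s:
            return False  # the chain is invalid from this point forward
        prev = s
    return True

records = [{"v": 1}, {"v": 2}, {"v": 3}]
seals, prev = [], ""
for r in records:
    prev = seal(r, prev)
    seals.append(prev)

assert verify_chain(records, seals)
records[1]["v"] = 99                  # retroactive alteration of a sealed record
assert not verify_chain(records, seals)
```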

The substrate offers two notarization modes, configurable per tenant:

Mode A — internal certificate. Records are signed under a key held by the substrate, anchored to an internal certificate authority. Mode A is sufficient for SOC 2 baseline obligations and is the floor of the substrate's notarization commitment. A tenant on Mode A obtains tamper-evidence relative to the substrate operator: any alteration of a sealed record by the operator after the fact is detectable, and any alteration by an external attacker is detectable, but a recipient who does not trust the operator cannot, on Mode A alone, prove the records were sealed at any specific historical moment.

Mode B — Mode A plus periodic Merkle anchoring to a public chain. In Mode B, the substrate periodically computes a Merkle root over the records sealed since the prior anchor and publishes that root to a public chain (the choice of which public chain is a deployment parameter). Mode B grants the verify-without-trusting-the-platform property: a recipient holding an exported record and the corresponding Merkle inclusion proof can verify, against the public chain, that the record's contents existed and were sealed before the public-chain timestamp of the anchoring transaction. Mode B is the appropriate setting for tenants whose downstream consumers (regulators, commercial counterparties) require independent verification.
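A minimal sketch of the Mode B machinery, assuming a standard binary Merkle tree that duplicates the last node on odd-sized levels (the tree construction is a deployment detail this document does not fix):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle root over leaf hashes; this is what Mode B anchors on a public chain."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes (with side) needed to recompute the root from one leaf."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], "left" if sibling < index else "right"))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root) -> bool:
    acc = h(leaf)
    for sibling, side in proof:
        acc = h(sibling + acc) if side == "left" else h(acc + sibling)
    return acc == root

leaves = [b"rec-1", b"rec-2", b"rec-3", b"rec-4"]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 2)
assert verify_inclusion(b"rec-3", proof, root)
assert not verify_inclusion(b"rec-X", proof, root)
```

A recipient who holds a record, its inclusion proof, and the public-chain transaction carrying the root needs nothing else from the substrate to verify the anchor.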

Each export from the substrate carries a verification artifact: a self-contained bundle of the exported records, the seal chain over those records, the methodology specification versions under which any derivations were computed, the authorizing principal's identity and signature on the export, and (in Mode B) the Merkle inclusion proofs and the public-chain transaction identifiers required to verify the anchors. The verification artifact is what the recipient validates; the substrate's continued availability is not required for validation.

8. Methodology versioning — the five rules

The hard part of operating a derivations ledger is not writing the first version of the methodology specification. It is writing the second. The first pathology of section 2 clusters around precisely this transition. The substrate enforces five rules.

8.1 The five rules

Rule 1: active is a derived attribute, not a stored flag. Whether a methodology specification version is currently active is computed by walking the audit ledger for that lineage's registration and deactivation entries; it is not stored on the methodology specification record itself. This rule is load-bearing because it makes the active state recoverable from history: any disagreement about whether a version was active at some past timestamp is resolved by replaying the audit ledger to that timestamp, not by trusting whatever flag happens to be set on a row right now.

Rule 2: at most one version of a methodology lineage may be active at any time. A "lineage" is a chain of methodology specification versions that succeed one another (v1 → v2 → v3...). Two versions of the same lineage cannot be simultaneously active; the substrate would not know which one to use to compute a new derivation. (Different lineages are independent and may all be active simultaneously.)

Rule 3: supersession is atomic. When v2 supersedes v1, the audit ledger records a single compound entry that both registers-and-activates v2 and deactivates v1. Never two entries. A two-entry implementation has a window — even a millisecond — during which either both versions are active (violating Rule 2) or neither is active (the lineage has no current version, and a derivation request in that window has no methodology under which to compute). The compound entry is one act, audit-ledger-atomic.

Rule 4: soft-delete is a derived predicate, not a stored state. A methodology specification version is "soft-deleted" iff every version in its lineage is inactive. There is no is_deleted column. A lineage with v1 superseded by v2, both inactive, is soft-deleted; a lineage where v3 is currently active is not soft-deleted, even if v1 and v2 are inactive. This rule makes soft-delete reversible by future registration without any retroactive flag-flipping.

Rule 5: rollback is re-registration. Superseded versions cannot be reactivated. If after running v2 for a quarter the operator decides v1's rules were preferable, the operator does not reactivate v1. The operator registers v3, whose computation rules mirror v1's, atomically deactivating v2 (Rule 3). v3 is a new version with a new identity and a new registration timestamp; derivations produced under v3 are distinct from those produced under v1, even though the rules are identical. This rule preserves the monotonic forward direction of the lineage and avoids the ambiguity of "is this derivation under the old v1 or the re-activated v1?"
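The five rules together can be sketched as a replay over audit entries. The entry shapes and act names below are hypothetical; what matters is that active state is computed, never stored (Rule 1), and that supersession arrives as one compound entry (Rule 3):

```python
def active_version(audit_entries, lineage, as_of):
    """Rule 1: 'active' is derived by replaying the audit ledger, never stored."""
    active = None
    for e in sorted(audit_entries, key=lambda e: e["ts"]):
        if e["lineage"] != lineage or e["ts"] > as_of:
            continue
        if e["act"] == "register_and_activate":
            active = e["version"]
        elif e["act"] == "supersede":
            # Rule 3: one compound entry activates the successor and
            # deactivates the predecessor in a single act.
            active = e["new_version"]
        elif e["act"] == "deactivate":
            active = None if active == e["version"] else active
    return active

entries = [
    {"ts": 1, "lineage": "alloc", "act": "register_and_activate", "version": "v1"},
    {"ts": 5, "lineage": "alloc", "act": "supersede",
     "old_version": "v1", "new_version": "v2"},
]
# Any past timestamp's active state is recoverable by replaying to that point.
assert active_version(entries, "alloc", as_of=3) == "v1"
assert active_version(entries, "alloc", as_of=9) == "v2"
```

Rule 4 falls out of the same replay: a lineage is soft-deleted exactly when this function returns None for it at the present timestamp, with no column to desynchronize.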

8.2 The state matrix

Every methodology specification version sits at the intersection of two axes: whether it is currently active (computed per Rule 1), and whether it has a current successor in its lineage (i.e., a successor version that is itself either active or has its own current successor). The matrix has four cells; the rules forbid one of them.

State | Active | Inactive
Current (no successor or successor is current) | active, current | inactive, no-current-successor (never had a successor, or all successors are themselves inactive — i.e., the lineage is dormant but not soft-deleted in the technical sense)
Superseded (a current successor exists) | forbidden (violates Rule 2) | inactive, superseded

The forbidden cell — (active, superseded) — is the cell the rules exist to prevent. A version that is both active and superseded means the lineage has two active versions (this one, and the current successor), which is precisely Rule 2's violation. The substrate's write path checks this invariant on every methodology-related audit entry; any transition that would produce a forbidden-cell state is rejected at write time, before the audit entry is sealed.

8.3 Why these rules are load-bearing

These five rules are not a coding convention. They are the substrate's only defense against the first pathology of section 2 (immutable provenance, mutable methodology). Without them, the derivations ledger drifts: rows get flagged active and inactive in non-atomic sequences, soft-delete columns desynchronize from registration history, rollbacks resurrect old versions and produce derivations whose lineage cannot be reconstructed, and the question "which methodology produced this derivation?" becomes unanswerable. With them, the derivations ledger is a forest of parallel tracks, every track fully recoverable, every transition between tracks recorded as a single atomic act in the audit ledger.

9. Two-witness disagreements

When two instruments observe the same physical phenomenon and disagree, the disagreement is itself a finding of operational consequence. The methodology refuses the convenience of resolving such disagreements silently.

9.1 The rule

Both readings are recorded in the operational ledger. A link record is written that names both observations and flags them as a two-witness disagreement on a specific phenomenon (the phenomenon is named explicitly: bottomhole pressure at depth X, formation top of unit Y, custody-transfer volume for batch Z). Neither observation is altered, suppressed, or annotated as "wrong"; the link record is the structure that says "these two records purport to measure the same thing and they don't agree."

9.2 Adjudication

Adjudication is a separate, recorded event, written to the audit ledger. The adjudicator may be automated (a per-tenant rule set: "prefer the memory gauge over the surface readout when the spread exceeds 2%", "always escalate disagreements involving custody-transfer meters") or human (a principal with the Approver role for that tenant, escalated from automated rules or invoked directly). The adjudicator's identity, the rule set version (if automated) or the human principal (if manual), the rationale, and the resolution are all sealed into the audit entry.

9.3 The resolution as a third record

The adjudication's outcome is a third record — a derivation record under a methodology specification version that handles two-witness disagreements — that references both original observations and the audit entry recording the adjudication. The ruling never replaces the disagreeing observations. Downstream consumers reading the derivations ledger see the resolved value; downstream consumers reading the operational ledger see both original observations and the link record; downstream consumers reading the audit ledger see who adjudicated, when, and why. All three views are simultaneously and permanently recoverable.
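The detection half of this machinery can be sketched as follows; the 2% threshold and the field names are illustrative, borrowed from the automated-rule example in section 9.2:

```python
def check_two_witness(obs_a, obs_b, phenomenon, spread_threshold=0.02):
    """Record both readings; if they disagree beyond the threshold, emit a link
    record naming both. Neither observation is altered or suppressed."""
    spread = abs(obs_a["value"] - obs_b["value"]) / max(
        abs(obs_a["value"]), abs(obs_b["value"]))
    if spread <= spread_threshold:
        return None  # agreement within tolerance: no link record needed
    return {
        "record_kind": "two_witness_link",
        "phenomenon": phenomenon,                 # named explicitly
        "observations": [obs_a["record_id"], obs_b["record_id"]],
        "spread": spread,
        # Adjudication happens later, as a separate audit-ledger event;
        # its outcome is a third record that references this link.
        "status": "unadjudicated",
    }

gauge = {"record_id": "obs-001", "value": 18437.2}    # memory gauge
surface = {"record_id": "obs-002", "value": 17850.0}  # surface readout
link = check_two_witness(gauge, surface, "bottomhole pressure at depth X")
assert link is not None and link["status"] == "unadjudicated"
```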

10. Multi-tenant compliance

The substrate supports three isolation tiers and a default-deny audit posture.

10.1 The three isolation tiers

Tier | Topology | Typical tenant
Shared with row-level security | Single database, tenant-discriminated rows, RLS-enforced isolation | Small tenants with no specific compliance isolation requirement
Dedicated database per tenant | One physical database per tenant on shared infrastructure | Whitelabel deployments and large operators with contractual or regulatory isolation requirements
Single-tenant on-prem deployment | Full substrate stack deployed inside the customer's evidence boundary, with egress restrictions | Customers whose regulators or contracts require physical-jurisdiction or air-gap evidence boundaries

The tier is a deployment-time decision and is, itself, recorded in the audit ledger of the deployment that hosts the tenant. Migration between tiers (a small tenant on the shared tier outgrows it and requires the dedicated tier) is a substantial operational event, planned and audited; the methodology does not pretend it is free.

10.2 Default-deny audit coverage

Every event in the substrate is an audit entry by default. Exclusions exist — high-cardinality routine reads (a dashboard polling the operational ledger every five seconds), debug traces emitted during incident response, health-check pings between cloud-tier services — because auditing them all would saturate the audit ledger and obscure the entries that matter. But each exclusion is itself an audit entry, authored by a principal with the Approver role, scoped (which event types, on which tenants, on which subsystems), expiring (an explicit end timestamp, not "until further notice"), and rationale-bearing (a human-readable justification recorded on the entry).

10.3 The global must-not-exclude list

A small set of event types may never be excluded, on any tenant, by any principal, under any rationale. This list is hard-coded in the substrate and applies globally:

  • Methodology specification registrations, deactivations, and lineage transitions
  • Period close events and the seals they produce
  • Exports of any kind, to any destination
  • Identity-credential lifecycle events (issuance, rotation, revocation)
  • Egress policy changes
  • Notary mode transitions (Mode A ↔ Mode B)
  • Signing-key rotations on any signing key in the substrate

An attempt to register an exclusion that intersects the must-not-exclude list is rejected at write time, before the exclusion's audit entry would be sealed. The list is a structural property of the substrate, not a configuration value.
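A write-time check of this kind might look as follows; the event-type identifiers are illustrative stand-ins for the bulleted list above:

```python
# Illustrative stand-ins for the substrate's hard-coded must-not-exclude list.
MUST_NOT_EXCLUDE = frozenset({
    "methodology_registration", "methodology_deactivation", "lineage_transition",
    "period_close", "export", "credential_lifecycle",
    "egress_policy_change", "notary_mode_transition", "signing_key_rotation",
})

def validate_exclusion(exclusion: dict) -> dict:
    """Reject a proposed audit exclusion at write time, before its entry is sealed."""
    overlap = set(exclusion["event_types"]) & MUST_NOT_EXCLUDE
    if overlap:
        raise ValueError(f"must-not-exclude event types: {sorted(overlap)}")
    if not exclusion.get("expires_at"):
        raise ValueError("exclusions must carry an explicit end timestamp")
    if not exclusion.get("rationale"):
        raise ValueError("exclusions must carry a human-readable rationale")
    return exclusion  # the caller seals this as its own audit entry

ok = validate_exclusion({
    "event_types": ["dashboard_read"],
    "expires_at": "2026-09-30T00:00:00Z",
    "rationale": "high-cardinality routine polling saturates the audit ledger",
})
assert ok["event_types"] == ["dashboard_read"]
```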

11. The metaphor-vocabulary discipline

The fourth pathology of section 2 — transient-architecture vocabularies — is a pathology of naming. It deserves a methodological response of its own.

11.1 What goes wrong without it

When a system's vocabulary is decorative, commitments leak away. A component named for the technology behind it ("the message bus", "the columnar store") tells the next engineer nothing about what guarantees the component is supposed to provide; renaming the technology without changing the guarantees, or vice versa, is silent. A component named for the team that built it tells the next engineer even less, and the team itself may no longer exist. A component named for the bug that prompted its introduction is the worst case: the name is meaningful only to the engineer who remembers the bug, and that engineer leaves.

11.2 What a coherent metaphor does

When the names are drawn from a single coherent source domain — a metaphor whose institutions match the architectural commitments the substrate is making — two things happen. First, the names encode commitments: the institution being borrowed already had a job, already had constraints, already had relationships to other institutions, and importing the name imports those structures as a constraint on the implementation. Second, the names generate further names organically: as the architecture grows, the next component to be added has a natural place in the source-domain institutional landscape, and its name emerges from that landscape rather than from whatever technology happens to implement it.

11.3 What the methodology prescribes — and what it does not

The canonical methodology prescribes that a metaphor be chosen, that it be coherent (drawn from a single institutional source domain), and that its institutions match the architectural commitments. The methodology does not prescribe which metaphor: pick one whose institutions match the commitments your substrate makes. A platform whose central commitment is custodial preservation might draw its vocabulary from archival institutions; a platform whose central commitment is adversarial verification might draw from judicial institutions; a platform built around append-only ledgering and notarization might draw from a settlement system that already operated with clerks, registries, and seals. The methodology is metaphor-agnostic; the choice is part of instantiating it.

11.4 The IETF draft

The engineering theory of the metaphor-vocabulary discipline — the rules for choosing a source domain, the constraints on coherence, the cataloging of borrowed institutions, and the working examples — is the subject of the IETF draft draft-rodriguez-grana-metaphor-vocabularies (Rodriguez & Graña). Practitioners building substrates under this methodology should treat that draft as the reference text for the vocabulary discipline. One worked instantiation of the methodology — including a chosen metaphor (rural-pueblo institutions of nineteenth-century interior Argentina) and the full vocabulary that metaphor produces — is documented separately as the El Mundo white paper.

12. Implementation notes and engagement

What is hard, what is easy, and where RCI is in implementing the methodology.

12.1 What is hard

Methodology versioning state model. The five rules of section 8 are simple to state and surprisingly hard to implement correctly. The hardest part is Rule 3 (atomic supersession): most data stores do not provide the atomicity guarantee the rule requires, and naïve implementations on top of those stores have the millisecond-window vulnerabilities the rule exists to prevent. The implementation requires care.
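One way to obtain the atomicity Rule 3 requires is to make the compound entry a single row written in a single transaction. A sketch using SQLite (illustrative schema; any store with transactional inserts works, and the point is that there is exactly one entry, never two):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE audit_ledger (
    entry_id    INTEGER PRIMARY KEY,
    act         TEXT NOT NULL,
    lineage     TEXT NOT NULL,
    old_version TEXT,
    new_version TEXT)""")

def supersede(lineage: str, old_version: str, new_version: str):
    """Rule 3: one compound entry, one transaction. There is no window in
    which the lineage has zero or two active versions."""
    with conn:  # commits on success, rolls back on any exception
        conn.execute(
            "INSERT INTO audit_ledger (act, lineage, old_version, new_version) "
            "VALUES ('supersede', ?, ?, ?)",
            (lineage, old_version, new_version))

supersede("alloc", "v1", "v2")
rows = conn.execute(
    "SELECT act, old_version, new_version FROM audit_ledger").fetchall()
assert rows == [("supersede", "v1", "v2")]  # a single compound entry, not two
```

Stores without transactional single-row semantics need an equivalent construction (compare-and-swap, a serialized writer), which is exactly where the naïve implementations go wrong.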

Multi-tenant isolation tiers. The shared tier with row-level security is the cheapest to operate but the most error-prone; the dedicated-database tier is straightforward to operate but requires careful per-tenant deployment automation; the single-tenant on-prem tier is operationally demanding because every site is its own deployment. Choosing the right tier per tenant, and operating all three tiers concurrently, is a substantial engineering investment.

Default-deny audit with global must-not-exclude. Most logging frameworks default to "log nothing unless asked"; this methodology requires the inverse, with structured exclusions. Retrofitting default-deny audit onto an existing platform is harder than building it in from the start.
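The inversion is easier to see in code. The following is a hedged sketch, assuming the audit policy is a category-based record-everything rule with a structured exclusion list, and that the global must-not-exclude set is enforced at policy construction time; the category names are illustrative, not the methodology's.

```python
# Hypothetical protected categories that no tenant or operator
# configuration may ever exclude from the audit ledger.
MUST_NOT_EXCLUDE = {"authorization", "methodology_transition", "export"}

class AuditPolicy:
    """Default-deny audit: record every event unless structurally excluded."""
    def __init__(self, exclusions=()):
        # Refuse to construct a policy that excludes a protected category,
        # rather than silently dropping those events at log time.
        bad = set(exclusions) & MUST_NOT_EXCLUDE
        if bad:
            raise ValueError(f"cannot exclude protected categories: {bad}")
        self.exclusions = set(exclusions)

    def should_record(self, category):
        # The inverse of "log nothing unless asked": everything is
        # recorded unless its category was explicitly excluded.
        return category not in self.exclusions

policy = AuditPolicy(exclusions={"heartbeat"})
policy.should_record("export")     # True: never excludable
policy.should_record("heartbeat")  # False: structurally excluded
```

Enforcing the must-not-exclude set at construction rather than at log time means a misconfiguration fails loudly at deployment instead of quietly losing protected events.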

Cryptographic notary key custody. Mode A requires only a competent internal certificate practice; Mode B requires that practice plus an operational discipline around periodic anchoring transactions on a public chain — including the cost, the reliability, and the key custody for the public-chain account that signs the anchoring transactions. Key compromise (or loss) at the notary is the substrate's worst-case operational event.
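As a rough illustration of what a Mode B anchoring transaction carries, the sketch below folds ledger entries into a running SHA-256 hash chain whose head digest is what would be published to the public chain. This is my assumed shape for the anchoring payload, not a specification; the publishing step and key custody for the signing account are out of scope here.

```python
import hashlib

def chain_head(entries, prev_head=b"\x00" * 32):
    """Fold each serialized ledger entry into a running SHA-256 chain head."""
    head = prev_head
    for entry in entries:
        head = hashlib.sha256(head + entry.encode()).digest()
    return head

# Illustrative batch of serialized ledger entries since the last anchor.
batch = ['{"sensor":"p-101","value":14.7}', '{"sensor":"p-102","value":15.1}']

anchor = chain_head(batch)    # this 32-byte digest is what gets anchored
replayed = chain_head(batch)  # any holder of the entries can recompute it
# identical entries reproduce the head; any tampering changes it
```

The operational burden described above is everything around this computation: signing the anchoring transaction on schedule, paying for it, and keeping the signing key — whose compromise or loss is the worst-case event — in custody.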

Conflict-resolution adjudicator selection. Per-tenant rule sets for automated adjudication, escalation paths to human Approvers, and the operational discipline that ensures Approvers are responsive — all of this must exist before the first two-witness disagreement arrives. Building the machinery after the disagreement arrives is too late.
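The shape of that machinery can be sketched as follows — a hedged illustration, assuming per-tenant rule sets are ordered lists of automated rules that either decide or abstain, with abstention on every rule escalating to a named human Approver. The rule, tenant, and Approver names are hypothetical.

```python
def prefer_earlier_timestamp(a, b):
    """Example automated rule: keep whichever witness reported first."""
    if a["ts"] != b["ts"]:
        return a if a["ts"] < b["ts"] else b
    return None  # rule abstains: it cannot decide this disagreement

# Per-tenant configuration that must exist before the first disagreement.
TENANT_RULES = {"acme": [prefer_earlier_timestamp]}
TENANT_APPROVERS = {"acme": "ops-approver@acme.example"}

def adjudicate(tenant, a, b):
    """Run the tenant's rules in order; escalate if none decides."""
    for rule in TENANT_RULES.get(tenant, []):
        winner = rule(a, b)
        if winner is not None:
            return ("auto", winner)
    return ("escalate", TENANT_APPROVERS[tenant])

outcome = adjudicate("acme", {"ts": 10, "value": 1.0}, {"ts": 12, "value": 1.1})
# → ("auto", {"ts": 10, "value": 1.0})
```

The escalation branch is why Approver responsiveness is an operational requirement and not a nicety: every disagreement the rules cannot decide blocks on a human.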

12.2 What is easy

The field-tier agent, if the operator already has an ASCII-CSV ingest pipeline. The agent's protocol decoders are modular; adding a new decoder for a site-specific format is routine engineering. Operators who already standardize on ASCII-CSV at the site can adopt the field tier with minimal site-side change.
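The modularity claim can be made concrete with a minimal registry sketch — my illustration of the pattern, not the agent's actual interface — in which each site format registers a decoder keyed by format name, so adding a site-specific format is one function plus one registration.

```python
# Hypothetical decoder registry for the field-tier agent.
DECODERS = {}

def decoder(fmt):
    """Register a decode function under a site-format name."""
    def register(fn):
        DECODERS[fmt] = fn
        return fn
    return register

@decoder("ascii-csv")
def decode_ascii_csv(line):
    """Decode a 'sensor,value' line into a tagged reading."""
    sensor, value = line.strip().split(",")
    return {"sensor": sensor, "value": float(value)}

reading = DECODERS["ascii-csv"]("p-101,14.7")
# → {"sensor": "p-101", "value": 14.7}
```

A new site format slots in as another decorated function; nothing in the agent's ingest loop changes.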

The long-term archive, once the retention model is decided. The archive itself is technologically conventional (object storage with versioning, lifecycle rules, and a catalog). The hard part is deciding the retention model — for how long, under what regulatory regime, with what access controls — and the methodology informs that decision but does not make it.

12.3 Where RCI is

RCI's Data Platform for an Oil & Gas Operator is a multi-component well-construction information platform spanning master-data catalogs, field reporting data (from both a legacy system more than twenty years old and a new reporting system), AFE (Authorization for Expenditure) modeling, a Well File lifecycle system, and a drilling-and-completions analytics system. Its first system went into official operation in May 2026 for all types of field operations in well construction, and the platform continues to expand through 2026 and 2027. That platform was the first implementation of the ledgering approach described in sections 3 through 10; the methodology presented here is its formalization and reusable expression.

Two design partnerships are in motion. The first is the continued expansion of the Oil & Gas Data Platform — adding capabilities across the components above and progressively incorporating the methodology's formalized constructs as they reach production readiness. The second is a methodology-adoption partnership with a geothermal utility: a shallow-well, water-well-permitted, temperature-equalization project that adopts the methodology's record-keeping concepts for its own planning-and-reporting platform — adopting the substrate's discipline without deploying the field tier described in section 5. The second partnership validates the methodology's transferability across industrial domains and regulatory regimes; the first validates its operational soundness at production scale.

12.4 How to engage RCI

RCI works with industrial-data customers as a design partner, not as a vendor of a finished product. The engagement starts with a methodology fit assessment (does this substrate match the customer's compliance regime, evidence requirements, and operating topology?) and proceeds through a phased implementation. Customers who wish to engage should write to pueblo@roderickc.com.

Companion document: the El Mundo white paper — one worked instantiation of this methodology.