Workgroup: Remote ATtestation procedureS
Internet-Draft: draft-condrey-rats-pop-appraisal-03
Published: February 2026
Intended Status: Experimental
Expires: 20 August 2026
Author: D. Condrey, WritersLogic

Proof of Process (PoP): Forensic Appraisal and Security Model

Abstract

This document specifies the forensic appraisal methodology and quantitative security model for the Proof of Process (PoP) framework. It defines how Verifiers evaluate behavioral entropy, perform liveness detection, and calculate forgery cost bounds. Additionally, it establishes the taxonomy for Absence Proofs and the Tool Receipt protocol for AI attribution within the linear human authoring process.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 20 August 2026.


1. Introduction

The value of Proof of Process (PoP) evidence lies in the Verifier's ability to distinguish biological effort from algorithmic simulation. While traditional RATS [RFC9334] appraisals verify system state, PoP appraisal verifies a continuous physical process. This document provides the normative framework for forensic appraisal, defining the logic required to generate a Writers Authenticity Report (WAR).

2. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Step-by-Step Verification Procedure

A Verifier MUST perform the following procedure to appraise a PoP Evidence Packet:

  1. Chain Integrity: Verify the SHA-256 hash link between all checkpoints. Any break invalidates the subsequent chain.
  2. Temporal Order: For each process-proof, recompute Argon2id from the declared seed to obtain state_0, then verify sampled Merkle proofs against the committed root. Ensure claimed duration is consistent with SWF difficulty parameters.
  3. Entropy Threshold: Verify that the entropy-estimate field in each jitter-binding structure meets or exceeds 128 bits. Low-entropy segments (below threshold) MUST be flagged as "Non-Biological."
  4. Entanglement: Verify the HMAC signature (entangled-mac) over the combined document, jitter, and physical state.
  5. State Matching: Reconstruct the document from the semantic event transcript and verify the final hash matches the document-ref.
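
Steps 1, 2 (the monotonicity portion), and 5 can be re-checked with nothing beyond a hash library. The following sketch assumes a simplified checkpoint representation; the field names (prev_hash, timestamp, payload) are illustrative, and the normative encoding is defined in [PoP-Protocol]:

```python
import hashlib

def verify_chain(checkpoints, document_bytes, document_ref):
    """Sketch of chain appraisal; checkpoint field names are illustrative."""
    prev_hash = b"\x00" * 32  # assumed genesis link
    prev_ts = -1.0
    for cp in checkpoints:
        # Step 1: chain integrity -- each checkpoint commits to its predecessor.
        if cp["prev_hash"] != prev_hash:
            return False, "chain break"
        # Temporal monotonicity: timestamps must strictly increase.
        if cp["timestamp"] <= prev_ts:
            return False, "timestamp out of order"
        prev_hash = hashlib.sha256(cp["payload"]).digest()
        prev_ts = cp["timestamp"]
    # Step 5: the final document hash must match the committed document-ref.
    if hashlib.sha256(document_bytes).hexdigest() != document_ref:
        return False, "document-ref mismatch"
    return True, "ok"
```

Any break invalidates the subsequent chain, so a production Verifier would record the index of the first failure rather than returning immediately.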

4. Forensic Assessment Mechanisms

The appraisal logic is designed to detect "Synthetic Authoring" -- content generated by AI and subsequently "back-filled" with timing and hardware attestation.

SNR (Signal-to-Noise Ratio) Analysis:
Verifiers MUST evaluate the stochasticity of jitter. High-precision robotic injection often exhibits "perfect" noise or periodic patterns. Human motor signals exhibit 1/f fractal noise signatures [Monrose2000] that are computationally expensive to simulate realistically.
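
As an informal illustration, the 1/f character of a jitter trace can be screened by estimating the log-log slope of its power spectrum: human motor noise trends toward a slope near -1, while flat (white) spectra from "perfect" synthetic noise sit near 0. The function and any decision boundary applied to it are illustrative, not normative:

```python
import numpy as np

def spectral_slope(ikis):
    """Estimate the log-log power-spectrum slope of an inter-keystroke-
    interval series. Slopes near -1 are consistent with 1/f motor noise;
    slopes near 0 suggest flat (white or injected) jitter."""
    x = np.asarray(ikis, dtype=float)
    x = x - x.mean()                      # remove DC component
    psd = np.abs(np.fft.rfft(x)) ** 2     # periodogram estimate
    freqs = np.fft.rfftfreq(len(x))
    mask = freqs > 0                      # exclude the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask] + 1e-12), 1)
    return slope
```

A single-segment periodogram is noisy; a real Verifier would average over windows (Welch's method) and separately test for discrete periodic peaks.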
Cognitive Load Correlation (CLC):
To defeat high-fidelity AI jitter models, Verifiers MUST correlate timing patterns with semantic complexity. Human authors exhibit increased inter-keystroke intervals (IKI) and pause frequency during the composition of high-entropy segments (e.g., technical definitions) compared to low-entropy segments (e.g., common connectors). A simulation that maintains a constant biological signature regardless of content complexity MUST be flagged as a "Semantic Mismatch."
Mechanical Turk Detection:
The Verifier analyzes intra-checkpoint correlation (C_intra) to detect "robotic pacing" -- an automated system maintaining a machine-clocked editing rate. Statistical correlation between pause duration and subsequent edit complexity is a mandatory check.
Error Topology Analysis:
Human authors exhibit characteristic patterns: localized corrections near recent insertions, fractal self-similarity in revision patterns across different time scales, and a deletion-to-new-content ratio consistent with human cognitive processing.
QR Presence Challenge (OOB-PC):
To bridge the digital-physical gap, the Attester MAY issue an Out-of-Band Presence Challenge. A non-deterministic QR code is displayed on the primary screen, which the author MUST scan with a registered secondary device (e.g., smartphone). The secondary device signs the challenge with its own hardware-bound key. Verifiers MUST verify the wall-clock alignment of this OOB signature within the checkpoint chain.
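
The signature-verification step of the OOB-PC depends on the secondary device's key algorithm and is not sketched here. The wall-clock alignment step, however, is a simple interval check; the skew tolerance below is an assumed parameter, not a normative value:

```python
def oob_aligned(oob_signed_at, cp_issue_ts, cp_next_ts, max_skew_s=30.0):
    """Check that the secondary device's signature over the QR challenge
    falls inside the checkpoint interval in which the challenge was
    issued, within an illustrative clock-skew tolerance (seconds)."""
    return (cp_issue_ts - max_skew_s) <= oob_signed_at <= (cp_next_ts + max_skew_s)
```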

5. Forgery Cost Bounds (Quantified Security)

Forgery cost bounds provide a Verifier with a lower bound on the computational resources required to forge an Evidence Packet. The cost (C_total) is computed as:

  C_total = C_swf + C_entropy + C_hardware

Verifiers MUST include these estimates in the WAR to allow Relying Parties to set trust thresholds based on objective economic risk.
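
The aggregation itself is trivial; the substance of the model lies in how each component is derived, which is left to the Verifier's cost model. A sketch of the WAR entry, with an illustrative Relying Party threshold:

```python
def forgery_cost_bound(c_swf, c_entropy, c_hardware):
    """C_total = C_swf + C_entropy + C_hardware (Section 5). Units and
    component derivations are defined by the Verifier's cost model."""
    return c_swf + c_entropy + c_hardware

def war_cost_entry(c_total, relying_party_threshold):
    # Relying Parties compare the bound against their own economic risk
    # tolerance; the threshold value here is hypothetical.
    return {"c_total": c_total,
            "meets_threshold": c_total >= relying_party_threshold}
```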

6. Absence Proofs: Negative Evidence Taxonomy

Absence proofs assert that certain events did NOT occur during the monitored session. They are divided into categories based on verifiability:

Type 1: Computationally-Bound Claims
Verifiable from the Evidence Packet alone (e.g., "Max single delta size < 500 bytes" or "No checkpoint timestamps out of order").
Type 2: Monitoring-Dependent Claims
Require trust in the AE's event monitoring (e.g., "No paste from unauthorized AI tool" or "No clipboard activity detected"). Trust in these claims MUST be weighted by the declared Attestation Tier (T1-T4).
Type 3: Environmental Claims
Assertions about the execution environment (e.g., "No debugger attached" or "Hardware temperature remained within stable physical bounds").
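
Type 1 claims are the only category a Verifier can settle without trusting the AE. The sketch below re-checks the two example claims from the taxonomy directly against (illustratively shaped) Evidence Packet fields:

```python
def verify_type1_claims(deltas, timestamps, max_delta_bytes=500):
    """Re-verify computationally-bound absence claims from the Evidence
    Packet alone. Input shapes are illustrative: deltas as byte strings,
    timestamps as a list of numbers in checkpoint order."""
    return {
        # "Max single delta size < 500 bytes"
        "max-delta-size": all(len(d) < max_delta_bytes for d in deltas),
        # "No checkpoint timestamps out of order"
        "timestamps-ordered": all(a < b for a, b in zip(timestamps, timestamps[1:])),
    }
```

Type 2 and Type 3 claims cannot be re-derived this way and retain only the weight of the declared Attestation Tier.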

7. Tool Receipt Protocol (AI Attribution)

When external tools (LLMs) contribute content, the framework enables a "compositional provenance" model:

  1. Receipt Signing: The Tool signs a "Receipt" containing its tool_id, an output_commit (SHA-256 hash of generated text), and an optional input_ref (SHA-256 hash of the prompt).
  2. Binding: The human Attester records a PASTE event in the transcript referencing the Tool Receipt's output_commit.
  3. Countersigning: The Attester binds the Receipt into the next human-driven checkpoint, anchoring the automated work into the linear human effort.

Verifiers appraise the ratio of human-to-machine effort based on these receipts and the intervening SWF-proved intervals.
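
Because the normative Receipt wire format and signature algorithms are not yet specified, the following is an illustrative sketch only: it stands in HMAC-SHA256 for the Tool signature and uses ad-hoc field names to show the two bindings a Verifier checks (signature over the Receipt, and PASTE-event agreement with output_commit):

```python
import hashlib
import hmac

def verify_receipt_binding(receipt, paste_event, tool_key):
    """Hypothetical Receipt appraisal: HMAC and field names are
    placeholders pending the normative specification."""
    # Binding 1: the Tool's signature covers tool_id and output_commit.
    msg = receipt["tool_id"].encode() + receipt["output_commit"]
    expected = hmac.new(tool_key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, receipt["sig"]):
        return False
    # Binding 2: the transcript's PASTE event references the same commit,
    # and the pasted text actually hashes to output_commit.
    return (paste_event["output_commit"] == receipt["output_commit"]
            and hashlib.sha256(paste_event["text"]).digest()
                == receipt["output_commit"])
```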

NOTE: The complete CDDL wire format for Tool Receipts, including signature algorithms and binding mechanisms, will be specified in a future revision of this document or a separate companion specification. Implementations SHOULD treat this section as informative guidance until normative wire formats are defined.

8. Adversary Model

This document inherits the adversary model defined in Section 4 of [PoP-Protocol]. The appraisal procedures defined herein assume the adversarial Attester capabilities and constraints specified there.

The primary threat is an adversarial Attester -- an author who controls the Attesting Environment and seeks to generate Evidence for content they did not authentically author. The forensic assessment mechanisms in Section 4 are specifically designed to detect the Retype Attack and other forgery vectors described in the protocol specification's threat model.

9. Privacy and Inclusivity

9.1. Privacy Considerations and Evidence Quantization

High-resolution behavioral data poses a stylometric de-anonymization risk [Goodman2007]. Implementations SHOULD support Evidence Quantization, reducing timing resolution to a level that maintains forensic confidence (detecting robots) while breaking unique author fingerprints.
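
Quantization can be as simple as bucketing timing values before evidence is emitted. The 10 ms bucket below is an assumed example; the resolution actually chosen is a policy trade-off between forensic confidence and privacy:

```python
def quantize_timings(ikis_ms, bucket_ms=10):
    """Evidence Quantization sketch: round inter-keystroke intervals to a
    coarse bucket. Coarse timing still exposes robotic regularity but
    removes the fine-grained resolution exploited by stylometric
    keystroke biometrics. Bucket size is illustrative."""
    return [round(t / bucket_ms) * bucket_ms for t in ikis_ms]
```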

9.2. Accessibility and Assistive Modes

Verifiers MUST NOT automatically reject evidence based solely on atypical timing patterns. Implementations MUST support "Assistive Modes" that adjust SNR and CLC thresholds for authors with motor disabilities or those using assistive technologies (eye-tracking, dictation).

To signal assistive mode usage, the Attester SHOULD include an assistive-mode indicator in the profile-declaration structure of the Evidence Packet. When this indicator is present, Verifiers MUST apply adjusted thresholds as follows:

  • SNR threshold: Reduced by 50% (accepting higher periodicity in motor signals)
  • CLC correlation threshold: r > 0.1 (reduced from r > 0.2)
  • Error topology: Waived for dictation modes; adjusted for eye-tracking
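
The adjustments above can be expressed as a threshold-selection table. The mode names and the representation of the SNR relaxation as a scale factor are illustrative; the numeric CLC values are taken from this section:

```python
def appraisal_thresholds(assistive_mode=None):
    """Select appraisal thresholds per Section 9.2. assistive_mode is
    None, "dictation", or "eye-tracking" (names hypothetical pending the
    CDDL extension in [PoP-Protocol])."""
    t = {"snr_scale": 1.0, "clc_r_min": 0.2, "error_topology": "required"}
    if assistive_mode:
        t["snr_scale"] = 0.5      # SNR threshold reduced by 50%
        t["clc_r_min"] = 0.1      # relaxed from 0.2
        if assistive_mode == "dictation":
            t["error_topology"] = "waived"
        else:
            t["error_topology"] = "adjusted"
    return t
```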

The WAR MUST indicate when assistive mode thresholds were applied. The specific CDDL extension for assistive-mode signaling will be defined in a future revision of [PoP-Protocol].

10. IANA Considerations

This document has no IANA actions. All IANA registrations for the PoP framework are defined in [PoP-Protocol].

11. Security Considerations

This document defines forensic appraisal procedures that inherit and extend the security model from [PoP-Protocol]. The broader RATS security considerations [Sardar-RATS] also apply. Implementers MUST consider the following security aspects:

11.1. Entropy Manipulation Attacks

An adversary may attempt to inject synthetic jitter patterns that satisfy entropy thresholds while lacking biological origin. Verifiers MUST employ multi-dimensional analysis (SNR, CLC, Error Topology) rather than relying on single metrics. The correlation between semantic content complexity and timing variation provides defense-in-depth against high-fidelity simulation.

11.2. Verifier Trust Model

The forensic assessments defined in this document produce probabilistic confidence scores, not binary determinations. Relying Parties MUST understand that forgery cost bounds represent economic estimates, not cryptographic guarantees. Trust decisions SHOULD incorporate the declared Attestation Tier (T1-T4) and the specific absence proof types claimed.

11.3. Stylometric De-anonymization

High-resolution behavioral data (keystroke timing, pause patterns) can enable author identification even when document content is not disclosed. Implementations SHOULD support Evidence Quantization to reduce timing resolution while maintaining forensic utility. The trade-off between forensic confidence and privacy MUST be documented for Relying Parties.

11.4. Assistive Mode Abuse

Adversaries may falsely claim assistive technology usage to bypass behavioral entropy checks. Verifiers SHOULD require consistent assistive mode declarations across sessions and MAY request additional out-of-band verification for mode changes. The WAR MUST clearly indicate when assistive modes were active.

12. References

12.1. Normative References

[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC9334]
Birkholz, H., Thaler, D., Richardson, M., Smith, N., and W. Pan, "Remote ATtestation procedureS (RATS) Architecture", RFC 9334, DOI 10.17487/RFC9334, January 2023, <https://www.rfc-editor.org/info/rfc9334>.

12.2. Informative References

[Goodman2007]
Goodman, A. and V. Zabala, "Using Stylometry for Biometric Keystroke Dynamics", <https://doi.org/10.1007/978-3-540-77343-6_14>.
[Monrose2000]
Monrose, F. and A. Rubin, "Keystroke dynamics as a biometric for authentication", <https://doi.org/10.1145/351427.351438>.
[PoP-Protocol]
Condrey, D., "Proof of Process (PoP): Architecture and Evidence Format", Work in Progress, Internet-Draft, draft-condrey-rats-pop-protocol-04, <https://datatracker.ietf.org/doc/html/draft-condrey-rats-pop-protocol-04>.
[Sardar-RATS]
Sardar, M.U., "Security Considerations for Remote ATtestation procedureS (RATS)", Work in Progress, Internet-Draft, draft-sardar-rats-sec-cons-02, <https://datatracker.ietf.org/doc/html/draft-sardar-rats-sec-cons-02>.

Appendix A: Verification Constraint Summary

The following constraints MUST be verified by conforming Verifiers:

A.1 Structural Integrity

  1. Chain Integrity: SHA-256 hash chain is unbroken from genesis to final checkpoint.
  2. Temporal Monotonicity: All checkpoint timestamps strictly exceed their predecessors.
  3. SWF Continuity: Recompute Argon2id from seed; verify sampled Merkle proofs.
  4. Content Binding: Final document hash matches document-ref in Evidence Packet.

A.2 Behavioral Analysis (ENHANCED/MAXIMUM profiles)

  1. Entropy Threshold: jitter-binding.entropy-estimate >= 128 bits per segment.
  2. SNR Analysis: Jitter exhibits 1/f fractal noise, not periodic or "perfect" patterns.
  3. CLC Correlation: Semantic complexity correlates with timing (r > 0.2, or r > 0.1 for assistive mode).
  4. Error Topology: Correction patterns consistent with human cognitive processing.
  5. Mechanical Turk Detection: No robotic pacing (machine-clocked editing rate).

A.3 Absence Proof Validation

  1. Type 1 Claims: Verify computationally from Evidence Packet (delta sizes, timestamp ordering).
  2. Type 2 Claims: Weight by Attestation Tier (T1-T4).
  3. Type 3 Claims: Evaluate environmental assertions against physical-state markers.

A.4 Tool Receipt Validation (when present)

  1. Verify Tool signature over Receipt.
  2. Verify PASTE event references correct output_commit.
  3. Calculate human-to-machine effort ratio from SWF-proved intervals.

Author's Address

David Condrey
WritersLogic Inc
San Diego, California
United States