Workgroup: Remote ATtestation procedureS
Internet-Draft: draft-condrey-rats-pop-appraisal-02
Published: February 2026
Intended Status: Standards Track
Expires: 18 August 2026
Author: D. Condrey, WritersLogic

Proof of Process (PoP): Forensic Appraisal and Security Model

Abstract

This document specifies the forensic appraisal methodology and quantitative security model for the Proof of Process (PoP) framework. It defines how Verifiers evaluate behavioral entropy, perform liveness detection, and calculate forgery cost bounds. Additionally, it establishes the taxonomy for Absence Proofs and the Tool Receipt protocol for AI attribution within the linear human authoring process.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 18 August 2026.

Table of Contents

   1.  Introduction
   2.  Requirements Language
   3.  Step-by-Step Verification Procedure
   4.  Forensic Assessment Mechanisms
   5.  Forgery Cost Bounds (Quantified Security)
   6.  Absence Proofs: Negative Evidence Taxonomy
   7.  Tool Receipt Protocol (AI Attribution)
   8.  Adversary Model and Goals
   9.  Privacy and Inclusivity
     9.1.  Privacy Considerations and Evidence Quantization
     9.2.  Accessibility and Assistive Modes
   10. IANA Considerations
   11. Security Considerations
     11.1.  Entropy Manipulation Attacks
     11.2.  Verifier Trust Model
     11.3.  Stylometric De-anonymization
     11.4.  Assistive Mode Abuse
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix: Verification Constraint Summary
   Author's Address

1. Introduction

The value of Proof of Process (PoP) evidence lies in the Verifier's ability to distinguish biological effort from algorithmic simulation. While traditional RATS [RFC9334] appraisals verify system state, PoP appraisal verifies a continuous physical process. This document provides the normative framework for forensic appraisal, defining the logic required to generate a Writers Authenticity Report (WAR).

2. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Step-by-Step Verification Procedure

A Verifier MUST perform the following procedure to appraise a PoP Evidence Packet (a non-normative sketch of steps 1 through 3 follows the list):

  1. Chain Integrity: Verify the SHA-256 hash links between consecutive checkpoints. Any break invalidates all subsequent checkpoints in the chain.
  2. Temporal Order: Re-calculate VDF outputs (Iterated Hash) or verify Pietrzak proofs for each segment. Ensure the claimed duration is consistent with the VDF difficulty parameter.
  3. Entropy Threshold: Ensure min-entropy (H_min) exceeds 128 bits per segment. Segments that fall below this threshold are flagged as "Non-Biological."
  4. Entanglement: Verify the HMAC signature (entangled-mac) over the combined document, jitter, and physical state.
  5. State Matching: Reconstruct the document from the semantic event transcript and verify the final hash matches the document-ref.
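
The following non-normative Python sketch illustrates one way a Verifier might implement steps 1 through 3 over a decoded Evidence Packet. The field names (prev_hash, canonical_bytes, timestamp, jitter_min_entropy_bits) are illustrative assumptions, not part of the wire format; VDF re-computation, entanglement verification, and state matching are elided.

   import hashlib

   MIN_ENTROPY_BITS = 128   # threshold from Section 3, step 3

   def appraise_checkpoints(checkpoints):
       """Return (index, finding) pairs for steps 1-3 of the procedure."""
       findings = []
       prev_hash = checkpoints[0]["prev_hash"]   # genesis reference
       prev_time = float("-inf")
       for i, cp in enumerate(checkpoints):
           # Step 1 - Chain Integrity: each checkpoint commits to the
           # SHA-256 hash of its predecessor.
           if cp["prev_hash"] != prev_hash:
               findings.append((i, "chain-break"))
           prev_hash = hashlib.sha256(cp["canonical_bytes"]).hexdigest()

           # Step 2 (partial) - Temporal Order: timestamps must strictly
           # increase; full VDF/Pietrzak verification is elided here.
           if cp["timestamp"] <= prev_time:
               findings.append((i, "non-monotonic-timestamp"))
           prev_time = cp["timestamp"]

           # Step 3 - Entropy Threshold: flag low-entropy jitter segments.
           if cp["jitter_min_entropy_bits"] < MIN_ENTROPY_BITS:
               findings.append((i, "non-biological"))
       return findings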

4. Forensic Assessment Mechanisms

The appraisal logic is designed to detect "Synthetic Authoring"—content generated by AI and subsequently "back-filled" with timing and hardware attestation.

SNR (Signal-to-Noise Ratio) Analysis:
Verifiers MUST evaluate the stochasticity of jitter. High-precision robotic injection often exhibits "perfect" noise or periodic patterns. Human motor signals exhibit 1/f fractal noise signatures [Monrose2000] that are computationally expensive to simulate realistically.
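
As an illustration only, the following Python sketch estimates the spectral slope of a jitter trace: human 1/f-like noise yields a log-log slope near -1, whereas flat ("perfect") or strongly periodic synthetic noise does not. The function and its interpretation thresholds are assumptions, not normative requirements.

   import numpy as np

   def spectral_slope(jitter_ms):
       """Fit the log-log slope of the jitter power spectrum."""
       x = np.asarray(jitter_ms, dtype=float)
       x = x - x.mean()                        # remove the DC component
       psd = np.abs(np.fft.rfft(x)) ** 2       # power spectral density
       freqs = np.fft.rfftfreq(len(x))
       mask = freqs > 0                        # exclude zero frequency
       slope, _ = np.polyfit(np.log(freqs[mask]),
                             np.log(psd[mask] + 1e-12), 1)
       return slope                            # approx. -1 suggests 1/f noise
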
Cognitive Load Correlation (CLC):
To defeat high-fidelity AI jitter models, Verifiers MUST correlate timing patterns with semantic complexity. Human authors exhibit increased inter-keystroke intervals (IKI) and pause frequency during the composition of high-entropy segments (e.g., technical definitions) compared to low-entropy segments (e.g., common connectors). A simulation that maintains a constant biological signature regardless of content complexity MUST be flagged as a "Semantic Mismatch."
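
A minimal, non-normative sketch of a CLC check is shown below. It assumes per-segment summaries (mean_iki_ms, content_entropy_bits) that are not defined by this document, and an illustrative threshold value.

   from statistics import correlation   # Python 3.10+

   SEMANTIC_MISMATCH_THRESHOLD = 0.2     # illustrative value only

   def clc_check(segments):
       """Flag evidence whose timing ignores semantic complexity."""
       ikis = [s["mean_iki_ms"] for s in segments]
       complexity = [s["content_entropy_bits"] for s in segments]
       r = correlation(ikis, complexity)
       # Human authors are expected to slow down on complex segments,
       # giving a clearly positive correlation.
       return "semantic-mismatch" if r < SEMANTIC_MISMATCH_THRESHOLD else "ok"
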
Mechanical Turk Detection:
Verifiers analyze intra-checkpoint correlation (C_intra) to detect "robotic pacing", in which an automated system maintains a machine-clocked editing rate. The statistical correlation between pause duration and the complexity of the subsequent edit is a mandatory check.
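
The following non-normative sketch computes two such indicators under assumed inputs: the coefficient of variation of inter-checkpoint pauses (near zero for machine-clocked pacing) and the pause-versus-edit-size correlation C_intra.

   from statistics import mean, pstdev, correlation   # Python 3.10+

   def pacing_check(pauses_ms, edit_sizes):
       """Return pacing variability and pause/edit-size correlation."""
       cv = pstdev(pauses_ms) / mean(pauses_ms)       # pacing variability
       c_intra = correlation(pauses_ms, edit_sizes)   # pause vs. edit size
       return {"pacing_cv": cv, "c_intra": c_intra}
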
Error Topology Analysis:
Human authors exhibit characteristic patterns: Localized corrections near recent insertions, fractal self-similarity in revision patterns across different time scales, and a specific ratio of deletions to new content consistent with human cognitive processing.
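
As a non-normative illustration, the sketch below derives two simple topology metrics from a transcript of edit events; the event fields ("op", "offset") and the locality window are assumptions made for the example.

   def error_topology(events, locality_window=40):
       """Compute deletion ratio and correction locality over edit events."""
       deletes = [e for e in events if e["op"] == "delete"]
       inserts = [e for e in events if e["op"] == "insert"]
       ratio = len(deletes) / max(len(inserts), 1)    # deletions per insertion
       local = sum(
           1 for prev, cur in zip(events, events[1:])
           if cur["op"] == "delete"
           and abs(cur["offset"] - prev["offset"]) <= locality_window
       )
       locality = local / max(len(deletes), 1)        # share of "nearby" fixes
       return {"delete_insert_ratio": ratio, "correction_locality": locality}
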
QR Presence Challenge (OOB-PC):
To bridge the digital-physical gap, the Attester MAY issue an Out-of-Band Presence Challenge. A non-deterministic QR code is displayed on the primary screen, which the author MUST scan with a registered secondary device (e.g., smartphone). The secondary device signs the challenge with its own hardware-bound key. Verifiers MUST verify the wall-clock alignment of this OOB signature within the checkpoint chain.
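
A minimal sketch of the alignment check follows. It assumes hypothetical field names for the challenge and checkpoint records and elides verification of the secondary device's signature itself.

   def oob_alignment_ok(challenge, checkpoint, max_skew_s=30):
       """Check that the OOB countersignature falls inside the checkpoint
       window and answers the nonce that was displayed."""
       t = challenge["signed_at"]
       start = checkpoint["start_time"] - max_skew_s
       end = checkpoint["end_time"] + max_skew_s
       return (challenge["nonce"] == checkpoint["oob_nonce"]
               and start <= t <= end)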

5. Forgery Cost Bounds (Quantified Security)

Forgery cost bounds provide a Verifier with a lower bound on the computational resources required to forge an Evidence Packet. The cost (C_total) is computed as:

  C_total = C_vdf + C_entropy + C_hardware

Verifiers MUST include these estimates in the WAR to allow Relying Parties to set trust thresholds based on objective economic risk.
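
The following non-normative sketch shows one way to materialize the cost bound. The interpretation of the components is an assumption based on their names (C_vdf for the sequential VDF work, C_entropy for synthesizing plausible behavioral entropy, C_hardware for defeating hardware attestation), and the unit prices are placeholders a Verifier would configure.

   def forgery_cost_usd(vdf_core_hours, entropy_model_hours, hw_attack_usd,
                        core_hour_usd=0.05, gpu_hour_usd=2.00):
       """Lower-bound cost estimate C_total for forging an Evidence Packet."""
       c_vdf = vdf_core_hours * core_hour_usd          # sequential VDF work
       c_entropy = entropy_model_hours * gpu_hour_usd  # entropy simulation
       c_hardware = hw_attack_usd                      # hardware compromise
       return c_vdf + c_entropy + c_hardware           # C_total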

6. Absence Proofs: Negative Evidence Taxonomy

Absence proofs assert that certain events did NOT occur during the monitored session. They are divided into categories based on verifiability:

Type 1: Computationally-Bound Claims
Verifiable from the Evidence Packet alone (e.g., "Max single delta size < 500 bytes" or "No checkpoint timestamps out of order").
Type 2: Monitoring-Dependent Claims
Require trust in the AE's event monitoring (e.g., "No paste from unauthorized AI tool" or "No clipboard activity detected"). Trust in these claims MUST be weighted by the declared Attestation Tier (T1-T4).
Type 3: Environmental Claims
Assertions about the execution environment (e.g., "No debugger attached" or "Hardware temperature remained within stable physical bounds").
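
As an illustration, Type 1 claims can be checked mechanically from the packet. The non-normative sketch below verifies the example claim "Max single delta size < 500 bytes" against an assumed event list; the field name delta_bytes is hypothetical.

   def verify_max_delta_claim(events, claimed_limit=500):
       """Type 1 check: largest single delta stays under the claimed limit."""
       largest = max((len(e["delta_bytes"]) for e in events), default=0)
       return largest < claimed_limit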

7. Tool Receipt Protocol (AI Attribution)

When external tools (LLMs) contribute content, the framework enables a "compositional provenance" model:

  1. Receipt Signing: The Tool signs a "Receipt" containing its tool_id, an output_commit (hash of generated text), and an optional input_ref.
  2. Binding: The human Attester records a PASTE event in the transcript referencing the Tool Receipt's commitment.
  3. Countersigning: The Attester binds the Receipt into the next human-driven checkpoint, anchoring the automated work into the linear human effort.

Verifiers appraise the ratio of human-to-machine effort based on these receipts and the intervening VDF-proved intervals.
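
A non-normative sketch of the binding check is given below; the field names (output_text, output_commit, tool_commit, countersigned_receipts) are assumptions, and signature verification over the Receipt and checkpoint is elided.

   import hashlib

   def receipt_bound(receipt, paste_event, checkpoint):
       """Check that a PASTE event and a later checkpoint both commit to
       the Tool Receipt's output_commit."""
       commit = hashlib.sha256(receipt["output_text"].encode()).hexdigest()
       return (commit == receipt["output_commit"]
               and paste_event["tool_commit"] == receipt["output_commit"]
               and receipt["output_commit"]
                   in checkpoint["countersigned_receipts"])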

8. Adversary Model and Goals

The security model assumes a "Rational Forger" whose goal is to minimize compute cost while maximizing forensic confidence.

9. Privacy and Inclusivity

9.1. Privacy Considerations and Evidence Quantization

High-resolution behavioral data poses a stylometric de-anonymization risk [Goodman2007]. Implementations SHOULD support Evidence Quantization, reducing timing resolution to a level that maintains forensic confidence (detecting robots) while breaking unique author fingerprints.
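
A minimal sketch of one quantization strategy follows: rounding timings to a coarse bucket keeps robotic pacing detectable while blurring per-author timing fingerprints. The 50 ms bucket size is a placeholder, not a recommended value.

   def quantize_timings(timings_ms, bucket_ms=50):
       """Round each timing sample to the nearest bucket boundary."""
       return [round(t / bucket_ms) * bucket_ms for t in timings_ms]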

9.2. Accessibility and Assistive Modes

Verifiers MUST NOT automatically reject evidence based solely on atypical timing patterns. Implementations MUST support "Assistive Modes" that adjust SNR and CLC thresholds for authors with motor disabilities or those using assistive technologies (eye-tracking, dictation).

10. IANA Considerations

This document has no IANA actions. All IANA registrations for the PoP framework are defined in [PoP-Protocol].

11. Security Considerations

This document defines forensic appraisal procedures that inherit and extend the security model from [PoP-Protocol]. The broader RATS security considerations [Sardar-RATS] also apply. Implementers MUST consider the following security aspects:

11.1. Entropy Manipulation Attacks

An adversary may attempt to inject synthetic jitter patterns that satisfy entropy thresholds while lacking biological origin. Verifiers MUST employ multi-dimensional analysis (SNR, CLC, Error Topology) rather than relying on single metrics. The correlation between semantic content complexity and timing variation provides defense-in-depth against high-fidelity simulation.

11.2. Verifier Trust Model

The forensic assessments defined in this document produce probabilistic confidence scores, not binary determinations. Relying Parties MUST understand that forgery cost bounds represent economic estimates, not cryptographic guarantees. Trust decisions SHOULD incorporate the declared Attestation Tier (T1-T4) and the specific absence proof types claimed.

11.3. Stylometric De-anonymization

High-resolution behavioral data (keystroke timing, pause patterns) can enable author identification even when document content is not disclosed. Implementations SHOULD support Evidence Quantization to reduce timing resolution while maintaining forensic utility. The trade-off between forensic confidence and privacy MUST be documented for Relying Parties.

11.4. Assistive Mode Abuse

Adversaries may falsely claim assistive technology usage to bypass behavioral entropy checks. Verifiers SHOULD require consistent assistive mode declarations across sessions and MAY request additional out-of-band verification for mode changes. The WAR MUST clearly indicate when assistive modes were active.

12. References

12.1. Normative References

[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC9334]
Birkholz, H., Thaler, D., Richardson, M., Smith, N., and W. Pan, "Remote ATtestation procedureS (RATS) Architecture", RFC 9334, DOI 10.17487/RFC9334, January 2023, <https://www.rfc-editor.org/info/rfc9334>.

12.2. Informative References

[Goodman2007]
Goodman, A. and V. Zabala, "Using Stylometry for Biometric Keystroke Dynamics", <https://doi.org/10.1007/978-3-540-77343-6_14>.
[Monrose2000]
Monrose, F. and A. Rubin, "Keystroke dynamics as a biometric for authentication", <https://doi.org/10.1145/351427.351438>.
[PoP-Protocol]
Condrey, D., "Proof of Process (PoP): Architecture, Evidence Format, and VDF", Work in Progress, Internet-Draft, draft-condrey-rats-pop-protocol-03, <https://datatracker.ietf.org/doc/html/draft-condrey-rats-pop-protocol-03>.
[Sardar-RATS]
Sardar, M.U., "Security Considerations for Remote ATtestation procedureS (RATS)", <https://www.researchgate.net/publication/380430034_Security_Considerations_for_Remote_ATtestation_procedureS_RATS>.

Appendix: Verification Constraint Summary

The following constraints MUST be verified by conforming Verifiers:

  1. VDF Continuity: H(out_{i-1}, content_i, jitter_i) == in_i.
  2. Temporal Monotonicity: Segment timestamps strictly exceed predecessors.
  3. Chain Integrity: SHA-256 hash chain is unbroken.
  4. Entropy Threshold: Jitter min-entropy (H_min) exceeds 128 bits per segment.

Author's Address

David Condrey
WritersLogic Inc
San Diego, California,
United States