This document provides an informational discussion of the conceptual relationship between remote attestation, as defined by the RATS (Remote ATtestation ProcedureS) Working Group, and behavioral evidence recording mechanisms. It observes that attestation and behavioral evidence recording address fundamentally different verification questions and can serve as complementary layers in comprehensive system accountability frameworks. This document is purely descriptive and does not propose any modifications to RATS architecture, define new attestation mechanisms, or establish normative requirements.¶
This note is to be removed before publishing as an RFC.¶
Discussion of this document takes place on the Remote ATtestation ProcedureS (RATS) Working Group mailing list (rats@ietf.org), which is archived at https://mailarchive.ietf.org/arch/browse/rats/.¶
Source for this draft and an issue tracker can be found at https://github.com/veritaschain/draft-kamimura-rats-behavioral-evidence.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 12 July 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The IETF RATS (Remote ATtestation ProcedureS) Working Group has developed a comprehensive architecture for remote attestation, enabling Relying Parties to assess the trustworthiness of remote systems through cryptographic evidence about their state. This attestation capability addresses a fundamental question in distributed systems: "Is this system in a trustworthy state?"¶
A related but distinct verification need exists in many operational contexts: the ability to verify what actions a system has actually performed. This question - "What did the system actually do?" - is addressed by cryptographically verifiable audit trails, which record system behaviors and decisions in tamper-evident formats.¶
This document observes that these two verification capabilities address different aspects of system accountability and can conceptually complement each other without overlapping in scope or responsibility. The document provides an explanatory framework for understanding how attestation and audit trails could relate within broader accountability architectures.¶
This document is purely INFORMATIONAL and NON-NORMATIVE. It:¶
This document uses the descriptive terms MAY, COULD, and CAN exclusively to indicate possibilities and observations. It does not use normative requirements language (MUST, SHOULD, SHALL), as it specifies no mandatory behaviors or requirements.¶
This document treats verifiable audit trail systems in general terms, using VeritasChain Protocol (VCP) [VCP-SPEC] as one illustrative example among various possible approaches to cryptographic audit logging.¶
Several domains increasingly require both trustworthiness assessment and behavioral verification:¶
Understanding the conceptual relationship between attestation and audit trails could help architects design accountability frameworks that leverage both capabilities appropriately, without conflating their distinct purposes.¶
This document reuses terminology from the RATS Architecture [RFC9334] without modification. The following terms are used as defined in that document:¶
The following terms are used in this document to describe audit trail concepts in general terms:¶
Note on "Audit" Terminology: The term "audit" in this document refers solely to post-hoc examination of recorded system behavior. It does not imply regulatory auditing, compliance verification, or financial auditing in any jurisdiction-specific sense. This usage is consistent with general systems engineering terminology (e.g., "audit log") rather than domain-specific compliance frameworks.¶
This section describes an observational framework for understanding how attestation and audit trails could conceptually relate as distinct layers of system accountability.¶
The RATS architecture addresses trustworthiness assessment through remote attestation. At its core, attestation answers questions about system state:¶
These questions are fundamentally about the properties and characteristics of a system at a point in time or across a measurement period. The RATS architecture provides mechanisms for generating, conveying, and appraising Evidence that enables Relying Parties to make trust decisions about Attesters.¶
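As a purely illustrative aid (and not part of the RATS architecture or any defined protocol), the appraisal step can be pictured as a Verifier comparing the claims carried in Evidence against Reference Values and producing an Attestation Result for a Relying Party to consume. The sketch below uses hypothetical data structures and function names; nothing in it is drawn from RFC 9334 or any RATS protocol document.¶

```python
# Hypothetical sketch of the appraisal step described above.
# The data structures and function names are illustrative only;
# they are not defined by RFC 9334 or any RATS protocol document.
from dataclasses import dataclass

@dataclass
class Evidence:
    attester_id: str
    claims: dict          # e.g. {"fw_version": "1.2.3", "secure_boot": True}

@dataclass
class AttestationResult:
    attester_id: str
    trustworthy: bool
    reasons: list

def appraise_evidence(evidence: Evidence, reference_values: dict) -> AttestationResult:
    """Verifier-side appraisal: compare claims against Reference Values."""
    reasons = []
    for claim, expected in reference_values.items():
        actual = evidence.claims.get(claim)
        if actual != expected:
            reasons.append(f"{claim}: expected {expected!r}, got {actual!r}")
    return AttestationResult(evidence.attester_id, not reasons, reasons)

# A Relying Party would consume the AttestationResult, not the raw Evidence.
result = appraise_evidence(
    Evidence("attester-01", {"fw_version": "1.2.3", "secure_boot": True}),
    {"fw_version": "1.2.3", "secure_boot": True},
)
print(result.trustworthy)   # True when all claims match the Reference Values
```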
Key characteristics of attestation as defined by RATS:¶
Cryptographically verifiable audit trails address a different category of verification need. Rather than assessing system state, audit trails record what a system has done:¶
These questions are fundamentally about system behavior over time. Verifiable audit trails could provide mechanisms for recording, preserving, and proving the integrity of behavioral records, enabling after-the-fact verification of system actions.¶
Key characteristics of audit trail systems (in general terms):¶
As an illustrative example, VCP [VCP-SPEC] defines audit trails using three integrity layers: event integrity (hashing), structural integrity (Merkle trees), and external verifiability (digital signatures and anchoring). Other audit trail systems could employ different but analogous mechanisms to achieve similar goals.¶
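The three layers named above can be pictured with a minimal, non-normative sketch: a per-event hash, a Merkle root computed over the event hashes, and a signature over that root that could later be anchored externally. The code below is an assumption-laden illustration only; it does not reflect the actual VCP encoding, field names, or signature scheme.¶

```python
# Illustrative sketch of the three integrity layers described above.
# This is NOT the VCP wire format; the field names and the use of HMAC as a
# stand-in "signature" are assumptions made purely for illustration.
import hashlib, hmac, json

def event_hash(event: dict) -> bytes:
    """Layer 1: event integrity via a hash over a canonical encoding."""
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Layer 2: structural integrity via a binary Merkle tree."""
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

events = [
    {"seq": 1, "action": "order.submit", "qty": 100},
    {"seq": 2, "action": "order.cancel", "qty": 100},
]
leaves = [event_hash(e) for e in events]
root = merkle_root(leaves)

# Layer 3: external verifiability. A real system would sign the root with an
# asymmetric key and anchor it externally; HMAC here is only a placeholder.
signing_key = b"demo-key-not-for-production"
signature = hmac.new(signing_key, root, hashlib.sha256).hexdigest()
print(root.hex(), signature)
```

In this sketch, altering any recorded event changes its leaf hash and therefore the Merkle root, so a previously signed or anchored root would no longer verify.¶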
The distinction between attestation and audit trails can be understood as a separation of concerns:¶
| Aspect | Attestation (RATS) | Audit Trails |
|---|---|---|
| Primary Question | Is this trustworthy? | What happened? |
| Focus | System state | System behavior |
| Temporal Scope | Point-in-time or measurement period | Historical record |
| Primary Consumer | Relying Party | Auditor, Regulator |
| Trust Question | Should I interact? | Did it behave correctly? |
This separation suggests that attestation and audit trails could serve as complementary layers rather than alternatives. A system could potentially be subject to both attestation (verifying its trustworthy state) and audit logging (recording its subsequent behavior), with neither capability substituting for the other.¶
When viewed together, attestation and audit trails could address a more complete accountability picture:¶
Neither layer fully substitutes for the other:¶
The combination could potentially provide stronger accountability than either mechanism alone.¶
Attestation and audit trails may operate on different temporal rhythms:¶
A conceptual integration could involve attestation confirming system integrity at key moments, while audit trails continuously record behavior between those moments. Each mechanism on its own leaves temporal gaps, but the gaps left by one are precisely those the other can cover; together they could provide comprehensive temporal coverage.¶
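One way to picture this temporal complementarity is a behavioral log that is hash-chained continuously, with occasional "attestation checkpoint" entries that embed a digest of the most recent Attestation Result. The sketch below is hypothetical; neither the checkpoint entry type nor its fields are defined by RATS, by VCP, or by this document.¶

```python
# Hypothetical sketch: a hash-chained behavioral log with periodic
# attestation checkpoints. The entry fields and the checkpoint concept are
# illustrative assumptions, not defined by RATS or this document.
import hashlib, json, time

class ChainedLog:
    def __init__(self):
        self.entries = []
        self.prev_hash = hashlib.sha256(b"genesis").hexdigest()

    def _append(self, entry: dict) -> dict:
        entry["prev_hash"] = entry.get("prev_hash", self.prev_hash)
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.prev_hash = entry["entry_hash"]
        self.entries.append(entry)
        return entry

    def record_behavior(self, action: str, detail: dict) -> dict:
        """Continuous behavioral recording between attestation checkpoints."""
        return self._append({"type": "behavior", "ts": time.time(),
                             "action": action, "detail": detail})

    def record_attestation_checkpoint(self, result_digest: str) -> dict:
        """Point-in-time checkpoint embedding a digest of an Attestation Result."""
        return self._append({"type": "attestation_checkpoint", "ts": time.time(),
                             "attestation_result_sha256": result_digest})

log = ChainedLog()
log.record_attestation_checkpoint(hashlib.sha256(b"result@boot").hexdigest())
log.record_behavior("order.submit", {"qty": 100})
log.record_behavior("order.cancel", {"qty": 100})
log.record_attestation_checkpoint(hashlib.sha256(b"result@rotation").hexdigest())
```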
Both attestation and audit trails ultimately rely on trust anchors, though potentially different ones:¶
In some deployment scenarios, these trust anchors could overlap or share infrastructure. For example, a hardware security module used as an attestation root of trust could potentially also sign audit trail records. However, this document does not prescribe any particular trust anchor arrangement.¶
This section provides a purely illustrative, non-normative example of how attestation and audit trails could conceptually complement each other in a hypothetical deployment. This example does not define any protocol or establish any requirements.¶
Consider a hypothetical automated trading system:¶
This example is purely conceptual and illustrative. Actual deployments would involve specific technical decisions not addressed in this document.¶
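To give one concrete flavor of the after-the-fact verification such a deployment might perform, the hypothetical sketch below shows an auditor re-verifying a hash-chained behavioral log of the kind sketched earlier; it defines no protocol or format and simply recomputes and compares hashes.¶

```python
# Hypothetical sketch of after-the-fact verification by an auditor.
# It re-checks the hash chain produced by the ChainedLog sketch above;
# it is illustrative only and defines no protocol or record format.
import hashlib, json

def verify_chain(entries: list[dict]) -> bool:
    """Recompute every entry hash and check each back-link to its predecessor."""
    prev_hash = hashlib.sha256(b"genesis").hexdigest()
    for entry in entries:
        if entry["prev_hash"] != prev_hash:
            return False                       # chain broken: reordering or deletion
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False                       # entry contents were modified
        prev_hash = entry["entry_hash"]
    return True

# Continuing the earlier sketch: verify_chain(log.entries) returns True for an
# unmodified log and False if any recorded trade event were altered afterwards.
```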
To maintain clarity about this document's limited scope, the following items are explicitly out of scope and are NOT addressed:¶
This document does NOT:¶
The purpose of this document is solely to observe and explain the conceptual relationship between attestation and audit trails as distinct but potentially complementary verification layers. Any technical integration or standardization work would require separate documents with appropriate community review.¶
This document is purely informational and does not define any protocols or mechanisms. Therefore, it does not introduce new security considerations beyond those already present in the referenced specifications.¶
The following observations may be relevant to deployments that consider both attestation and audit trails:¶
Cryptographically verifiable audit trails face their own security considerations, including:¶
If both attestation and audit trails are deployed together, architects could consider:¶
Attestation and behavioral evidence recording operate at different temporal granularities, creating distinct trust validation points:¶
The combination of both mechanisms could provide defense-in-depth, but architects should consider the trust assumptions and failure modes of each layer independently.¶
Deployments using both attestation and behavioral evidence recording could benefit from separating cryptographic keys used for each purpose. Attestation keys (bound to hardware roots of trust) and behavioral evidence signing keys (potentially in separate security domains) may have different lifecycle, rotation, and compromise-response requirements.¶
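As a hedged illustration of such key separation, the sketch below keeps attestation signing and behavioral-evidence signing in distinct key domains with independent rotation metadata. The class, policy fields, and HMAC-based placeholder signatures are assumptions for illustration only and are not drawn from any specification.¶

```python
# Hypothetical sketch of key separation between attestation and behavioral
# evidence signing. Key generation, the policy fields, and the HMAC-based
# "signing" are placeholders; real deployments would use hardware-backed
# asymmetric keys with their own lifecycle management.
import hashlib, hmac, secrets
from dataclasses import dataclass, field

@dataclass
class SigningDomain:
    purpose: str                     # "attestation" or "behavioral-evidence"
    rotation_days: int               # independent rotation policy per domain
    key: bytes = field(default_factory=lambda: secrets.token_bytes(32))

    def sign(self, message: bytes) -> str:
        return hmac.new(self.key, message, hashlib.sha256).hexdigest()

    def rotate(self) -> None:
        """Compromise response or scheduled rotation affects only this domain."""
        self.key = secrets.token_bytes(32)

# Separate domains: compromising one key does not expose the other.
attestation_keys = SigningDomain("attestation", rotation_days=365)
evidence_keys = SigningDomain("behavioral-evidence", rotation_days=30)

sig_a = attestation_keys.sign(b"attestation evidence bytes")
sig_b = evidence_keys.sign(b"behavioral evidence record bytes")
```

Keeping the domains separate means a compromise or rotation of the behavioral-evidence key has no effect on the attestation trust chain, and vice versa.¶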
This document does not mandate any particular approach to these considerations, which would depend on specific deployment contexts and threat models.¶
This document does not alter the RATS threat model and introduces no new attack surfaces beyond those already considered by existing attestation and logging mechanisms.¶
This document has no IANA actions.¶
The author thanks the RATS Working Group for developing the comprehensive attestation architecture that enables discussions of complementary verification layers. This document builds upon and respects the careful design work reflected in the RATS architecture.¶