COMPOSITEAPPS
Active AI-RMF · Army OT · Five Controls
Follow-on brief
Prepared for G-6 · target-architecture & sensor-tracking follow-on

The bridge from policy to operations.

Army RMF policy specifies what must be true about an AI system. Operations specifies what the commander has to do at mission speed. Today those are two different tracks. This brief lays out the five controls that make them one.

The Active AI-RMF runtime transforms risk from a static pre-deployment gate into live, continuous telemetry — allowing the Army to field new AI capabilities within the 30-day mandate while keeping the evidence trail complete and verifiable. Below: each of the five controls, what it does, what it produces, and how it maps to NIST 800-53, NCDSMO, and Army OT authority frameworks.

Active AI-RMF · 30-day field mandate · Continuous authorization · Running · VA National ATO · FedRAMP High
Policy side
What RMF asks.
Behavior within authorized envelope. Provenance traceable to source. Decisions defensible. Counter-tested inspection. Capability mintable, scopable, revocable.
The bridge
Five active controls.
Behavior · Provenance · Glass-Box Output · Adversarial Triad · Agent Fleet. Each one generates its own evidence continuously. The ledger is the authorization package.
Operations side
What the commander does.
Signs determinations. Accepts risk on the record. Scales authority at AI speed. ATO stays intact. Mission stays on track.
The five controls

Each control is a policy ask met by an operational artifact.

Each of the five controls below satisfies a specific policy requirement and produces a specific evidence artifact the Authorizing Official, commander, and Inspector General can each reference. Controls are architecturally mechanical — not policy wrappers around existing tools. When a control fires, it leaves a cryptographic trace.

01
Behavior
Continuous envelope monitoring
Artifact · Behavior signature log
Policy asks
AI must operate within its authorized scope. No drift from baseline. No silent capability expansion. No out-of-policy output classes.
Operations needs
Immediate flag when an agent operates outside its sanctioned behavior envelope — before the deviation compounds into a compliance failure or an operational incident.
Composite provides
Per-inference behavioral fingerprint. Baseline envelope per agent class. Anomaly score computed and appended to every determination. Auto-gate on out-of-envelope; notification to ISSM and AO on threshold breach.
What is produced
Behavior signature log — append-only, per-agent, per-inference. Replayable. Hash-anchored. Reconstructible years later to establish what the agent was doing at any inference point in time.
Maps to NIST SP 800-53 AU-6 (Audit Review), SI-4 (Information System Monitoring), SI-7 (Software/Firmware Integrity) · DoDI 8500.01 behavioral-baseline monitoring · Army AR 25-2 continuous-monitoring requirement
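The mechanics above — per-inference fingerprint, anomaly score, hash-anchored append-only log, auto-gate on threshold breach — can be sketched in a few lines. This is an illustrative toy, not Composite's implementation; the class name, the fingerprint format, and the 0.8 threshold are all assumptions.

```python
import hashlib
import json
import time

ANOMALY_THRESHOLD = 0.8  # assumed gating threshold, not from the brief


class BehaviorLog:
    """Hypothetical append-only, hash-chained behavior signature log."""

    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis anchor

    def append(self, agent_id, fingerprint, anomaly_score):
        entry = {
            "agent": agent_id,
            "fingerprint": fingerprint,  # per-inference behavioral signature
            "anomaly": anomaly_score,
            "ts": time.time(),
            "prev": self.head,           # chain to the prior entry
        }
        # Hash the entry before attaching its own digest, then advance the head.
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self.head
        self.entries.append(entry)
        # Auto-gate: out-of-envelope inferences are blocked and escalated.
        return anomaly_score >= ANOMALY_THRESHOLD


log = BehaviorLog()
assert log.append("agent-7", "sig-a1", 0.12) is False  # in envelope
assert log.append("agent-7", "sig-zz", 0.93) is True   # gated; notify ISSM/AO
```

Because each entry carries the hash of its predecessor, the log can be replayed years later and any tampering breaks the chain at the altered entry.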
02
Provenance
Cryptographic chain of custody · sensor-to-decision
Artifact · Provenance Ledger
Policy asks
Every input, every decision, every output must be traceable to its source with non-repudiable evidence. No dark pipes. No silent data.
Operations needs
Replayable timeline for every sensor feed, model inference, and downstream decision — to answer “where did this come from?” in minutes, not days.
Composite provides
Every input hashed on ingress. Every model inference tagged with weight hash, policy version, timestamp, and upstream chain. Every output sealed and anchored to time. Ledger is append-only, cryptographically chained, and readable without the runtime present.
What is produced
Provenance Ledger — a replayable chain from sensor pixel (or packet) to signed determination to downstream consumer. This is the direct answer to Earl’s network-sensor-tracking question.
Maps to NIST SP 800-53 AU-10 (Non-repudiation), AU-11 (Audit Record Retention), SA-12 (Supply Chain Protection), SC-16 (Transmission of Security Attributes) · NDP-1 §5.2.3 provenance requirements · DoDI 8582.01 data-management lineage
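A minimal sketch of the sensor-to-decision chain described above: inputs hashed on ingress, inferences tagged with weight hash and policy version, outputs chained to their upstream entries. The record schema and field names are assumptions for illustration, not Composite's ledger format.

```python
import hashlib
import json
import time


def h(data) -> str:
    """Content hash of any JSON-serializable object."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()


ledger = []  # append-only; readable without the runtime present


def record(kind, body, upstream=None):
    entry = {
        "kind": kind,
        "body": body,
        "upstream": upstream,  # hash of the entry this one derives from
        "ts": time.time(),
        "prev": ledger[-1]["hash"] if ledger else None,
    }
    entry["hash"] = h(entry)  # computed before the hash field exists
    ledger.append(entry)
    return entry["hash"]


# Sensor packet -> model inference -> signed determination
ingress = record("ingress", {"sensor": "net-tap-3", "payload_sha256": h("packet-bytes")})
infer = record("inference", {"weight_hash": "w-abc", "policy_version": "v4"}, upstream=ingress)
out = record("determination", {"decision": "release"}, upstream=infer)

# Replay: every entry chains to its predecessor
for prev, cur in zip(ledger, ledger[1:]):
    assert cur["prev"] == prev["hash"]
```

The `upstream` field is what makes the “where did this come from?” question answerable in minutes: walking it backward from any determination reaches the original ingress hash.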
03
Glass-Box Output
IG-defensible determination · sensitivity · context · authority
Artifact · Glass-Box determination
Policy asks
AI decisions must be explainable, defensible, and reconstructible for the Authorizing Official, the Inspector General, and oversight hearings — years after the operation.
Operations needs
A human reviewer can read the reasoning in plain language, verify the citations, and challenge the determination without reverse-engineering a model.
Composite provides
Every determination decomposes into three human-readable lenses: sensitivity (what classification / protection posture applies), context (what operational frame and precedent), authority (whose signature is required). Confidence score. Citation to policy corpus. Full disputation transcript attached.
What is produced
Glass-Box determination card — the human-signable, human-defensible artifact. See our cross-domain release and risk-acceptance examples for the visual treatment. Every signed Composite decision is one of these.
Maps to NIST SP 800-53 CA-7 (Continuous Monitoring), PM-7 (Enterprise Architecture) · DoD AI ethical principles: traceable, reliable · EO 14110 §4.5 transparency-for-deployed-AI requirements
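The three-lens decomposition above implies a concrete record shape. A hypothetical sketch follows — the field names, the `signable` rule, and the 0.7 threshold are illustrative assumptions, not the actual determination-card schema.

```python
from dataclasses import dataclass, field


@dataclass
class GlassBoxDetermination:
    """Hypothetical shape of a Glass-Box determination card."""

    sensitivity: str   # lens 1: classification / protection posture
    context: str       # lens 2: operational frame and precedent
    authority: str     # lens 3: whose signature is required
    confidence: float  # 0.0 - 1.0
    citations: list = field(default_factory=list)  # policy-corpus references
    transcript_ref: str = ""  # attached disputation transcript

    def signable(self, threshold=0.7):
        # A reviewer challenges any lens directly; no model reverse-engineering.
        return self.confidence >= threshold and bool(self.citations)


card = GlassBoxDetermination(
    sensitivity="CUI",
    context="routine cross-domain release",
    authority="O-6 Authorizing Official",
    confidence=0.91,
    citations=["NDP-1 §5.2.3"],
    transcript_ref="transcript-0042",
)
assert card.signable()
```

Keeping the three lenses as plain fields, rather than model internals, is what makes the card human-signable: each lens is a claim a reviewer can read and dispute on its own.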
04
Adversarial Triad
Counter-tested by construction · Advocate / Defender / Arbiter
Artifact · Disputation transcript
Policy asks
AI decisions must be counter-tested — examined by a mechanism whose design prevents it from finding what it wants to find. NCDSMO’s raise-the-bar makes this explicit.
Operations needs
Assurance that the system didn’t just confirm its own bias. A single model can’t counter-test itself. An ensemble voting yes together can’t either.
Composite provides
Two opposing agents architecturally separated — no shared process, no shared context, no counterparty reflection. A third Arbiter reasons from artifacts only. Disputation is preserved and attached to the determination. Counter-testing is a property of the architecture, not a procedure.
What is produced
Disputation transcript — both agents' briefs, their counter-arguments, their cited policy references, and the Arbiter’s reasoning. Attached to every Glass-Box output. Reproducible from artifacts by any competent reviewer.
Maps to NCDSMO raise-the-bar (counter-tested content inspection) · NIST SP 800-53 SA-15 (Development Process), SA-11 (Developer Security Testing) · DoD AI ethical principle: robust
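The architectural separation can be shown as a control flow: two independent functions that share no state, and an Arbiter that sees only their written briefs and the evidence artifacts. The briefs and ruling logic here are placeholders to show the shape of the flow, not real disputation content.

```python
# Each role sees only its own inputs; no shared process, no shared context.
def advocate(artifacts):
    """Argues the determination is correct."""
    return {"role": "advocate", "brief": "inputs within envelope", "cites": ["AU-10"]}


def defender(artifacts):
    """Argues against, independently; never sees the advocate's reasoning."""
    return {"role": "defender", "brief": "upstream feed unattested", "cites": ["SA-12"]}


def arbiter(brief_a, brief_b, artifacts):
    """Reasons from the written briefs and artifacts only, then preserves both."""
    ruling = "remand" if "unattested" in brief_b["brief"] else "sustain"
    return {"ruling": ruling, "briefs": [brief_a, brief_b]}


artifacts = {"ledger_head": "abc123"}
transcript = arbiter(advocate(artifacts), defender(artifacts), artifacts)
assert transcript["ruling"] == "remand"  # the counter-argument prevailed
assert len(transcript["briefs"]) == 2    # both briefs preserved in the transcript
```

The point the brief makes is structural: because the Arbiter's inputs are artifacts rather than a shared context, the triad cannot "find what it wants to find" — any reviewer with the same artifacts can reproduce the ruling.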
05
Agent Fleet
Mint · scope · revoke at AI speed
Artifact · Agent attestation record
Policy asks
AI capabilities must be mintable, scopable, and revocable on the commander’s authority. No forever-agents. No un-scoped deployments. Kill-switch per agent, per mission.
Operations needs
Commanders authorize specific agents for specific missions, for specific durations, and revoke them instantly when scope changes. Changes take effect on the next inference, not the next deployment.
Composite provides
Library of mission-shaped agents. Each deployment is an attestation: who (authorizer), what (agent identity + version), when (validity window), scope (boundary, inputs, output classes), revocation conditions. Control-plane UI for the governance officer. Every action logged to the ledger.
What is produced
Agent attestation record — signed authorization with scope and revocation conditions. Revocation creates an immutable end-of-life marker. Authority flows from the commander, enforced by the runtime.
Maps to NIST SP 800-53 AC-2 (Account Management), AC-3 (Access Enforcement), AC-24 (Access Control Decisions), CM-3 (Configuration Change Control) · Zero Trust reference architecture: user & workload attestation
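Mint, scope, and revoke reduce to a small state machine: an attestation record with a validity window and scope, checked on every inference. A toy sketch under assumed field names — the real control plane would sign these records and write every action to the ledger.

```python
import time

attestations = {}  # agent_id -> attestation record (control-plane state)


def mint(agent_id, authorizer, not_before, not_after, scope):
    """Who, what, when, and scope — the attestation described in the brief."""
    attestations[agent_id] = {
        "authorizer": authorizer,
        "nbf": not_before,
        "naf": not_after,
        "scope": scope,          # permitted input classes
        "revoked_at": None,
    }


def revoke(agent_id, ts=None):
    """Immutable end-of-life marker; takes effect on the next inference."""
    attestations[agent_id]["revoked_at"] = ts if ts is not None else time.time()


def authorized(agent_id, now, input_class):
    """Checked per inference: no forever-agents, no un-scoped deployments."""
    a = attestations.get(agent_id)
    return (
        a is not None
        and a["revoked_at"] is None
        and a["nbf"] <= now <= a["naf"]
        and input_class in a["scope"]
    )


mint("recon-12", "CDR 3BCT", not_before=100, not_after=200, scope={"imagery"})
assert authorized("recon-12", 150, "imagery")
assert not authorized("recon-12", 150, "sigint")    # outside scope
assert not authorized("recon-12", 250, "imagery")   # validity window expired
revoke("recon-12")
assert not authorized("recon-12", 150, "imagery")   # revoked; next inference fails
```

Because the check runs per inference rather than per deployment, a scope change or revocation needs no redeploy — exactly the "next inference, not next deployment" property the brief claims.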
Continuous authorization · three views

The same ledger. Three consumers.

The five controls produce one evidence substrate. Three different roles read it differently, continuously, without waiting for an annual cycle. This is the operational meaning of Continuous Authorization.

Authorizing Official

“Is my confidence in this system still justified?”

The AO sees a live posture dashboard, not a paper package. Confidence is updated on every determination. When it drops below threshold, the system proactively flags the AO; when it rises on remediation, the ledger reflects it the same way.
  • Dashboard: posture · confidence trend · control state
  • Signals: anomaly rate · override rate · disagreement drift
  • Cadence: continuous · no annual reassessment required
Commander

Has what they need to assume risk on the record.

Commander receives the Glass-Box determination with compensating controls, the three-agent transcript, and the mitigation milestone. Signs risk acceptance. ATO stays intact. Mission stays on track.
  • Artifact: risk-acceptance determination (see /risk-acceptance.html)
  • eMASS: POA&M updated under signed acceptance
  • Authority: kinetic authority unchanged · data authority bounded
Inspector General

Can reconstruct any decision, any time.

IG pulls any determination from any point in time. Replays the provenance chain. Reads the disputation transcript. Verifies the authority chain. The record outlives the operation, the administration, and the system.
  • Access: append-only ledger · read without runtime
  • Replay: from input hash to signed output
  • Authority chain: every signer, every concurrence preserved
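"Read without runtime" means the IG's replay is a pure function of the exported entries: recompute each hash and walk the chain. A sketch, assuming the same hash-over-entry-minus-its-own-digest convention an append-only export might use; the schema here is illustrative.

```python
import hashlib
import json


def entry_hash(entry):
    """Recompute an entry's digest over everything except its own hash field."""
    body = {k: v for k, v in entry.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


def verify_ledger(entries):
    """Offline check: no runtime, no keys to the system — just the export."""
    prev = None
    for e in entries:
        if e["hash"] != entry_hash(e) or e["prev"] != prev:
            return False  # tampered entry, or broken chain
        prev = e["hash"]
    return True


# A two-entry export (schema assumed)
e1 = {"kind": "ingress", "prev": None}
e1["hash"] = entry_hash(e1)
e2 = {"kind": "determination", "prev": e1["hash"]}
e2["hash"] = entry_hash(e2)

assert verify_ledger([e1, e2])
e2["kind"] = "tampered"          # any edit invalidates the stored digest
assert not verify_ledger([e1, e2])
```

Nothing in `verify_ledger` depends on the system that wrote the entries, which is why the record can outlive the operation, the administration, and the runtime itself.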
Follow-on · Earl Allen & Long

What we map together in the next session.

Proposed scope for the joint working session

The five controls → Army target architecture → 30-day pilot scope.

The purpose of the follow-on is to take the five controls as a reference and map them concretely onto the Army’s target architecture — with particular emphasis on Earl’s sensor-tracking requirement. Composite owns the bridge. Earl’s team owns the sensor fabric and the target platforms. The mapping is where they meet.

Sensor inventory
Which network sensor types are in scope for the pilot. Ingress pattern per sensor. Cadence. Format. Boundary crossing points.
Provenance-ledger integration
How sensor outputs enter the ledger. Hash strategy. Upstream attribution. Replay access for Earl’s team.
Target platforms
Which OT / tactical systems are candidates for Active AI-RMF coverage. Accreditation boundary per platform. Existing eMASS packages.
Commander-authority mapping
Who holds risk-acceptance authority at each echelon. Delegation pattern. Re-assessment cadence. Revocation conditions.
Pilot workload
One designated first application. Scoped to a 30-day field window. Signed work statement as the exit of this session.
Success criteria
Three measurable outcomes — performance, agreement with senior reviewers, governance posture — agreed on paper before Week 0 begins.
Proof · not a concept
FedRAMP High · under VA National ATO
$250B adjudicated per year
200M claims annually · 400 systems
In production since 2025
Policy is one track. Operations is the other. The five controls are the crossover that lets the commander's train run on both.
Active AI-RMF · not a slide · a runtime