Notes · Sovereignty · 18 April 2026

Zero egress is a measurement, not a marketing line.

Most “sovereign AI” claims collapse the moment a security team puts a tap on the wire. Here is the three-question test, the architectural patterns that actually pass it, and why “we don’t store your data” is not the same answer.

In every AI procurement conversation I’ve sat through in the last two years, a variation of the phrase “your data never leaves your environment” gets said out loud, confidently, by someone in a blue blazer. The phrase is almost always followed, on close reading of the architecture diagram, by something equivalent to “…except for the part where we call the inference endpoint in our cloud.” The two statements are incompatible. Only one of them can be true at a time. Usually, neither is actually tested.

This is not a moral failing of vendors. It is a problem with the vocabulary. “Sovereign.” “In-boundary.” “Zero egress.” These words do a lot of rhetorical work and very little technical work. They sound like measurements. They are usually postures. The only way to tell the difference is to stop talking and start measuring. That is what this note is about.

What a vendor usually means.

When a vendor says “your data stays in your environment,” they are typically making one of three claims. It’s worth being explicit about each, because buyers routinely conflate them:

Claim A · “Our SaaS endpoint is in GovCloud.”

True and meaningful for FedRAMP boundary questions. Not the same as zero egress. Your data is leaving your network and arriving at their tenant, even if that tenant happens to share a FedRAMP High accreditation. From your AO’s perspective, you have crossed a boundary. An inspector general can ask you what controls exist at that other tenant, and the answer is “theirs, not ours.”

Claim B · “Data is encrypted in transit.”

Necessary, not sufficient, not the same claim. Encrypted data is still egressing. Intercepted in transit, it can’t be read; intercepted at the destination, where the key lives, it can. Encryption is table stakes for any data-handling vendor. It does not answer the question a network security officer is actually asking, which is whether bytes crossed the boundary at all.

Claim C · “We don’t store your data after processing.”

This is the most dangerous claim because it sounds like a privacy commitment and is actually a retention policy. Data was still processed in their environment. It was still visible to their infrastructure, their operators, their co-tenant neighbors under certain attack models. That the vendor has chosen to purge it afterwards does not change the fact that it left you. It just means the receipts are harder to subpoena.

None of these claims are wrong, necessarily. They are just not the claim that “zero egress” should describe. Zero egress, read strictly, is a statement about the count of bytes that crossed the boundary. That count is either zero or it isn’t. It is measurable. It is reproducible. It is either filed as evidence, or it is a story.

The three-question test.

The next time a vendor tells you their AI platform is “sovereign,” or “boundary-contained,” or any word in that family, have your security team ask exactly these three questions. Don’t let the conversation move on until each one has a concrete answer.

1 · “Can we packet-capture during a representative workload?”

This is the test. Mirror the port their runtime sits on. Run a realistic set of cases through. Capture every byte that crosses the boundary for some window of time. Then read it. If the vendor hesitates, or needs to coordinate a “special mode,” or proposes a separate test environment that isn’t representative, the answer is already: not zero egress. If the capture shows outbound DNS lookups to their domains, outbound TCP sessions to their endpoints, or anything resembling telemetry — same answer.
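The reading step can be sketched as a classification pass over flow records. A minimal sketch, assuming flows have already been extracted from the capture as (source, destination, byte count) tuples — the boundary networks and addresses below are illustrative, not from any particular deployment:

```python
import ipaddress

# Networks considered inside the boundary -- illustrative values;
# substitute your enclave's actual address space.
BOUNDARY_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def is_inside(addr: str) -> bool:
    """True if the address falls within the boundary networks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOUNDARY_NETS)

def egress_flows(flows):
    """Return every flow whose destination lies outside the boundary.

    Zero egress means this list is empty -- any entry is bytes that
    crossed the wire, whatever the vendor's slide says.
    """
    return [(src, dst, n) for src, dst, n in flows if not is_inside(dst)]

flows = [
    ("10.0.1.5", "10.0.2.9", 4_096),    # internal inference call -- fine
    ("10.0.1.5", "203.0.113.7", 812),   # outbound session -- fails the test
]
offenders = egress_flows(flows)
total = sum(n for _, _, n in offenders)
print(offenders)   # [('10.0.1.5', '203.0.113.7', 812)]
print(total)       # 812 bytes of egress, so: not zero egress
```

The count is the whole argument: it is either zero or it isn't, and either way it is reproducible from the capture file.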

2 · “Can we see your capture from the last accreditation?”

Vendors who actually run boundary-contained runtimes file packet captures and network baselines as evidence artifacts in their accreditation packages. If they can’t show you a redacted version of one from a prior deployment, you are almost certainly their first boundary-contained customer, which is fine to be — but it means the claim is aspirational, not historical.
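Filing a capture as an evidence artifact amounts to recording a content hash alongside minimal metadata, so anyone holding the bytes later can recompute and compare. A hedged sketch — the field names and label are hypothetical, not a prescribed accreditation schema:

```python
import datetime
import hashlib
import json

def file_evidence(capture_bytes: bytes, label: str) -> dict:
    """Record a capture as an evidence artifact: content hash plus
    minimal metadata. The hash is what makes the artifact defensible
    years later -- it ties the filed record to exact bytes."""
    return {
        "label": label,
        "sha256": hashlib.sha256(capture_bytes).hexdigest(),
        "size_bytes": len(capture_bytes),
        "filed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# The pcap magic number stands in for a real capture file's contents.
artifact = file_evidence(b"\xd4\xc3\xb2\xa1", "boundary-capture-accreditation")
print(json.dumps(artifact, indent=2))
```

A vendor who has done this before can hand you one of these records with the capture redacted; the hash still proves the artifact existed and has not changed.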

3 · “Where are the models hosted, and by what authority are they updated?”

The architecture pattern that actually achieves zero egress is an open-weight model, hosted on your hardware, updated on a cadence you control. If the runtime calls out to an inference endpoint, you are not running the model — you are borrowing it through a remote procedure call, and your data is the payload of that call. If model updates are pushed automatically by the vendor, you do not control the model. If updates are pushed over the internet, a piece of the vendor’s infrastructure is live inside your boundary. None of those are fatal, but all of them are real.
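"Updated on a cadence you control" has a concrete mechanical form: the runtime loads weights only if they match a digest manifest maintained inside your boundary. A minimal sketch, assuming a hypothetical weight file and manifest — the paths and names are illustrative:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_weights(path: Path, manifest: dict) -> bool:
    """Check a local weight file against a pinned digest before loading.

    `manifest` maps filename -> expected sha256 and lives inside the
    boundary. An update takes effect only when you change the manifest --
    that is what 'updated by your authority' means in practice."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest.get(path.name) == digest

# Illustrative stand-in for an open-weight model file on your hardware.
weights = Path(tempfile.mkdtemp()) / "model.bin"
weights.write_bytes(b"open-weight model bytes")
manifest = {"model.bin": hashlib.sha256(b"open-weight model bytes").hexdigest()}

ok = verify_weights(weights, manifest)                    # pinned digest matches
tampered = verify_weights(weights, {"model.bin": "0" * 64})  # wrong digest: refuse
print(ok, tampered)   # True False
```

Nothing in that path requires an internet connection, which is the point: the update authority is whoever edits the manifest, and that should be you.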

What architecturally has to be true.

A runtime that passes the three-question test looks like this: open-weight models hosted on hardware inside your boundary, so inference never leaves. No outbound DNS, no outbound TCP sessions, no telemetry of any kind during operation. Updates imported on your cadence, by your authority, not pushed over a live channel from the vendor. Licensing that does not phone home. And every inference observable on your own equipment, so the evidence is yours to file.

This is unglamorous engineering. None of it is novel. All of it is an architectural commitment: the runtime has to be built for this posture from the beginning, because every “convenient” feature — auto-updates, usage analytics, error reporting, licensing check-in — is a small, polite hole in the boundary claim.
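Each of those convenient features maps to an outbound channel, which suggests a startup check: refuse to run unless every one of them is off and no outbound endpoint is configured. A sketch under assumed config keys — the flag names are hypothetical, chosen to mirror the list above:

```python
# Feature flags that punch holes in the boundary claim. The key names
# are illustrative -- the point is that each one is an outbound channel.
HOLE_FEATURES = ("auto_update", "usage_analytics", "error_reporting", "license_checkin")

def boundary_violations(config: dict) -> list:
    """Return the enabled features that would cause egress.
    A boundary-contained runtime refuses to start unless this is empty."""
    violations = [f for f in HOLE_FEATURES if config.get(f, False)]
    if config.get("outbound_endpoints"):
        violations.append("outbound_endpoints")
    return violations

# Typical shipped defaults vs. a hardened, boundary-contained config.
shipped_defaults = {"auto_update": True, "usage_analytics": True,
                    "outbound_endpoints": ["https://telemetry.vendor.example"]}
hardened = {**{f: False for f in HOLE_FEATURES}, "outbound_endpoints": []}

noisy = boundary_violations(shipped_defaults)
clean = boundary_violations(hardened)
print(noisy)   # ['auto_update', 'usage_analytics', 'outbound_endpoints']
print(clean)   # [] -- the boundary claim can be literal
```

The check is trivial; the commitment is building a product whose hardened configuration still works, with no licensing or update path that quietly needs the holes back.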

Why this matters beyond the checkbox.

The reason this is not a pedantic argument is that the boundary is doing work other than data-privacy work. It is the precondition for the evidence posture that makes AI governance enforceable at all (see the previous note). If your runtime is calling out to someone else’s inference endpoint, the evidence of what that endpoint did is not yours to file. It’s theirs. You can ask them for it; they’ll send you a log they chose to keep. You cannot reconstruct, at the bit level, what model answered your prompt, with what weights, under what policy, because none of that state lived in your boundary. The audit trail is missing its load-bearing segment.

A boundary-contained runtime is the opposite. Every inference is observable — by you, on your equipment — and the evidence chain is intact end to end. That is the difference between saying “we have an AI governance program” and being able to defend a determination two years later to an inspector general who wants to know what, exactly, the machine said and why.


A closing note on “almost.”

The most honest vendor answer to the three-question test is often “almost.” There’s usually one or two pieces — a license check-in, a metrics beacon, an update-notification channel — that the product team included for perfectly sensible engineering reasons and that the marketing team forgot to mention in the sovereign-AI slide. “Almost” is the right place to start the conversation. It tells you exactly what has to be removed or changed for the boundary claim to be literal. If the vendor is willing to do that work, you may have a path forward. If they aren’t, you have clarity about what you actually signed up for.

Zero egress is measurable. It should be measured. Ask for the capture.


If you want to see what a boundary-contained runtime looks like in operation, the Active Review demo runs a full cross-domain case end to end with zero bytes leaving the console’s simulated boundary. The engagement page describes how we verify the boundary during Phase 02 of activation — packet capture filed as evidence before Phase 03 begins.