AlphaBitCore
Trust

Security, compliance, and how we operate.

AlphaBitCore is built for firms whose AI has regulatory and audit exposure from day one. This page covers how we handle security posture, compliance, data residency, and technical evaluation — and what lives in private briefings versus on the public site.

Security program posture

How the platform is built, attested, and disclosed.

Application and runtime security

A cryptographically sealed execution surface is the core of the product: authority-separated architecture, policy-as-code at the Gateway, and an append-only, Merkle-sealed event stream. Full architectural posture is covered in the technical review.

Compliance

SOC 2 program in progress. Audit roadmap, external pen-test results, and third-party attestations are shared during evaluation and in private briefings. Request the security briefing for specifics.

Responsible disclosure

Report a vulnerability to security@alphabitcore.com. We will acknowledge within 2 business days and coordinate remediation. security.txt available at /.well-known/security.txt.
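For reference, a minimal security.txt in the RFC 9116 shape looks like the following (the Expires date is illustrative, not our actual published value):

```
Contact: mailto:security@alphabitcore.com
Expires: 2026-12-31T23:59:59Z
```

RFC 9116 requires both fields; Expires tells researchers when to re-check the file for updated contact details.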

Data, tenancy, and residency

How your data is handled, where, and under which deployment.

Deployment

Three deployment options supported: hosted, customer VPC, and on-premise. Model is selected per engagement based on data sensitivity, regulator expectations, and latency requirements. Details covered in architecture review.

Data residency

Residency and processing region options are specified per deployment. Covered in architecture review.

Tenancy and isolation

Single-tenant and multi-tenant patterns both supported depending on deployment model. Firm isolation boundaries are shared under NDA during evaluation.

Threat model

Six named threat classes the runtime is built to refuse.

The adversary (a probabilistic polynomial-time attacker that controls the planner, observes the network, attempts direct tool invocation, and attempts to fabricate or mutate traces) is defined in §4 of the paper. Theorem 2 bounds its advantage by cryptographic terms (signature forgery, hash collision) plus three quantified deployment-failure terms.
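Under the assumptions listed further down, that bound has the following shape. The symbol names here are illustrative shorthand; the paper's exact statement governs:

```latex
\mathrm{Adv}_{\mathcal{A}}
  \;\le\;
  \mathrm{Adv}^{\mathrm{EUF\text{-}CMA}}_{\mathrm{sig}}
  + \mathrm{Adv}^{\mathrm{coll}}_{H}
  + \varepsilon_{\mathrm{tc}}
  + \varepsilon_{\mathrm{dep}}
  + \varepsilon_{\mathrm{clock}}
```

The first two terms are standard cryptographic advantages (signature forgery, hash collision); the three ε terms are the quantified deployment-failure probabilities attached to assumptions A5–A7 below.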

T1

Unauthorized Execution

An effectful action without valid authorization. Refused structurally: effectful events require a signed Gateway allow under a fresh contract.
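The refusal in T1 can be sketched as a check the Effector runs before any effect. This is a toy sketch only: HMAC-SHA256 stands in for the Gateway's actual signature scheme, and every name here (`issue_allow`, `verify_allow`, the contract fields, the TTL) is illustrative, not the product API.

```python
import hashlib
import hmac
import json

# Illustrative stand-ins; the real Gateway uses an asymmetric signature scheme.
GATEWAY_KEY = b"demo-gateway-signing-key"
CONTRACT_TTL = 300  # assumed freshness window, in seconds

def issue_allow(action: str, contract_id: str, now: float) -> dict:
    """Gateway signs an allow for one action under a fresh contract."""
    payload = {"action": action, "contract_id": contract_id, "issued_at": now}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(GATEWAY_KEY, body, hashlib.sha256).hexdigest()
    return {**payload, "sig": sig}

def verify_allow(allow: dict, action: str, now: float) -> bool:
    """Effector refuses any effect lacking a valid, fresh, matching allow."""
    payload = {k: allow[k] for k in ("action", "contract_id", "issued_at")}
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(GATEWAY_KEY, body, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(allow["sig"], expected)
    fresh = now - allow["issued_at"] <= CONTRACT_TTL
    return sig_ok and fresh and allow["action"] == action

allow = issue_allow("db.write", "c-42", now=1000.0)
assert verify_allow(allow, "db.write", now=1100.0)       # valid and fresh
assert not verify_allow(allow, "db.delete", now=1100.0)  # wrong action
assert not verify_allow(allow, "db.write", now=2000.0)   # stale contract
```

The point of the sketch is the shape of the refusal: no valid, fresh, action-matching allow, no effect.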

T2

Gateway Bypass

A persistent mutation outside the authoritative enforcement path. Refused: the Effector is the sole holder of durable-mutation credentials and must emit a recorder-sealed event before commit.

T3

Deny-With-Effect

A denied branch still produces durable state. Refused: denied-event descendants carry null effect type and null delta; any violation fails invariant I3.

T4

Trace Mutation or Fabrication

Events altered, deleted, reordered, or fabricated while preserving apparent validity. Refused: hash-linked, Merkle-sealed event stream signed under recorder key; tamper is detectable.
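The tamper-detection property in T4 comes from hash-linking alone, before the Merkle sealing and recorder signature are even layered on. A minimal sketch (field names illustrative; sealing and signing omitted):

```python
import hashlib
import json

GENESIS = "0" * 64  # illustrative chain anchor

def link(events):
    """Hash-link events: each record commits to its predecessor's hash."""
    chain, prev = [], GENESIS
    for ev in events:
        body = json.dumps({"prev": prev, "event": ev}, sort_keys=True).encode()
        h = hashlib.sha256(body).hexdigest()
        chain.append({"event": ev, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every link; alteration, deletion, or reordering breaks it."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps({"prev": prev, "event": rec["event"]},
                          sort_keys=True).encode()
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = link([{"op": "allow"}, {"op": "effect"}])
assert verify(chain)
chain[0]["event"]["op"] = "deny"  # rewrite history
assert not verify(chain)
```

Because each hash covers the predecessor's hash, editing any event invalidates every record after it; the recorder signature over the sealed root then makes wholesale re-linking detectable too.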

T5

Replay Evasion

A trace appears valid but cannot be replayed under declared context. Refused: the envelope-closure invariant requires every declared input to be captured or explicitly placed outside the proof boundary.
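The envelope-closure check behind T5 reduces to a set condition: a trace is replayable only when every declared input is either captured in the envelope or explicitly placed outside the proof boundary. A sketch, with all names illustrative:

```python
def envelope_gap(declared, captured, out_of_boundary):
    """Declared inputs that are neither captured nor explicitly excluded.

    A non-empty result means the envelope does not close and replay
    must be refused rather than attempted.
    """
    return set(declared) - set(captured) - set(out_of_boundary)

declared = {"model_version", "tool_schema", "wall_clock"}

# Envelope closes: every declared input is accounted for.
gap = envelope_gap(declared,
                   captured={"model_version", "tool_schema"},
                   out_of_boundary={"wall_clock"})
assert not gap

# Envelope does not close: replay would silently depend on unrecorded state.
gap = envelope_gap(declared,
                   captured={"model_version"},
                   out_of_boundary=set())
assert gap == {"tool_schema", "wall_clock"}
```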

T6

Credential Escape

A credential leak (sealed service account, hardware-attested key) lets a non-Effector commit a persistent mutation outside the declared boundary. Not refused by the invariants alone; enters the bound of Theorem 2 through the quantified trace-completeness term.

Trust assumptions

Seven assumptions, two classes.

Rhodes & Kang (2026) name the assumptions Theorem 2 rests on. Four are cryptographic: if they break, published cryptography is broken. Three are deployment assumptions, each named and carrying a quantified failure term. That split is what carries the paper's rigor.

Cryptographic · A1–A4
  • A1

    Signature unforgeability (EUF-CMA)

    Gateway and Recorder signature schemes are existentially unforgeable under chosen-message attack.

  • A2

    Collision resistance

    The hash family used for hash-linking and Merkle sealing is collision-resistant.

  • A3

    Single-logical-Gateway integrity

    The Gateway signing root is uncompromised. Theorem 2 is stated for a single logical Gateway; threshold-Gateway deployments are future work.

  • A4

    Recorder integrity

    The Recorder signing key and sealing root are in the trusted computing base. Recorder-assigned commit sequence and time are monotone.

Deployment · A5–A7 · with quantified failure terms
  • A5

    Trace completeness

    Every persistent mutation or external effect inside the declared PoE boundary is observed and sealed before commit. Failure probability enters Theorem 2 as ε_tc. Derived from Effector-exclusive credentialing (Lemma 1).

  • A6

    Dependency-declaration completeness

    Each transition declares its complete dependency set. An undeclared, uncaptured input occurs with probability ε_dep.

  • A7

    Recorder-clock monotonicity

    Recorder time is monotone within a trace; contract-validity windows are evaluated against it, not wall-clock. Failure probability ε_clock.

Trace completeness is derived, not assumed

Lemma 1: the Effector is the sole mutator, and every mutation is sealed before commit.

A5 (trace completeness) isn’t a free assumption. PEM proves it from four structural properties of the Effector: (1) persistent mutations inside the PoE boundary can only be committed with Effector credentials; (2) the Effector must emit a recorder-sealed event and obtain an acknowledgment before committing; (3) non-Effector credentials cannot commit persistent mutations inside the declared boundary; (4) the Recorder signing root is uncompromised (A4). Any credential leak violates (1) or (3) and enters Theorem 2’s bound through T6 and ε_tc.
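The commit discipline behind Lemma 1 can be sketched as a store that refuses non-Effector credentials and refuses to mutate until a sealed acknowledgment exists. Class and field names are illustrative, and a dict stands in for signed, sealed records:

```python
class Recorder:
    """Stand-in recorder: seals events and hands back an acknowledgment."""
    def __init__(self):
        self.sealed = []

    def seal(self, event):
        self.sealed.append(event)
        return {"ack": len(self.sealed)}  # stand-in for a signed ack


class Store:
    """Durable state whose only mutation path is the Effector credential."""
    def __init__(self, recorder):
        self.recorder = recorder
        self.state = {}

    def commit(self, credential, key, value):
        if credential != "effector":
            # Property (3): non-Effector credentials cannot commit.
            raise PermissionError("only the Effector may commit")
        # Property (2): seal-then-commit, never commit-then-seal.
        ack = self.recorder.seal({"key": key, "value": value})
        if not ack:
            raise RuntimeError("no sealed ack; refusing to commit")
        self.state[key] = value


store = Store(Recorder())
store.commit("effector", "balance", 100)
assert store.state == {"balance": 100}
assert store.recorder.sealed[0]["key"] == "balance"  # sealed before commit

try:
    store.commit("planner", "balance", 0)  # a compromised planner is refused
except PermissionError:
    pass
assert store.state == {"balance": 100}  # state unchanged
```

A leaked Effector credential is exactly what this sketch cannot defend against, which is why that case lands in T6 and the ε_tc term rather than in the structural guarantee.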

Out of scope

What PoE does not cover.

  • Total compromise of the sealing root.
  • Collapse of all members of a future threshold-Gateway quorum.
  • Broken cryptographic primitives.
  • Side channels outside the declared event model.
  • Prompt-injection attacks on the planner itself (Greshake et al., 2023) — PoE confines the damage a compromised planner can do within an issued contract, but does not prevent planner compromise.

How technical evaluation works

Three phases, increasing depth.

Phase 1

Technical overview

30-minute briefing against your model and tool inventory. No NDA required for the initial briefing.

Phase 2

Architecture review

Deep dive into the Prime Execution Model, the five invariants, the Execution Attestation Certificate, and the research paper under NDA. Protocols, auth, policy language, signing.

Phase 3

Technical evaluation

Private documentation, sample environments, and integration planning against your IdP, service mesh, SIEM, and agent framework of record.

Bring us the questions your security review is going to ask.

We will walk through our SOC 2 posture, Gateway threat model, deployment options, and the evidence surface your team needs to sign off — in a private briefing, with the specifics your environment actually requires.