Governance infrastructure for AI operations

Every action. Evaluated. Recorded. Provable.

Every action is evaluated against the same logic a well-informed engineer would apply; the evaluation and the decision are then signed and provable.

Governance decisions fall into two kinds: those that can be determined logically, and those that require judgment. Atested resolves as many as possible deterministically: is the evidence present, is the scope valid, are the preconditions met? Where judgment is needed, that judgment is recorded too.
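As a minimal sketch of that split, the deterministic checks can be expressed as plain predicates that either allow the action or deny it with the missing condition named. The field names and outcome strings below are illustrative assumptions, not Atested's actual API.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    evidence: list            # evidence items attached to the action
    scope: str                # requested scope, e.g. "repo:docs"
    allowed_scopes: set       # scopes the caller is permitted to use
    preconditions_met: bool   # e.g. tests passed, review attached

def evaluate(req: ActionRequest) -> str:
    # Deterministic checks first: each is mechanically decidable,
    # and a failure names exactly what was absent.
    if not req.evidence:
        return "DENY: evidence missing"
    if req.scope not in req.allowed_scopes:
        return "DENY: scope invalid"
    if not req.preconditions_met:
        return "DENY: preconditions not met"
    # Everything decidable passed; anything beyond this point
    # requires judgment, and that call is recorded as well.
    return "ALLOW"

print(evaluate(ActionRequest(["diff.patch"], "repo:docs", {"repo:docs"}, True)))
```

The design choice the document describes is visible here: denials are never a bare "no" but always carry the specific condition that failed.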

Works with MCP-compatible AI tools including Claude Code, Cursor, Cline, and Windsurf. Self-hosted. One server. No cloud dependency, and no governance data leaves your network.

MCP-compatible: works with existing AI tools that can route actions through governance.
Self-hosted: one governed server your team connects through; governance data stays in your environment.
decision-chain.jsonl ALLOW
{
  "tool": "fs_write",
  "capability_class": "FS_WRITE",
  "policy_decision": "ALLOW",
  "timestamp_utc": "2026-03-30T13:12:00Z",
  "operator_intent": "update README",
  "user_identity": "bearer:e1f2a3b4c5d67890",
  "organization_id": "acme-engineering",
  "license_tier": "team",
  "record_hash": "sha256:0a1b2c3d4e5f...",
  "prev_record_hash": "sha256:f0e1d2c3b4a5...",
  "signature": "ed25519:8f9c2ab1..."
}
Governed actions are evaluated before they proceed. When an action meets the required conditions (valid scope, sufficient evidence, constraints satisfied), Atested allows it and records the decision. When something verifiable is missing, Atested denies the action and records exactly what was absent.
Architecture: MCP server → governance evaluation → signed record → hash chain → attestation artifact.
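The hash-chain step above can be sketched as follows, assuming each record's `record_hash` covers the record body plus `prev_record_hash`. The canonicalization (sorted-key JSON) and the genesis marker are assumptions for illustration, not the documented format, and the ed25519 signature step is omitted here.

```python
import hashlib
import json

def record_hash(body: dict, prev_hash: str) -> str:
    # Assumed canonicalization: sorted-key JSON over body + previous hash.
    payload = json.dumps({"body": body, "prev": prev_hash}, sort_keys=True)
    return "sha256:" + hashlib.sha256(payload.encode()).hexdigest()

def make_record(body: dict, prev_hash: str) -> dict:
    return {**body,
            "prev_record_hash": prev_hash,
            "record_hash": record_hash(body, prev_hash)}

def verify_chain(records: list) -> bool:
    prev = "sha256:genesis"  # assumed genesis marker
    for rec in records:
        body = {k: v for k, v in rec.items()
                if k not in ("record_hash", "prev_record_hash")}
        if rec["prev_record_hash"] != prev:
            return False  # chain link broken
        if rec["record_hash"] != record_hash(body, prev):
            return False  # record body was altered
        prev = rec["record_hash"]
    return True

r1 = make_record({"tool": "fs_write", "policy_decision": "ALLOW"}, "sha256:genesis")
r2 = make_record({"tool": "fs_read", "policy_decision": "ALLOW"}, r1["record_hash"])
print(verify_chain([r1, r2]))   # True
r1["policy_decision"] = "DENY"  # tampering with any field...
print(verify_chain([r1, r2]))   # ...breaks verification: False
```

This is why the records are described as immutable: editing any decision after the fact invalidates its hash and every link after it.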
What it does

Govern AI operations before they become reconstruction work

Atested evaluates governed actions before they proceed and records the outcome in signed, immutable records. That changes AI operations from black-box activity into checkable events with durable evidence.

governance-summary.txt LIVE
ALLOW: action met verifiable conditions
DENY: required scope, evidence, or constraints missing
RECORD: signed decision written to immutable chain
PROOF: attestation artifacts available for later verification
The system is designed to stop unsupported actions before they land and to preserve verifiable evidence when later review is required.
Governance transparency

Know how much of your AI activity is actually under governance

Atested governs every action that flows through it. It cannot force every action to flow through it, because AI tools also have native capabilities outside any governance layer. The transparency metric makes that boundary visible and measurable.

transparency-summary.json 72% governed
{
  "governed_operations": 1842,
  "observed_native_operations": 716,
  "transparency_ratio": "72%",
  "observation_mode": "hook-reported",
  "manager_view": "governed vs observed"
}
Governed operations produce full signed records. Ungoverned native operations can still be counted when your AI tools report them through observation hooks.
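The ratio in `transparency-summary.json` above follows directly from the two counts: governed operations divided by everything the hooks could see. The function below mirrors those field names; the percentage rounding is an assumption.

```python
def transparency_ratio(governed_operations: int,
                       observed_native_operations: int) -> str:
    # Governed share of all operations visible to the governance layer.
    total = governed_operations + observed_native_operations
    return f"{round(100 * governed_operations / total)}%"

print(transparency_ratio(1842, 716))  # "72%", matching the summary above
```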
Pricing

Simple, self-demonstrating pricing

Every installation starts with a 30-day full-function trial. Atested then recommends the appropriate tier based on actual observed usage.

Personal

1 user
Free

Personal and evaluation use.

Business

11–100 users
$4,999/year

Operational deployments needing stronger governance posture.

Enterprise

100+ users
Contact us

Custom terms and deployment support.