Atested governs AI operations before actions proceed. It blocks actions that lack policy support before they land, and when something does go wrong it gives you definitive, auditable facts instead of reconstruction work.
Works with MCP-compatible AI tools including Claude Code, Cursor, Cline, and Windsurf. Self-hosted. One server. No cloud dependency, and no governance data leaves your network.
{
  "tool": "fs_write",
  "capability_class": "FS_WRITE",
  "policy_decision": "ALLOW",
  "timestamp_utc": "2026-03-30T13:12:00Z",
  "operator_intent": "update README",
  "user_identity": "bearer:e1f2a3b4c5d67890",
  "organization_id": "acme-engineering",
  "license_tier": "team",
  "record_hash": "sha256:0a1b2c3d4e5f...",
  "prev_record_hash": "sha256:f0e1d2c3b4a5...",
  "signature": "ed25519:8f9c2ab1..."
}
Atested evaluates governed actions before they proceed and records the outcome in signed, immutable records. That changes AI operations from black-box activity into checkable events with durable evidence.
ALLOW: action satisfied active policy
DENY: required proof, scope, or constraints missing
RECORD: signed decision written to immutable chain
PROOF: attestation artifacts available for later verification
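As a sketch of why the chain is tamper-evident: each record's `record_hash` covers its own body, and each record's `prev_record_hash` points at the record before it, so editing any past record breaks every link after it. The canonicalization below (sorted-key JSON, excluding `record_hash` and `signature`) is an illustrative assumption, not Atested's documented scheme, and signature checking is omitted.

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Hash the record body, excluding fields derived from the hash itself.
    Canonicalization here is an assumption for illustration."""
    body = {k: v for k, v in record.items()
            if k not in ("record_hash", "signature")}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

def verify_chain(records: list) -> bool:
    """A chain is valid when every record hashes to its own record_hash
    and points at the previous record's hash."""
    prev = None
    for rec in records:
        if rec["record_hash"] != record_digest(rec):
            return False  # record body was altered after signing
        if prev is not None and rec.get("prev_record_hash") != prev:
            return False  # linkage broken: a record was removed or reordered
        prev = rec["record_hash"]
    return True
```

Changing a single field in any record invalidates its digest, which in turn invalidates every later record's `prev_record_hash` link.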
Atested governs every action that flows through it. It cannot force every action to flow through it, because AI tools also have native capabilities outside any governance layer. The transparency metric makes that boundary visible and measurable.
{
  "governed_operations": 1842,
  "observed_native_operations": 716,
  "transparency_ratio": "72%",
  "observation_mode": "hook-reported",
  "manager_view": "governed vs observed"
}
Every installation starts with a 30-day full-function trial, after which Atested recommends the appropriate tier based on your observed usage.
Personal and evaluation use.
Small teams shipping with AI agents.
Operational deployments needing stronger governance posture.
Custom terms and deployment support.