Prefer governed tools in project instructions
Configure CLAUDE.md or the equivalent project instruction surface so agents prefer governed tools for sensitive operations.
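One way to express that preference is a short CLAUDE.md fragment. This is a sketch, not Atested's official wording; the governed tool names (`atested_commit`, `atested_edit`) are hypothetical placeholders for whatever tools your Atested deployment actually exposes:

```markdown
## Tool policy (sketch — adapt names to your Atested deployment)

- For commits, production-adjacent edits, and deployment changes, prefer the
  governed Atested MCP tools (e.g. `atested_commit`, `atested_edit`) over
  native file or shell operations.
- If a governed tool is unavailable for a sensitive action, pause and ask
  before falling back to a native capability.
```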
Atested is self-hosted. You deploy one server on infrastructure you control, connect your team's compatible AI tools to it, and route governed actions through its MCP surface for policy evaluation, signed record generation, and later verification. Governance records, transparency metrics, and attestation artifacts stay inside your own environment.
The model is operationally simple: one governed server, multiple client tools, one audit trail.
Point compatible tools at the Atested MCP server, configure project instructions so governed tools are preferred on sensitive work, and connect observation hooks so the transparency metric reflects both governed and observed activity.
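As a sketch of the first step, a project-level MCP client configuration (here in the `.mcp.json` shape used by Claude Code) might point at a self-hosted Atested server like this. The server name, transport, and URL are assumptions; use the values from your own deployment:

```json
{
  "mcpServers": {
    "atested": {
      "type": "http",
      "url": "https://atested.internal.example/mcp"
    }
  }
}
```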
Atested works with MCP-compatible AI tools that can route actions through a custom MCP server, including Claude Code, Cursor, Cline, Windsurf, and other tools that expose the same connection model.
Compatibility matters because Atested governs actions that flow through its MCP surface. If a tool cannot send actions through that surface, Atested cannot govern those actions.
ChatGPT does not support custom MCP servers, so it cannot route actions through Atested.
Claude.ai chat's custom MCP connector path has known issues, so it is not a reliable Atested target today.
Any tool without MCP support cannot route governed actions through Atested.
Observation hooks let Atested count native activity that remains outside governance, so the transparency metric reflects reality instead of only governed flow.
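A minimal sketch of such a hook in Python, assuming a hypothetical HTTP `/observe` endpoint on the Atested server; the endpoint URL and payload fields are illustrative, not Atested's actual schema:

```python
import json
import urllib.request

ATESTED_OBSERVE_URL = "https://atested.internal.example/observe"  # assumed endpoint

def build_observation(tool: str, action: str) -> dict:
    """Payload describing one native operation that bypassed governance."""
    return {"tool": tool, "action": action, "governed": False}

def report_observation(payload: dict, url: str = ATESTED_OBSERVE_URL) -> None:
    """POST the observation so it counts toward the transparency metric."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget; a real hook would add retries

# Example: a client-side hook saw a native shell commit outside governance.
observation = build_observation("claude-code", "git commit")
```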
Establish organizational rules that commits, production-adjacent edits, deployment changes, or other sensitive actions should go through governed paths.
Treat transparency as an operating signal. It shows how much activity is governed versus merely observed.
As new tool integrations become available, bring them under governance instead of assuming coverage will expand automatically.
Coverage improves when the team understands which actions flow through governance and which still occur natively in the client tool.
Atested governs every action that flows through it. It cannot force all actions to flow through it. AI tools have native capabilities that operate outside governance. This is a structural reality in open tool environments, not a defect specific to Atested.
The transparency metric makes that boundary visible and measurable so organizations can improve governance coverage with facts instead of assumptions.
```json
{
  "governed_operations": 1842,
  "observed_native_operations": 716,
  "transparency_ratio": "72%",
  "observation_mode": "hook-reported",
  "operator_goal": "increase governed coverage"
}
```
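The ratio in the report above is simply governed operations over total known operations. A one-function sketch of that arithmetic:

```python
def transparency_ratio(governed: int, observed_native: int) -> int:
    """Percent of known activity that flowed through governance."""
    total = governed + observed_native
    return round(100 * governed / total) if total else 0

# Matches the example report: 1842 / (1842 + 716) rounds to 72.
ratio = transparency_ratio(1842, 716)
```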
For environments where evidentiary enforcement is not sufficient on its own, Atested can be deployed with stronger structural controls such as credential-gated resource access and exclusive capability surfaces.
That is a custom deployment architecture, not the default product path.