Built for teams that can't afford
to get AI governance wrong.
Every agent action audited. Every policy enforced. Every deployment isolated.
Compliance & Audit
Authorization in Symbiont is not advisory — it is structural. Every agent action passes through Cedar, a formal authorization language developed by AWS and adopted for its mathematical verifiability. Policies define exactly what each agent can do, which tools it can invoke, and under what conditions. There is no fallback path that bypasses policy evaluation.
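As a rough illustration of what such a policy looks like, here is a minimal Cedar sketch. The entity types and identifiers (`Agent`, `Action::"InvokeTool"`, `Tool::"pdf_extract"`) are hypothetical; Symbiont's actual entity model may differ. Cedar is default-deny, so anything not matched by a permit is refused.

```cedar
// Hypothetical entities for illustration only.
permit (
    principal == Agent::"invoice-processor",
    action == Action::"InvokeTool",
    resource == Tool::"pdf_extract"
)
when { context.environment == "production" };
```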
Every policy decision generates a cryptographic audit entry — signed, timestamped, and tamper-evident. These entries are created before execution, not after. If an action was permitted, the audit trail proves why. If it was denied, the trail proves that too. This is the difference between logging and accountability.
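To make the shape of such an entry concrete, here is a minimal Python sketch of a signed, timestamped, hash-chained audit record. The field names, the HMAC signing scheme, and the in-code key are illustrative assumptions, not Symbiont's actual format; a real deployment would use managed keys and likely public-key signatures.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"audit-signing-key"  # illustrative only; real systems use managed keys


def audit_entry(agent: str, action: str, decision: str, prev_hash: str) -> dict:
    """Create a signed, timestamped audit entry *before* the action executes."""
    body = {
        "agent": agent,
        "action": action,
        "decision": decision,      # "permit" or "deny"
        "timestamp": time.time(),
        "prev_hash": prev_hash,    # chaining makes deletion or reordering evident
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    body["entry_hash"] = hashlib.sha256(canonical).hexdigest()
    return body


def verify(entry: dict) -> bool:
    """Recompute the signature over the entry body; any tampering breaks it."""
    body = {k: v for k, v in entry.items() if k not in ("signature", "entry_hash")}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])
```

Because each entry carries the hash of its predecessor, removing or editing one record invalidates every record after it, which is what makes the trail tamper-evident rather than merely tamper-resistant.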
The result is an audit trail designed for SOC 2, HIPAA, and similar compliance frameworks. Your auditors get a verifiable chain of decisions. Your security team gets formal policy enforcement they can reason about. Your legal team gets evidence that governance was applied consistently, not selectively.
Observability
You cannot govern what you cannot see. Symbiont exports telemetry through OpenTelemetry — the industry standard — which means it works with whatever monitoring stack you already run. Grafana, Datadog, Splunk, Honeycomb. No proprietary dashboards, no vendor-specific agents, no additional infrastructure to maintain.
Telemetry covers three layers. Runtime health metrics track the system itself — uptime, request latency, error rates. Agent execution metrics track what your agents are doing — tool invocations, reasoning steps, policy evaluations per action. Token usage tracking gives you cost visibility across every agent and every model, broken down by task.
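The third layer, cost visibility, amounts to rolling raw token-usage events up by agent, model, and task. A minimal Python sketch, with made-up event fields and illustrative per-1K-token prices (not Symbiont's actual telemetry schema):

```python
from collections import defaultdict

# Illustrative prices per 1K tokens; real prices vary by provider and model.
PRICE_PER_1K = {"gpt-4o": 0.005, "claude-sonnet": 0.003}


def summarize(events: list[dict]) -> dict:
    """Aggregate token-usage events into totals per (agent, model, task)."""
    totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
    for e in events:
        key = (e["agent"], e["model"], e["task"])
        totals[key]["tokens"] += e["tokens"]
        totals[key]["cost"] += e["tokens"] / 1000 * PRICE_PER_1K[e["model"]]
    return dict(totals)
```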
Both file-based and OTLP exporters are supported out of the box. File exporters give you local, grep-friendly logs for development. OTLP exporters push structured telemetry to your production collectors. Enable both simultaneously if you want belt-and-suspenders visibility.
Integration
Agents that cannot reach your teams are agents that cannot help. Symbiont ships native channel adapters for Slack, Microsoft Teams, and Mattermost — not as third-party plugins, but as first-class runtime components. Your agents communicate through the tools your people already use, with the same policy enforcement applied to every message.
For system-to-system communication, Symbiont provides webhook ingestion with provider presets for GitHub, Stripe, and custom sources. Webhooks are validated, parsed, and routed to agents as structured events. The HTTP API gives programmatic control over every runtime operation — start agents, evaluate policies, query audit logs, manage configurations.
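Webhook validation for the GitHub preset, for example, means checking the `X-Hub-Signature-256` header (an HMAC-SHA256 of the raw payload) before any event is routed to an agent. A self-contained Python sketch of that check, independent of Symbiont's internals:

```python
import hashlib
import hmac


def verify_github_signature(secret: bytes, payload: bytes, header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header before routing the event."""
    if not header.startswith("sha256="):
        return False
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing
    return hmac.compare_digest("sha256=" + expected, header)
```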
Official SDKs in JavaScript and Python let your engineering team interact with the runtime from existing applications. Both SDKs are typed, documented, and follow the conventions of their respective ecosystems. If your team builds in JS or Python, integration is a dependency install, not an architecture change.
Deployment
Symbiont provides three sandboxing tiers, and you choose the isolation level per agent. Docker is the standard tier — familiar, fast, suitable for most workloads. gVisor adds a hardened user-space kernel that intercepts system calls, for agents that handle sensitive data. Firecracker runs each agent in a dedicated microVM with its own kernel, for maximum isolation when the risk model demands it.
Development starts on your laptop with symbi up. The same configuration that runs locally deploys to staging and production without modification. No rewriting agent definitions, no translating between environments, no surprises at deploy time. The development experience and the production runtime are the same system.
There is no vendor lock-in. Symbiont runs anywhere Docker runs — your cloud, your data center, your air-gapped network. The runtime is a single binary with no external dependencies beyond a container engine. You own your infrastructure, your data, and your agents.
The Trust Stack
Trust in AI agents is not a single feature — it is a chain. A chain where every link must hold. We built three open-source projects that form a complete chain of trust, from the tools your agents use, through the agents themselves, to the runtime that governs them.
Each project is independent and useful on its own. Together, they provide something no single project can: end-to-end verifiable trust for autonomous AI systems. Every tool verified. Every agent authenticated. Every action governed.
SchemaPin
Verify the tools your agents use haven't been tampered with. Cryptographic signatures on tool schemas, pinned to DNS. If a tool definition changes without a valid signature, the agent refuses to use it.
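The underlying idea can be sketched in a few lines of Python. This simplified stand-in pins a SHA-256 digest of the canonical schema rather than a DNS-pinned cryptographic signature as SchemaPin actually does, but the refusal behavior is the same: any change to the tool definition fails the check.

```python
import hashlib
import json


def canonical_digest(schema: dict) -> str:
    """Hash a canonical (key-sorted) JSON form so formatting changes are ignored."""
    canonical = json.dumps(schema, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def check_tool(schema: dict, pinned_digest: str) -> bool:
    """Refuse the tool if its schema no longer matches what was pinned."""
    return canonical_digest(schema) == pinned_digest
```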
schemapin.org
AgentPin

Give every agent a cryptographic identity anchored to its domain. ES256 key pairs, domain-bound certificates, verifiable provenance. You always know which agent took an action and who operates it.
agentpin.org
ToolClad
Declarative tool interface contracts. Every tool has a typed manifest that defines what parameters it accepts, how it's invoked, and what it returns. The LLM fills parameters — the executor validates and constructs commands. Shell injection is structurally impossible.
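A minimal Python sketch of that split. The manifest format here is hypothetical (ToolClad's real format may differ), but it shows the mechanism: the model supplies only parameter values, the executor validates them against the manifest and appends them as discrete argv entries, and no shell ever parses the strings, so metacharacters like `; rm -rf /` are inert data.

```python
import subprocess

# Hypothetical manifest for a grep-based tool; illustrative only.
MANIFEST = {
    "command": ["grep", "-n"],
    "parameters": {
        "pattern": {"type": str, "required": True},
        "path": {"type": str, "required": True},
    },
}


def build_argv(manifest: dict, params: dict) -> list[str]:
    """Validate model-supplied params against the manifest and build an argv list."""
    argv = list(manifest["command"])
    for name, spec in manifest["parameters"].items():
        if name not in params:
            if spec.get("required"):
                raise ValueError(f"missing required parameter: {name}")
            continue
        value = params[name]
        if not isinstance(value, spec["type"]):
            raise TypeError(f"{name} must be {spec['type'].__name__}")
        argv.append(value)
    return argv


def run_tool(manifest: dict, params: dict):
    # shell=False (the default): arguments are passed directly to the executable
    return subprocess.run(build_argv(manifest, params), capture_output=True)
```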
toolclad.org
Symbiont
Enforce policies, sandbox execution, audit everything. The runtime that ties verified tools and authenticated agents into a governed system. Cedar policies, multi-tier sandboxing, cryptographic audit trails.
docs.symbiont.dev