# Architecture

## High-level
```mermaid
flowchart TD
    UI["Web UI"]
    CLI["CLI"]
    subgraph Server["tyr-server"]
        REST["REST :7701"]
        GRPC["gRPC :7700"]
        Engine["Policy engine"]
        DB[(PostgreSQL 17)]
        REST --- DB
        GRPC --- DB
        Engine --- DB
    end
    UI <--> REST
    CLI <--> REST
    Laptop["tyrd (laptop)<br/>eBPF + LSM"]
    HostSrv["tyrd (server)<br/>eBPF + LSM"]
    VM["tyrd (VM)<br/>eBPF + LSM"]
    Laptop -- mTLS --> GRPC
    HostSrv -- mTLS --> GRPC
    VM -- mTLS --> GRPC
```

## Components
### tyr-server — control plane
- REST API on `:7701` — login, setup, agent management, policy CRUD, event SSE, approvals (roadmap), CA cert download. Also serves the static Svelte web UI.
- gRPC on `:7700` — agent-facing only. mTLS with client certs issued during enrollment. Two-way streams for events + policy push.
- PostgreSQL 17 — policies, agents, events, users, assignments, enrollment tokens, drift-detection results. All state lives here.
- Internal PKI — the server runs a CA; every agent gets a client certificate during enrollment.
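The internal PKI implies the gRPC listener only accepts agents that present a client certificate chaining to the server's own CA. A minimal sketch in Go of that TLS configuration (the function name and wiring are hypothetical, not the project's actual code):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// agentTLSConfig (hypothetical name) builds a TLS config for the
// agent-facing listener: the server presents its own cert, and any
// connecting agent must present a client cert signed by the internal CA.
func agentTLSConfig(caPEM, serverCertPEM, serverKeyPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("invalid CA PEM")
	}
	cert, err := tls.X509KeyPair(serverCertPEM, serverKeyPEM)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		// Reject any connection that cannot prove enrollment.
		ClientAuth: tls.RequireAndVerifyClientCert,
	}, nil
}
```

`RequireAndVerifyClientCert` is what makes this mutual TLS rather than plain server-side TLS: the handshake fails before any gRPC traffic flows if the agent's cert does not verify against the CA pool.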
### tyrd — agent
- eBPF programs attached to:
  - LSM hooks (`file_open`, `bprm_check_security`, `socket_connect`)
  - Tracepoints (`sys_enter_openat`, `sched_process_exec`)
- Policy evaluator — two layers:
  - Hot path (kernel): BPF maps with path prefixes, exact-match execs, IP CIDRs. `EPERM` returned in-kernel for denies.
  - Rich path (userspace): Cedar evaluator for conditions that don't fit in a BPF map (e.g. principal X with context Y).
- TLS uprobe capture (optional, `--tls-capture`) — attaches uprobes to OpenSSL/rustls to extract SNI hostnames from outbound TLS handshakes. Used for LLM-provider tagging.
- Event pipeline — ring buffer → batcher → gRPC stream → server.
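The two-layer split can be illustrated with a userspace sketch of the decision logic. The real hot path lives in BPF maps in the kernel; Go is used here purely for illustration, and `Verdict`/`hotPathVerdict` are invented names, not the project's API:

```go
package main

import "strings"

// Verdict mirrors the possible outcomes of the hot path.
type Verdict int

const (
	Allow Verdict = iota
	Deny            // in-kernel this surfaces to the caller as EPERM
	Escalate        // can't be decided by cheap lookups; defer to userspace Cedar
)

// hotPathVerdict sketches the kernel layer: only cheap lookups
// (path prefixes here; the real maps also hold exact-match execs and
// IP CIDRs). Anything needing richer context escalates to the rich path.
func hotPathVerdict(denyPrefixes []string, path string, needsContext bool) Verdict {
	if needsContext {
		return Escalate
	}
	for _, p := range denyPrefixes {
		if strings.HasPrefix(path, p) {
			return Deny
		}
	}
	return Allow
}
```

The design point is that the deny decision for the common case never leaves the kernel: the process gets `EPERM` from the LSM hook itself, and only the event record (plus any escalated evaluation) crosses into userspace.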
### tyr — CLI and Svelte web UI
Two interfaces over the same REST API. Admins can do everything from either.
## Event flow
```mermaid
flowchart TD
    AI["AI process<br/>open('/etc/.aws/credentials')"]
    subgraph K["Linux kernel"]
        LSM["LSM hook file_open"]
        Map["tyrd BPF map<br/>match prefix /etc/<br/>verdict = deny"]
        LSM --> Map
        Map -- "EPERM" --> AI
        Map -- "emit event" --> RB[["ring buffer"]]
    end
    AI --> LSM
    subgraph US["tyrd userspace"]
        Enrich["• enrich (pid → proc)<br/>• buffer + batch"]
    end
    RB --> Enrich
    Enrich -- "gRPC" --> Srv["tyr-server"]
    Srv --> Persist["persist, drift-check,<br/>SSE broadcast to UI/CLI"]
```

## Drift detection
Because the server also runs the policy engine, it can re-evaluate every event against the canonical policy and compare the result to the agent’s in-kernel verdict.
If they differ (because the agent had a stale policy version, a BPF map update lagged, or the agent was tampered with), the event is flagged with `drift_detected = true` and both the `server_verdict` and `agent_verdict` columns are stored. This is your safety net for detecting compromised agents.
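The comparison itself is simple. A hedged sketch of the server-side check (type and function names are invented; the source only confirms the `drift_detected`, `server_verdict`, and `agent_verdict` columns):

```go
package main

// DriftResult (hypothetical name) carries what the server persists
// alongside the event: the agent's in-kernel verdict, the canonical
// re-evaluation, and whether they disagree.
type DriftResult struct {
	AgentVerdict  string
	ServerVerdict string
	DriftDetected bool
}

// driftCheck re-evaluates the event against the canonical policy
// (injected here as a closure standing in for the policy engine) and
// flags any disagreement with what the agent actually enforced.
func driftCheck(agentVerdict string, evaluate func() string) DriftResult {
	sv := evaluate()
	return DriftResult{
		AgentVerdict:  agentVerdict,
		ServerVerdict: sv,
		// Disagreement means stale policy, a lagging map, or tampering.
		DriftDetected: sv != agentVerdict,
	}
}
```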
## Cross-environment story
The same `tyrd` binary works on:
- Developer laptops — enroll once, leave running.
- Bare-metal / VMs — systemd service, enrolls on install.
- Containers on Linux — privileged sidecar with host PID namespace.
- Kubernetes — roadmap, DaemonSet with CRDs (see ADR-005).
One server, one policy source, many heterogeneous agents.
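For the bare-metal/VM case, the enroll-on-install unit could look roughly like this. The paths, flags, and server address are illustrative only, not the project's actual packaging:

```ini
# Hypothetical unit file sketch for running tyrd as a systemd service.
[Unit]
Description=tyrd agent
After=network-online.target
Wants=network-online.target

[Service]
# Flag and address are assumptions for illustration.
ExecStart=/usr/local/bin/tyrd --server https://tyr.internal:7700
Restart=always

[Install]
WantedBy=multi-user.target
```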
→ Next: Agents · Policies · Enforcement