Platform v3.0 Stable

Stryda Documentation

Governance and execution control plane for AI agents. Define policies, enforce at runtime, record every decision to a tamper-evident ledger, and integrate with any MCP-speaking client. This guide covers the full platform — from signup to API reference.

What is Stryda?

Stryda sits between your AI agents and the tools they call. Every request runs through a fixed pipeline — validate → policy → approval → adapter → audit — enforced on the backend, not trusted from the client. Agents get deterministic authorization, operators get a tamper-evident record, and compliance teams get evidence on demand.

Runtime governance

Every agent action is checked against scopes, budgets, and policies before execution. The pipeline — validate → policy → approval → adapter → audit — runs for every tool call. No shortcuts.
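The five stages can be pictured as a chain of checks, each of which must pass before the next runs. The sketch below is illustrative only — the stage names mirror the docs, but the function signatures, the `ToolCall` shape, and the sample policy rule are assumptions, not Stryda internals.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str
    args: dict
    trace: list = field(default_factory=list)  # records which stages ran

def validate(call: ToolCall) -> ToolCall:
    # Reject malformed requests before any policy work happens.
    if not call.tool or "." not in call.tool:
        raise ValueError("tool must be namespaced, e.g. comms.slack.send")
    call.trace.append("validate:ok")
    return call

def policy(call: ToolCall) -> ToolCall:
    # A real policy engine would evaluate scopes, budgets, and rules here;
    # this toy rule only allows the comms.* surface.
    decision = "allow" if call.tool.startswith("comms.") else "deny"
    call.trace.append(f"policy:{decision}")
    if decision == "deny":
        raise PermissionError(f"policy denied {call.tool}")
    return call

def approval(call: ToolCall) -> ToolCall:
    call.trace.append("approval:auto")  # no approval gate configured
    return call

def adapter(call: ToolCall) -> dict:
    call.trace.append("adapter:executed")
    return {"ok": True}  # stand-in for the real Slack/Stripe/... call

def audit(call: ToolCall, result: dict) -> dict:
    call.trace.append("audit:recorded")  # every decision lands in the ledger
    return result

def run_pipeline(call: ToolCall) -> dict:
    return audit(call, adapter(approval(policy(validate(call)))))
```

The key property the sketch captures: the adapter is unreachable except through validate, policy, and approval, which is why enforcement cannot be skipped by a misbehaving client.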

Hash-chained audit

Every decision lands in a tamper-evident ledger. Each entry carries the hash of the previous one, so a deletion or mutation breaks the chain. Export any slice as evidence.
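The chaining idea can be shown in a few lines. This is a minimal sketch — Stryda's actual entry schema and hash function are not specified in this guide, so the SHA-256 choice and field names here are assumptions.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def entry_hash(entry: dict) -> str:
    # Canonical JSON so the same entry always hashes the same way.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, decision: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else GENESIS
    body = {"decision": decision, "prev_hash": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def verify(ledger: list) -> bool:
    # Walk the chain: any deleted or mutated entry breaks a link.
    prev = GENESIS
    for e in ledger:
        body = {"decision": e["decision"], "prev_hash": e["prev_hash"]}
        if e["prev_hash"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True
```

Because each entry commits to the hash of the one before it, an auditor who trusts only the latest hash can detect tampering anywhere earlier in the chain.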

One MCP control plane

A single MCP endpoint at /api/mcp exposes every tool, namespaced by surface (comms.*, payments.*, crm.*, workflow.*). One audit trail, one policy language, one approval queue.

Prerequisites

Before you start, you'll want the following:

Stryda account

Free signup at stryda.ai gets you the dashboard, 100 free runs, and the full governance stack.

AI provider key

One key from Anthropic, OpenAI, or Google. Keys are encrypted at rest and never leave your workspace.

API access

Generate a Stryda API key in Settings → API Keys for programmatic workflow triggers.

Delivery channel

Optional — connect Slack, email, or webhooks for notifications, escalations, and run results.

Quick start

Get your first agent under governance in about five minutes.

  1. Create an account

    Sign up at stryda.ai with email + password or Google SSO. You land on a dashboard with sample workflows, chatbots, and agents pre-loaded so the UI is never empty.

  2. Add an AI provider key

    Open Settings → API Keys, pick Anthropic / OpenAI / Google, paste your key, save. At least one key is required before any AI step can execute.

  3. Create a workflow

    Click "+ New Workflow" or pick a template. The visual editor opens with a Trigger node. Drag AI Action, Condition, Output, or Approval nodes from the toolbar.

  4. Configure AI nodes

    Each AI Action node picks a model (e.g. Claude Sonnet 4.6), a prompt, and optionally temperature + max tokens. Reference upstream data with {{node_id.output}}.

  5. Run a test

    Click "Run" — each node lights up as it executes. The Run History panel shows inputs, outputs, token usage, and the exact cost per step.

  6. Publish and integrate

    Publish the workflow, then open the Integrate tab for API, webhook, cURL, JavaScript, and Python snippets to trigger it from any system.
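Step 4 mentions referencing upstream data with {{node_id.output}}. As a rough mental model, such references could be resolved with a simple substitution pass like the hypothetical sketch below — Stryda's actual template engine is not documented here, so the syntax handling is an assumption.

```python
import re

# Matches references of the form {{node_id.field}}.
REF = re.compile(r"\{\{(\w+)\.(\w+)\}\}")

def resolve(template: str, nodes: dict) -> str:
    """Replace each {{node_id.field}} with the named node's field value."""
    def substitute(match: re.Match) -> str:
        node_id, field = match.group(1), match.group(2)
        return str(nodes[node_id][field])
    return REF.sub(substitute, template)
```

For example, a downstream prompt could pull a classification produced by an earlier AI node and route on it.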

Trigger a workflow run via the REST API

curl
curl -X POST https://stryda.ai/api/v1/workflows/WORKFLOW_ID/run \
  -H "x-api-key: ${STRYDA_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"input_data": {"prompt": "Summarize this customer email"}}'
Node.js
const res = await fetch('https://stryda.ai/api/v1/workflows/WORKFLOW_ID/run', {
  method: 'POST',
  headers: {
    'x-api-key': process.env.STRYDA_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    input_data: { prompt: 'Analyze this support ticket' },
  }),
});

const run = await res.json();
console.log(run.run_id, run.status); // "run_abc123" "running"
Python
import os, requests

res = requests.post(
    "https://stryda.ai/api/v1/workflows/WORKFLOW_ID/run",
    headers={"x-api-key": os.environ["STRYDA_API_KEY"]},
    json={"input_data": {"prompt": "Analyze this support ticket"}},
)
run = res.json()
print(run["run_id"], run["status"])  # run_abc123 running
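In production code you would likely wrap the documented trigger endpoint in a small helper with error handling. The sketch below does that; the status-code check and the injectable `post` parameter (which lets the helper run without a network) are conveniences added here, not documented Stryda behavior.

```python
import os

def trigger_run(workflow_id: str, input_data: dict, post=None) -> dict:
    """POST to the documented run-trigger endpoint and return the parsed JSON."""
    if post is None:
        import requests  # deferred so callers/tests can inject a stub instead
        post = requests.post
    res = post(
        f"https://stryda.ai/api/v1/workflows/{workflow_id}/run",
        headers={
            "x-api-key": os.environ["STRYDA_API_KEY"],
            "Content-Type": "application/json",
        },
        json={"input_data": input_data},
    )
    res.raise_for_status()  # surface 4xx/5xx instead of parsing an error body
    return res.json()
```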

Explore the docs

Key concepts

Workflow: A visual automation built from connected nodes. Triggered manually, via API, webhook, or cron.

Node: A step in a workflow: Trigger, AI Action, AI Decision, Condition, Approval, Output, Webhook, or Delay.

Agent: An autonomous AI that uses tools through the MCP control plane. ReAct-style execution with configurable scopes and limits.

Scope: A bounded set of tools + resource patterns an agent is authorized to use (e.g. comms.slack.send, channels matching #ops-*).

Policy: A rule evaluated before every tool call: allow, deny, or require approval. Policies run inside the pipeline on the backend.

Approval gate: A human-in-the-loop checkpoint. Execution pauses until a reviewer approves, rejects, or the timeout expires.

Budget cap: A per-run or monthly spending limit. Stryda halts execution when the cap is reached and logs the denial.

Ledger entry: One immutable row in the audit trail. Carries request, decision, policy trace, and the previous row's hash.

MCP endpoint: A single server at /api/mcp exposing every tool namespaced by surface. Clients connect once; capabilities are discovered.

Adapter: The final stage in the pipeline, the component that actually talks to Slack, Stripe, HubSpot, etc. Adapters never bypass policy.
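A scope check of the kind described above — a tool plus resource patterns like channels matching #ops-* — can be sketched with glob matching. The `Scope` shape and field names below are illustrative assumptions, not Stryda's schema.

```python
from fnmatch import fnmatch

def in_scope(scope: dict, tool: str, resource: str) -> bool:
    """True if the tool matches the scope's tool pattern and the resource
    matches at least one of its resource patterns."""
    return fnmatch(tool, scope["tool"]) and any(
        fnmatch(resource, pattern) for pattern in scope["resources"]
    )
```

For example, a scope of {"tool": "comms.slack.send", "resources": ["#ops-*"]} would permit posting to #ops-alerts but not to #general.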

Supported AI models

Stryda routes prompts to models across three providers. Pick per step based on quality, speed, and cost — every token is logged to the audit ledger with its USD cost.

Anthropic

  • Claude Opus 4.7
  • Claude Sonnet 4.6
  • Claude Haiku 4.5

OpenAI

  • GPT-4.1
  • GPT-4.1 Mini
  • GPT-4o
  • o3
  • o4-mini

Google

  • Gemini 2.5 Pro
  • Gemini 2.5 Flash
  • Gemini 2.0 Flash

Troubleshooting

Next steps