How-To Guide
Recipes for common AgentBreeder workflows. Each section is self-contained — jump to what you need.
New here?
Start with the Quickstart → to install AgentBreeder and deploy your first agent in under 5 minutes. Come back here when you need a specific recipe.
Install AgentBreeder
pip install agentbreeder       # full CLI + API server + engine
pip install agentbreeder-sdk   # lightweight SDK only

brew tap agentbreeder/agentbreeder
brew install agentbreeder

# Full platform — no repo clone needed
curl -O https://raw.githubusercontent.com/agentbreeder/agentbreeder/main/deploy/docker-compose.standalone.yml
docker compose -f docker-compose.standalone.yml up -d
# Dashboard: http://localhost:3001  API: http://localhost:8000

# CLI image only (for CI/CD)
docker pull rajits/agentbreeder-cli
docker run rajits/agentbreeder-cli --help

npm install @agentbreeder/sdk

import { Agent } from "@agentbreeder/sdk";
const agent = new Agent("my-agent", { version: "1.0.0", team: "eng" });

git clone https://github.com/agentbreeder/agentbreeder.git
cd agentbreeder
python -m venv venv && source venv/bin/activate
pip install -e ".[dev]"

Prerequisites — Python 3.11+, Docker 24+, Node.js 18+ (for MCP servers). Full table: Quickstart → Prerequisites.
Use the Agent Architect (/agent-build)
/agent-build is a Claude Code skill that acts as an AI Agent Architect. Run it inside Claude Code at the root of any directory where you want to scaffold a new agent project.
It supports two paths:
- Fast Path — you know your stack. Six quick questions, then scaffold.
- Advisory Path — you describe your use case. It recommends the best framework, model, RAG, memory, MCP/A2A, deployment, and eval setup — with reasoning — before scaffolding begins.
Fast Path
$ /agent-build
Do you already know your stack, or would you like me to recommend?
(a) I know my stack — I'll ask 6 quick questions and scaffold your project
(b) Recommend for me — ...
> a
What should we call this agent?
> support-agent
What will this agent do?
> Handle tier-1 customer support tickets
Which framework?
1. LangGraph 2. CrewAI 3. Claude SDK 4. OpenAI Agents 5. Google ADK 6. Custom
> 1
Where will it run?
1. Local 2. AWS 3. GCP 4. Azure 5. Kubernetes
> 2
What tools should this agent have?
> zendesk lookup, knowledge base search
Team name and owner email? [engineering / you@company.com]
> (enter)
┌─────────────────────────────────────┐
│ Framework LangGraph │
│ Cloud AWS (ECS Fargate) │
│ Model gpt-4o │
│ Tools zendesk, kb-search │
│ Team engineering │
└─────────────────────────────────────┘
Look good? I'll generate your project. > yes
✓ 10 files generated in support-agent/

Advisory Path
$ /agent-build
> b
What problem does this agent solve, and for whom?
> Reduce tier-1 support tickets for our SaaS by deflecting common questions
What does the agent need to do, step by step?
> User sends ticket → search knowledge base → look up order status →
respond if found, escalate to human if not
Does your agent need: (a) loops/retries (b) checkpoints (c) human-in-the-loop
(d) parallel branches (e) none
> a, c
Primary cloud provider? (a) AWS (b) GCP (c) Azure (d) Local
> a
Language preference? (a) Python (b) TypeScript (c) No preference
> a
What data does this agent work with?
(a) Unstructured docs (b) Structured DB (c) Knowledge graph
(d) Live APIs (e) None
> a, d
Traffic pattern?
(a) Real-time interactive (b) Async batch
(c) Event-driven (d) Internal/low-volume
> a
── Recommendations ───────────────────────────────
Framework LangGraph — Full Code
Model claude-sonnet-4-6
RAG Vector (pgvector)
Memory Short-term (Redis)
MCP MCP servers
Deploy ECS Fargate
Evals deflection-rate, CSAT, escalation-rate
Override anything, or proceed? > proceed
✓ 19 files generated in support-agent/

What gets generated
| File / Directory | Purpose | Generated in |
|---|---|---|
| agent.yaml | AgentBreeder config — framework, model, deploy, tools, guardrails | Both paths |
| agent.py | Framework entrypoint | Both paths |
| tools/ | Tool stub files, one per tool named in the interview | Both paths |
| requirements.txt | Framework + provider dependencies | Both paths |
| .env.example | Required API keys and env vars | Both paths |
| Dockerfile | Multi-stage container image | Both paths |
| deploy/ | docker-compose.yml or cloud deploy config | Both paths |
| criteria.md | Eval criteria | Both paths |
| README.md | Project overview + quick-start | Both paths |
| memory/ | Redis / PostgreSQL setup | Advisory (if recommended) |
| rag/ | Vector or Graph RAG index + ingestion scripts | Advisory (if recommended) |
| mcp/servers.yaml | MCP server references | Advisory (if recommended) |
| tests/evals/ | Eval harness + use-case criteria | Advisory |
| ARCHITECT_NOTES.md | Reasoning behind every recommendation | Advisory |
| CLAUDE.md | Agent-specific Claude Code context | Advisory |
| AGENTS.md | AI skill roster for iterating on this agent | Advisory |
| .cursorrules | Framework-specific Cursor IDE rules | Advisory |
| .antigravity.md | Hard constraints for this agent | Advisory |
After scaffolding
cd support-agent/
agentbreeder validate
agentbreeder deploy --target local
agentbreeder chat

Deploy to Different Targets
Local (Docker Compose)
agentbreeder deploy agent.yaml --target local

No cloud credentials needed. Starts a Docker Compose stack on your machine.
GCP Cloud Run
gcloud auth login
gcloud config set project my-project
agentbreeder deploy agent.yaml --target cloud-run --region us-central1

deploy:
  cloud: gcp
  region: us-central1
  scaling:
    min: 0   # scale to zero when idle
    max: 10
  secrets:
    - OPENAI_API_KEY   # must exist in GCP Secret Manager

AWS ECS Fargate
export AWS_ACCOUNT_ID=123456789012
export AWS_REGION=us-east-1
agentbreeder deploy agent.yaml --target ecs-fargate --region us-east-1

deploy:
  cloud: aws
  runtime: ecs-fargate
  region: us-east-1
  env_vars:
    AWS_ACCOUNT_ID: "123456789012"
    AWS_REGION: us-east-1
  secrets:
    - OPENAI_API_KEY   # must exist in AWS Secrets Manager

AWS App Runner (serverless — no VPC or ALB needed)
agentbreeder deploy agent.yaml --target app-runner --region us-east-1

Azure Container Apps
agentbreeder deploy agent.yaml --target container-apps

Kubernetes (EKS / GKE / AKS / self-hosted)
# ensure kubectl context points at the target cluster
agentbreeder deploy agent.yaml --target kubernetes

Anthropic Claude Managed Agents
agentbreeder deploy agent.yaml --target claude-managed

deploy:
  cloud: claude-managed
  secrets:
    - ANTHROPIC_API_KEY

claude_managed:
  environment:
    networking: unrestricted
  tools:
    - type: agent_toolset_20260401

Use Different Frameworks
LangGraph
framework: langgraph
model:
  primary: gpt-4o

# agent.py
from langgraph.graph import StateGraph, START, END
graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()

OpenAI Agents
framework: openai_agents
model:
  primary: gpt-4o

# agent.py
from agents import Agent, Runner
agent = Agent(name="support-agent", instructions="You are a helpful assistant.")
result = Runner.run_sync(agent, "Hello!")

Claude SDK
framework: claude_sdk
model:
  primary: claude-sonnet-4-6

# agent.py
import anthropic
client = anthropic.AsyncAnthropic()
agent = client  # AgentBreeder discovers it automatically

Adaptive thinking
claude_sdk:
  thinking:
    type: adaptive   # activates thinking when beneficial
    effort: high     # "low" | "medium" | "high"

Prompt caching

claude_sdk:
  prompt_caching: true   # cache system prompts ≥8192 chars (Sonnet)

Google ADK
framework: google_adk
model:
  primary: gemini-2.0-flash

# agent.py
from google.adk.agents import LlmAgent
root_agent = LlmAgent(
    name="my-agent",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant.",
)

Export as root_agent, agent, or app.
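The loader presumably just probes your module for one of those three names. A stdlib-only sketch of that discovery convention (illustrative, not AgentBreeder source; only the three names come from the doc):

```python
import types

def discover_agent(module):
    """Return the first export named root_agent, agent, or app.

    Illustrative sketch of the convention described above; the real
    AgentBreeder loader is not shown in this guide.
    """
    for name in ("root_agent", "agent", "app"):
        obj = getattr(module, name, None)
        if obj is not None:
            return name, obj
    raise LookupError("no agent export found")

# usage: a fake module exporting `agent`
mod = types.ModuleType("my_agent")
mod.agent = object()
name, obj = discover_agent(mod)   # name == "agent"
```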
Vertex AI session + memory backends
google_adk:
  session_backend: vertex_ai
  memory_service: vertex_ai_bank

PostgreSQL session storage (non-GCP)

google_adk:
  session_backend: database
  session_db_url: ""   # falls back to DATABASE_URL env var

CrewAI
framework: crewai
model:
  primary: claude-sonnet-4-6

# crew.py
from crewai import Agent, Crew, Task
researcher = Agent(role="Researcher", goal="Research the topic", backstory="...")
writer = Agent(role="Writer", goal="Write the report", backstory="...")
crew = Crew(
    agents=[researcher, writer],
    tasks=[Task(description="Research {topic}", agent=researcher),
           Task(description="Write a report on {topic}", agent=writer)],
)

AGENT_MODEL and AGENT_TEMPERATURE are auto-injected from the top-level model: block.
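Inside your crew code you can read those injected values with plain os.getenv. The variable names are from the doc; the fallback defaults here are only illustrative for local runs:

```python
import os

# AGENT_MODEL / AGENT_TEMPERATURE are injected by AgentBreeder at deploy time.
# The second arguments are illustrative local-run defaults, not AgentBreeder behavior.
model = os.getenv("AGENT_MODEL", "claude-sonnet-4-6")
temperature = float(os.getenv("AGENT_TEMPERATURE", "0.7"))
```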
Hierarchical process
from crewai import Process
crew = Crew(
    agents=[analyst],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

Custom (bring your own)
framework: custom
model:
  primary: any-model

# agent.py
def run(user_message: str) -> str:
    return "response"

Stream Agent Responses
All deployed agents expose a /stream endpoint returning Server-Sent Events.
curl
curl -N -X POST https://<agent-endpoint>/stream \
-H "Content-Type: application/json" \
-d '{"input": "Write a report on renewable energy"}'

JavaScript
const response = await fetch("https://<agent-endpoint>/stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "Write a report on renewable energy" }),
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
outer: while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  for (const line of decoder.decode(value, { stream: true }).split("\n")) {
    if (line.startsWith("data: ")) {
      const data = line.slice(6);
      if (data === "[DONE]") break outer; // a plain break would only exit the for loop
      const event = JSON.parse(data);
      if (event.text) process.stdout.write(event.text);
    }
  }
}

Python
import httpx, json

with httpx.stream("POST", "https://<agent-endpoint>/stream",
                  json={"input": "Write a report"}) as r:
    for line in r.iter_lines():
        if line.startswith("data: "):
            data = line[6:]
            if data == "[DONE]":
                break
            event = json.loads(data)
            if "text" in event:
                print(event["text"], end="", flush=True)

SSE event format by framework
| Framework | Event type | Payload |
|---|---|---|
| Claude SDK | data: | {"text": "..."} — one event per chunk |
| CrewAI (step) | event: step | {"description": "...", "result": "..."} |
| CrewAI (final) | event: result | {"output": "..."} |
| Google ADK | data: | {"text": "...", "is_final": false} |
| All | data: | [DONE] — end of stream |
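A small parser for the event shapes in the table above. This is an illustrative stdlib-only sketch, not part of any AgentBreeder SDK, and real clients should also tolerate event types not listed here:

```python
import json

def parse_sse_event(event_type, data):
    """Normalize one SSE event from the table above into (kind, payload).

    Shapes follow the framework table; everything else is an assumption.
    """
    if data == "[DONE]":                      # all frameworks: end of stream
        return ("done", None)
    payload = json.loads(data)
    if event_type == "step":                  # CrewAI intermediate step
        return ("step", payload["description"])
    if event_type == "result":                # CrewAI final output
        return ("result", payload["output"])
    return ("text", payload.get("text", ""))  # Claude SDK / Google ADK chunks

parse_sse_event("message", '{"text": "Hi"}')   # → ("text", "Hi")
```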
Use Local Models with Ollama
brew install ollama
ollama serve &
ollama pull gemma4   # or: llama3, mistral, phi4

# agent.yaml
model:
  primary: ollama/gemma4
  gateway: ollama

agentbreeder deploy agent.yaml --target local

No data leaves your machine.
Configure LLM Providers
agentbreeder provider add openai --api-key sk-...
agentbreeder provider add anthropic --api-key sk-ant-...
agentbreeder provider add google --credentials-file sa.json
agentbreeder provider add ollama --base-url http://localhost:11434
agentbreeder provider list

Fallback chains
model:
  primary: claude-sonnet-4
  fallback: gpt-4o
  gateway: litellm   # 100+ models via LiteLLM

Manage Secrets
Four backends — agents reference secrets by name regardless of backend.
# .env
OPENAI_API_KEY=sk-...

deploy:
  secrets:
    - OPENAI_API_KEY

agentbreeder secret set OPENAI_API_KEY --backend aws --value sk-...
agentbreeder secret list --backend aws

agentbreeder secret set OPENAI_API_KEY --backend gcp --value sk-...

agentbreeder secret set OPENAI_API_KEY --backend vault --value sk-...

Orchestrate Multiple Agents
Strategy overview
| Strategy | Use case |
|---|---|
| router | Classify request, route to the right agent |
| sequential | Agents run in order, passing state |
| parallel | All agents run simultaneously |
| hierarchical | Manager delegates to workers |
| supervisor | Supervisor reviews and corrects |
| fan_out_fan_in | Fan out to workers, aggregate results |
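As a mental model, the router strategy is just classify-then-dispatch. A stdlib-only sketch (the classifier and handlers are made up for illustration; in AgentBreeder the routing table comes from orchestration.yaml):

```python
# Illustrative router dispatch, not AgentBreeder internals.
def route(classify, handlers, request, default="general"):
    """Classify the request, then hand it to the matching agent handler."""
    label = classify(request)
    handler = handlers.get(label, handlers[default])
    return handler(request)

# usage with toy stand-ins for agents
handlers = {
    "billing": lambda r: f"billing: {r}",
    "general": lambda r: f"general: {r}",
}
classify = lambda r: "billing" if "invoice" in r else "unknown"
route(classify, handlers, "invoice question")   # → "billing: invoice question"
route(classify, handlers, "hi there")           # → "general: hi there"
```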
Router pipeline (orchestration.yaml)
name: support-pipeline
version: "1.0.0"
team: customer-success
strategy: router
agents:
  triage:
    ref: agents/triage-agent
    routes:
      - condition: billing
        target: billing
      - condition: technical
        target: technical
      - condition: default
        target: general
  billing:
    ref: agents/billing-agent
  technical:
    ref: agents/technical-agent
  general:
    ref: agents/general-agent
shared_state:
  type: session_context
  backend: redis
deploy:
  target: local

agentbreeder orchestration validate orchestration.yaml
agentbreeder orchestration deploy orchestration.yaml
agentbreeder orchestration chat support-pipeline

Programmatic (Full Code SDK)
from agenthub import Orchestration
pipeline = (
    Orchestration("support-pipeline", strategy="router", team="eng")
    .add_agent("triage", ref="agents/triage-agent")
    .add_agent("billing", ref="agents/billing-agent")
    .add_agent("general", ref="agents/general-agent")
    .with_route("triage", condition="billing", target="billing")
    .with_route("triage", condition="default", target="general")
    .with_shared_state(state_type="session_context", backend="redis")
)
pipeline.deploy()

Use the Python SDK
See Full Code → for the complete SDK reference. Quick example:
from agenthub import Agent
agent = (
    Agent("support-agent", version="1.0.0", team="engineering")
    .with_model(primary="claude-sonnet-4", fallback="gpt-4o")
    .with_tools(["tools/zendesk-mcp", "tools/order-lookup"])
    .with_prompts(system="prompts/support-system-v3")
    .with_deploy(cloud="gcp", min_scale=1, max_scale=10)
)
agent.to_yaml("agent.yaml")   # export
agent.deploy()                # or deploy directly

Migrate from Another Framework
Wrap existing agent code in agent.yaml without rewriting it.
| From | Set framework: | Guide |
|---|---|---|
| LangGraph | langgraph | FROM_LANGGRAPH.md |
| OpenAI Agents | openai_agents | FROM_OPENAI_AGENTS.md |
| CrewAI | crewai | FROM_CREWAI.md |
| AutoGen | custom | FROM_AUTOGEN.md |
| Custom code | custom | FROM_CUSTOM.md |
Eject from YAML to Full Code
agentbreeder eject my-agent --to code # YAML → Python/TypeScript SDK
agentbreeder eject my-agent --to yaml   # No Code dashboard export → YAML

Your original agent.yaml is preserved. Tier mobility: No Code → Low Code → Full Code — no lock-in.
Use MCP Servers
# agent.yaml
tools:
  - ref: mcp-servers/zendesk
  - ref: mcp-servers/slack

agentbreeder scan         # auto-discover MCP servers
agentbreeder list tools   # list registered servers

See MCP Servers → for the full lifecycle (build, register, test, deploy).
Manage Teams and RBAC
team: customer-success
owner: alice@company.com
access:
  visibility: team   # public | team | private
  allowed_callers:
    - team:engineering
    - team:customer-success
  require_approval: false   # true = deploys need admin approval

At deploy time: RBAC is checked → cost attributed to team → audit entry written. There is no way to bypass this.
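To make the visibility and allowed_callers semantics concrete, here is a pure-Python sketch of the access decision. The function and its exact rules are assumptions drawn from the config keys above, not AgentBreeder source:

```python
# Illustrative access check; an assumption about the semantics of the
# access: block above, not actual AgentBreeder code.
def may_call(access, caller_team):
    visibility = access.get("visibility", "private")
    if visibility == "public":
        return True
    if visibility == "team":
        allowed = access.get("allowed_callers", [])
        return f"team:{caller_team}" in allowed
    return False  # private: no cross-team callers

access = {
    "visibility": "team",
    "allowed_callers": ["team:engineering", "team:customer-success"],
}
may_call(access, "engineering")   # → True
may_call(access, "sales")         # → False
```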
Track Costs
agentbreeder list costs --group-by team
agentbreeder list costs --group-by agent
agentbreeder list costs --group-by model

Every LLM call is logged automatically with token counts, model, team, agent, and timestamp.
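Conceptually, --group-by is ordinary aggregation over those call records. A stdlib sketch with made-up records and a made-up price table (illustrative only; real pricing and log schemas will differ):

```python
from collections import defaultdict

# Hypothetical call-log records; the fields mirror what the doc says is logged.
calls = [
    {"team": "eng", "agent": "support-agent", "model": "gpt-4o", "tokens": 1200},
    {"team": "eng", "agent": "support-agent", "model": "gpt-4o", "tokens": 800},
    {"team": "sales", "agent": "quote-bot", "model": "gpt-4o", "tokens": 500},
]
PRICE_PER_1K = {"gpt-4o": 0.005}   # made-up price, for illustration only

def costs_by(calls, key):
    """Sum per-call cost grouped by any logged field (team, agent, model)."""
    totals = defaultdict(float)
    for c in calls:
        totals[c[key]] += c["tokens"] / 1000 * PRICE_PER_1K[c["model"]]
    return dict(totals)

costs_by(calls, "team")   # eng: 2000 tokens, sales: 500 tokens
```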
Use the Git Workflow
agentbreeder submit agent.yaml --title "Update support agent prompt"
agentbreeder review list
agentbreeder review approve 42
agentbreeder review reject 42 --comment "Needs guardrails for PII"
agentbreeder publish 42   # merge, bump version, update registry

Run Evaluations
agentbreeder eval run --agent support-agent --dataset golden-test-cases.json
agentbreeder eval results --agent support-agent

Use Agent Templates
agentbreeder template list
agentbreeder template use customer-support --name my-support-agent
agentbreeder template create --from agent.yaml --name "My Template"

Teardown a Deployed Agent
agentbreeder teardown support-agent # with confirmation
agentbreeder teardown support-agent --force   # no confirmation

Stops the agent, removes the container, archives the registry entry. Audit log is preserved.
Use AgentBreeder in CI/CD
GitHub Actions
# .github/workflows/deploy-agent.yml
name: Deploy Agent
on:
  push:
    paths: ['agents/support-agent/**']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate
        run: docker run --rm -v $PWD:/work -w /work rajits/agentbreeder-cli validate agents/support-agent/agent.yaml
      - name: Deploy
        run: docker run --rm -v $PWD:/work -w /work -e GOOGLE_APPLICATION_CREDENTIALS=/work/sa.json rajits/agentbreeder-cli deploy agents/support-agent/agent.yaml --target cloud-run

GitLab CI
deploy-agent:
  image: rajits/agentbreeder-cli:latest
  script:
    - agentbreeder validate agents/support-agent/agent.yaml
    - agentbreeder deploy agents/support-agent/agent.yaml --target cloud-run

Run the Platform with Docker Compose
# Standalone — pre-built images, no repo clone needed
curl -O https://raw.githubusercontent.com/agentbreeder/agentbreeder/main/deploy/docker-compose.standalone.yml
docker compose -f docker-compose.standalone.yml up -d
# From source
docker compose -f deploy/docker-compose.yml up -d

| Service | URL |
|---|---|
| Dashboard | http://localhost:3001 |
| API | http://localhost:8000 |
| API Docs | http://localhost:8000/docs |
Default login: admin@agentbreeder.local / plant
Warning
Change the default password before exposing this to a network.
Troubleshooting
"agentbreeder: command not found"
pip show agentbreeder        # check if installed
python -m cli.main --help    # if installed but not in PATH
pip install agentbreeder     # reinstall

"Validation failed: unknown framework"
Supported values: langgraph, openai_agents, claude_sdk, crewai, google_adk, custom
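The cause is usually a typo in framework:. A hypothetical check mirroring what the validator presumably does (only the set of names comes from the doc; the helper itself is illustrative):

```python
# Hypothetical helper mirroring the CLI's framework check; not AgentBreeder code.
SUPPORTED_FRAMEWORKS = {
    "langgraph", "openai_agents", "claude_sdk",
    "crewai", "google_adk", "custom",
}

def validate_framework(name):
    if name not in SUPPORTED_FRAMEWORKS:
        raise ValueError(f"unknown framework: {name!r}")
    return name

validate_framework("langgraph")   # ok
# validate_framework("autogen")   # would raise; wrap AutoGen code as framework: custom
```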
"RBAC check failed"
The deploying user must belong to the team in agent.yaml. Check team: matches your team membership.
"Container build failed"
docker info # check Docker is running
agentbreeder deploy agent.yaml --target local --dry-run   # inspect generated Dockerfile

"Deploy rolled back"
The 8-step pipeline is atomic — any failure rolls back everything. Check which step failed:
agentbreeder status my-agent
agentbreeder logs my-agent

Dashboard won't start
The dashboard needs the API running first:
docker run -d -p 8000:8000 rajits/agentbreeder-api
docker run -d -p 3001:3001 rajits/agentbreeder-dashboard