Polyglot Agents — TypeScript, Go, Python
Build and deploy AgentBreeder agents in TypeScript, Go, or Python. Full RAG, memory, tools, and A2A parity — wired by the platform, not written by you.
Polyglot Agents
AgentBreeder agents can be written in Python, TypeScript, or Go today. Every language gets the same first-class experience: automatic RAG injection, conversation memory, tool execution, A2A communication, tracing, and cost attribution — all wired by the platform, none of it written by you.
Language status (v2.0)
- Python — Tier 1, shipped (LangGraph, CrewAI, Claude SDK, OpenAI Agents, Google ADK, Custom).
- TypeScript / Node.js — Tier 1, shipped (Vercel AI SDK, Mastra, LangChain.js, OpenAI Agents TS, Custom).
- Go — Tier 2 SDK shipped in v2.0 — see Go SDK.
- Kotlin/JVM, Rust, .NET — roadmap. Tracked at #188, #189, #190. The examples below show the planned shape; the `--language rust` / `--language kotlin` / `--language csharp` flags are reserved but not yet wired.
How It Works
When you run `agentbreeder deploy`, the engine:
- Reads your `runtime:` block to select the right build system
- Injects a framework-specific server template alongside your agent code
- Builds a container image (Node 20, Rust 1.77, Go 1.22, or Python 3.11)
- Injects an AgentBreeder Platform Sidecar (APS) next to your container
- Deploys both containers together — same pod, same task, same compose network
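Conceptually, the result of these steps for a local deploy resembles a two-service compose file (illustrative only; the engine generates the real definition, and the service names and image tags here are assumptions, not generated output):

```yaml
services:
  my-agent:
    image: my-agent:1.0.0          # built from your code + the server template
    ports: ["8080:8080"]
    environment:
      APS_URL: http://aps:9001     # the agent talks to the sidecar locally
  aps:
    image: agentbreeder/aps:2.0    # hypothetical tag; injected, never hand-written
```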
The APS sidecar handles everything the platform needs to do: RAG retrieval, memory persistence, tool execution, A2A proxying, and OpenTelemetry tracing. LLM cost is tracked automatically by the LiteLLM gateway using the per-agent virtual key — no explicit cost recording call needed. Your agent code calls the sidecar over a local HTTP connection on port 9001. You write zero infrastructure code.
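The generated server template is what actually talks to the sidecar, but the shape of a call looks roughly like this (a sketch: `ragSearchUrl` and `ragSearch` are illustrative helpers, and the response shape is an assumption; the endpoint and query parameters follow the `GET aps/rag/search` contract documented below):

```typescript
// Build the sidecar request URL from the documented query parameters.
// Factored out so the URL construction is easy to test in isolation.
export function ragSearchUrl(
  base: string,
  query: string,
  indexIds: string[],
  topK = 5,
): string {
  const params = new URLSearchParams({
    query,
    index_ids: indexIds.join(','),
    top_k: String(topK),
  })
  return `${base}/aps/rag/search?${params}`
}

// Illustrative call; the real server template does this for you.
export async function ragSearch(query: string, indexIds: string[]) {
  const url = ragSearchUrl(
    process.env.APS_URL ?? 'http://localhost:9001',
    query,
    indexIds,
  )
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.APS_TOKEN}` },
  })
  return res.json() // ranked chunks from ChromaDB or Neo4j
}
```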
APS vs. Track J sidecar
This page describes the Polyglot APS sidecar (TypeScript-runtime concept from PR #136), which lives in the agent container's pod and exposes :9001. AgentBreeder v2.0 also ships a separate, broader Track J sidecar — a Go binary that fronts the agent on :8080 and handles bearer auth, OTel, cost emission, A2A, MCP passthrough, and guardrails for all languages and frameworks. The two are complementary: APS is opinionated polyglot scaffolding; Track J is the cross-cutting concerns layer auto-injected based on agent.yaml (guardrails: / MCP tools: / a2a:). Plan: the APS responsibilities collapse into the Track J sidecar over the v2.x line.
```
Your agent.ts / main.rs / main.go
        ↓ (imported by server template)
Agent container ←→ APS sidecar ←→ Platform (RAG, memory, tools, registry)
     :8080            :9001
```

Supported Languages and Frameworks
| Language | Frameworks | Status |
|---|---|---|
| Python | LangGraph, CrewAI, Claude SDK, OpenAI Agents, Google ADK, Custom | Tier 1 — shipped |
| TypeScript / Node.js | Vercel AI SDK, Mastra, LangChain.js, OpenAI Agents TS, Custom | Tier 1 — shipped |
| Go | Custom (with the Go SDK) | Tier 2 — shipped in v2.0 |
| Kotlin / JVM | langchain4j, spring_ai, koog, anthropic_java_sdk | Roadmap — #188 |
| Rust | rig, swiftide, anthropic_rust_sdk | Roadmap — #189 |
| C# / .NET | semantic_kernel, autogen_net, anthropic_dotnet_sdk | Roadmap — #190 |
No Python in TypeScript/Rust/Go stacks. The Node.js APS (Phase 1) and Go binary APS (Phase 2) contain no Python. A pure TypeScript stack is containers all the way down.
Quickstart
1. Scaffold a new agent
TypeScript / Node.js:

```shell
agentbreeder init --language node --framework vercel-ai --name my-agent
cd my-agent
```

Rust — coming soon (v2.x): Rust scaffolding is tracked at #189. The shape below is the planned interface; until then, implement Runtime Contract v1 by hand and declare `framework: custom`, `language: rust` (Tier-3 BYO).

```shell
# Once #189 ships:
agentbreeder init --lang rust --framework rig --name my-agent
cd my-agent
```

Go:

```shell
agentbreeder init --lang go --framework custom my-agent
cd my-agent
```

See the dedicated Go SDK page for the full surface.

Python:

```shell
agentbreeder init --language python --framework langgraph --name my-agent
cd my-agent
```

2. Look at what was generated
TypeScript:

```
my-agent/
├── agent.yaml       ← platform config
├── agent.ts         ← your agent logic (this is all you write)
├── package.json     ← vercel-ai + @agentbreeder/aps-client pre-included
├── tsconfig.json
├── .env.example
└── tests/
    └── agent.test.ts
```

Rust:

```
my-agent/
├── agent.yaml
├── src/
│   └── main.rs      ← your agent logic
├── Cargo.toml       ← rig + agentbreeder-aps pre-included
└── tests/
    └── integration_test.rs
```

Go:

```
my-agent/
├── agent.yaml
├── agent.go         ← your agent logic
├── go.mod           ← agentbreeder/aps-client pre-included
└── tests/
    └── agent_test.go
```

3. Write your agent logic
This is the only file you need to edit:
Vercel AI SDK:

```typescript
// agent.ts
import { openai } from '@ai-sdk/openai'

export const model = openai('gpt-4o')

export const systemPrompt = `
You are a helpful assistant for ${process.env.AGENT_NAME}.
Answer concisely and accurately.
`

// Optional: define additional tools beyond what AgentBreeder injects
export const tools = {}
```

Mastra:

```typescript
// agent.ts
import { Agent } from '@mastra/core'
import { openai } from '@mastra/openai'

export const agent = new Agent({
  name: process.env.AGENT_NAME!,
  instructions: 'You are a helpful assistant.',
  model: openai('gpt-4o'),
})
```

LangChain.js:

```typescript
// agent.ts
import { ChatOpenAI } from '@langchain/openai'
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents'

export async function createAgent(tools: any[], systemPrompt: string) {
  const llm = new ChatOpenAI({ model: 'gpt-4o' })
  // createOpenAIFunctionsAgent expects a ChatPromptTemplate with an
  // agent_scratchpad placeholder, not a raw string.
  const prompt = ChatPromptTemplate.fromMessages([
    ['system', systemPrompt],
    ['human', '{input}'],
    ['placeholder', '{agent_scratchpad}'],
  ])
  const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt })
  return AgentExecutor.fromAgentAndTools({ agent, tools })
}
```

OpenAI Agents TS:

```typescript
// agent.ts
import { Agent } from 'openai-agents'

export const agent = new Agent({
  name: process.env.AGENT_NAME!,
  instructions: 'You are a helpful assistant.',
  model: 'gpt-4o',
})
```

4. Configure agent.yaml
```yaml
name: my-agent
version: 1.0.0
team: engineering
owner: you@company.com

runtime:
  language: node
  framework: vercel-ai
  version: "20"

model:
  primary: gpt-4o

# Optional: wire a knowledge base
knowledge_bases:
  - ref: kb/product-docs

# Optional: wire MCP tools
tools:
  - ref: tools/my-mcp-server

deploy:
  cloud: local
```

5. Deploy

```shell
agentbreeder deploy
# → builds container, injects APS sidecar, starts locally
# → agent live at http://localhost:8080
```

The runtime: Block Reference
```yaml
runtime:
  language: node        # Required. Enum: python | node | rust | go
  framework: vercel-ai  # Required. See supported frameworks table above.
  version: "20"         # Optional. Language runtime version.
  entrypoint: agent.ts  # Optional. Default: agent.ts (node) | agent.py (python)
                        #           main.rs (rust) | agent.go (go)
```

Python agents don't need to migrate. `framework: langgraph` continues to work unchanged. The `runtime:` block is only needed for non-Python agents or when you want to explicitly pin the Python version.
RAG (Knowledge Bases)
RAG works identically across all languages. Define knowledge bases in agent.yaml — the platform wires retrieval automatically via the APS sidecar. You don't write any retrieval code.
```yaml
# agent.yaml
knowledge_bases:
  - ref: kb/product-docs
  - ref: kb/support-history
```

At runtime, the server template:
- Calls `GET aps/rag/search?query={userMessage}&index_ids={kbIds}&top_k=5`
- Receives ranked chunks from ChromaDB or Neo4j
- Prepends the context to the system prompt before calling the LLM
Context injection pattern per framework:
| Framework | How context is injected |
|---|---|
| Vercel AI SDK | Prepended to system in generateText() / streamText() |
| Mastra | Added to agent's context window |
| LangChain.js | Passed as retrieval result to chain |
| OpenAI Agents TS | Prepended to system instructions |
| Rig (Rust) | Added to agent preamble |
| Go custom | Prepended to system message |
No code needed in your agent.ts. If a knowledge base is wired in agent.yaml, context is injected automatically on every /invoke.
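As an illustration, the injection step for the "prepend to system prompt" frameworks amounts to something like this (a sketch; the chunk shape, separator, and `injectContext` helper are assumptions, not the template's exact output):

```typescript
interface RagChunk {
  source: string // e.g. the knowledge base document the chunk came from
  text: string
}

// Prepend retrieved chunks to the base system prompt before the LLM call.
export function injectContext(systemPrompt: string, chunks: RagChunk[]): string {
  if (chunks.length === 0) return systemPrompt
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join('\n')
  return `Use the following retrieved context when relevant:\n${context}\n\n${systemPrompt}`
}
```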
Memory (Conversation History)
Conversation history persists automatically across calls using the thread_id parameter.
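In TypeScript, the same two-call flow might look like this (a sketch: `invokePayload` and `converse` are illustrative helpers; the request and response shapes follow the curl examples in this section):

```typescript
type Message = { role: 'user' | 'assistant'; content: string }

// Build the /invoke payload; omit thread_id on the first call so the
// platform creates a new thread and returns its id.
export function invokePayload(
  messages: Message[],
  threadId?: string,
): { input: { messages: Message[] }; thread_id?: string } {
  const payload: { input: { messages: Message[] }; thread_id?: string } = {
    input: { messages },
  }
  if (threadId) payload.thread_id = threadId
  return payload
}

// Illustrative two-call flow against a locally deployed agent.
export async function converse(baseUrl: string) {
  const first = await fetch(`${baseUrl}/invoke`, {
    method: 'POST',
    body: JSON.stringify(
      invokePayload([{ role: 'user', content: 'My name is Alice' }]),
    ),
  }).then(r => r.json())

  // Reuse the returned thread_id so the agent remembers the first turn.
  return fetch(`${baseUrl}/invoke`, {
    method: 'POST',
    body: JSON.stringify(
      invokePayload([{ role: 'user', content: 'What is my name?' }], first.thread_id),
    ),
  }).then(r => r.json())
}
```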
```shell
# First call — creates a new thread
curl -X POST http://localhost:8080/invoke \
  -d '{"input": {"messages": [{"role": "user", "content": "My name is Alice"}]}}'
# → { "output": "Hello Alice!", "thread_id": "abc-123" }

# Second call — same thread, agent remembers Alice
curl -X POST http://localhost:8080/invoke \
  -d '{"input": {"messages": [{"role": "user", "content": "What is my name?"}]},
       "thread_id": "abc-123"}'
# → { "output": "Your name is Alice.", "thread_id": "abc-123" }
```

Configure the memory backend in agent.yaml:

```yaml
memory:
  - ref: memory/redis-store     # Redis (default for local)
  # or
  - ref: memory/postgres-store  # PostgreSQL (recommended for production)
```

Tools and MCP Servers
Tools registered in the platform registry are injected automatically — no code needed.
```yaml
# agent.yaml
tools:
  - ref: tools/zendesk-mcp
  - ref: tools/order-lookup
  - ref: tools/web-search
```

At runtime, the APS sidecar exposes these tools via `GET aps/tools/list`. The server template passes them to the LLM in the framework's native tool format. When the LLM calls a tool, the template calls `POST aps/tools/execute` — the sidecar handles MCP protocol, retries, and error formatting.
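Under the hood, that wiring amounts to something like this sketch (the `RegistryTool` shape and the `POST aps/tools/execute` request body are assumptions; the endpoint names and the `AGENT_TOOLS_JSON` variable are documented on this page):

```typescript
// Shape assumed for entries in AGENT_TOOLS_JSON (illustrative).
interface RegistryTool {
  name: string
  description: string
  input_schema: Record<string, unknown>
}

export function parseRegistryTools(json: string): RegistryTool[] {
  return JSON.parse(json) as RegistryTool[]
}

// Wrap a registry tool so that calling it delegates to the sidecar.
export function toExecutableTool(t: RegistryTool, apsUrl: string) {
  return {
    name: t.name,
    description: t.description,
    execute: async (args: Record<string, unknown>) =>
      fetch(`${apsUrl}/aps/tools/execute`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ tool: t.name, arguments: args }),
      }).then(r => r.json()),
  }
}
```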
Adding inline tools (beyond the registry):
TypeScript:

```typescript
// agent.ts
import { tool } from 'ai'
import { z } from 'zod'

export const tools = {
  get_weather: tool({
    description: 'Get current weather for a city',
    parameters: z.object({ city: z.string() }),
    execute: async ({ city }) => {
      return fetch(`https://weather-api.com/${city}`).then(r => r.json())
    },
  }),
}
```

Rust (planned shape — #189):

```rust
// main.rs — additional tools beyond the registry
pub fn extra_tools() -> Vec<Tool> {
    vec![Tool {
        name: "get_weather",
        description: "Get current weather for a city",
        execute: |args| async { weather_api::get(&args["city"]).await },
    }]
}
```

Go:

```go
// agent.go
func ExtraTools() []agentbreeder.Tool {
	return []agentbreeder.Tool{
		{
			Name:        "get_weather",
			Description: "Get current weather for a city",
			Execute: func(ctx context.Context, args map[string]any) (any, error) {
				return weatherAPI.Get(ctx, args["city"].(string))
			},
		},
	}
}
```

Agent-to-Agent (A2A) Calls
Agents written in different languages can call each other. The APS sidecar handles routing and auth — language is transparent.
```yaml
# agent.yaml — wire a sub-agent as a tool
tools:
  - name: call_data_analyst
    type: a2a
    target: data-analysis-agent  # can be Python, TypeScript, Rust, Go
```

The sub-agent call routes through the APS:

```typescript
// Happens automatically — your agent.ts doesn't need this
const result = await aps.a2a.call('data-analysis-agent', { input: userQuery })
```

See A2A Protocol for multi-agent orchestration patterns.
MCP Server Authoring
Write and deploy MCP servers in any language using the same workflow as agents.
```shell
agentbreeder init --type mcp-server --language node --name my-tools
agentbreeder init --type mcp-server --language go --name my-tools
agentbreeder init --type mcp-server --language rust --name my-tools
```

mcp-server.yaml:
```yaml
name: my-search-tools
version: 1.0.0
type: mcp-server

runtime:
  language: node
  framework: mcp-ts
  version: "20"

transport: http

tools:
  - name: search_web
    description: "Search the web for current information"
    schema:
      type: object
      properties:
        query: { type: string }
        max_results: { type: integer, default: 10 }
      required: [query]
```

Write your tool implementations — that's the only file you need:
```typescript
// tools.ts
interface SearchArgs {
  query: string
  max_results?: number
}
interface SummarizeArgs {
  text: string
}

export async function search_web({ query, max_results = 10 }: SearchArgs) {
  const results = await fetch(`https://api.search.com?q=${query}&n=${max_results}`)
  return results.json()
}

export async function summarize({ text }: SummarizeArgs) {
  // your implementation
}
```

Deploy and register:

```shell
agentbreeder deploy
# MCP server registered in registry as tools/my-search-tools
# Reference from any agent: tools: [{ ref: tools/my-search-tools }]
```

Streaming
All language templates support streaming via SSE on POST /stream:
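Each streamed line is an SSE `data:` event carrying a JSON payload (see the wire example in this section). A minimal TypeScript consumer for those events might look like this (a sketch; `parseSSELine` and `collectTokens` are illustrative helpers, not part of any SDK):

```typescript
// Parse one SSE line into its JSON payload, or null for non-data lines.
export function parseSSELine(
  line: string,
): { token?: string; done?: boolean; thread_id?: string } | null {
  if (!line.startsWith('data: ')) return null
  return JSON.parse(line.slice('data: '.length))
}

// Accumulate streamed tokens from an iterable of raw SSE lines
// (e.g. split from the response body) until the done event.
export function collectTokens(lines: Iterable<string>): string {
  let text = ''
  for (const line of lines) {
    const evt = parseSSELine(line)
    if (evt?.token) text += evt.token
    if (evt?.done) break
  }
  return text
}
```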
```shell
curl -X POST http://localhost:8080/stream \
  -H "Accept: text/event-stream" \
  -d '{"input": {"messages": [{"role": "user", "content": "Write a poem"}]}}'
```

```
data: {"token": "Roses"}
data: {"token": " are"}
data: {"token": " red"}
data: {"done": true, "thread_id": "abc-123"}
```

Environment Variables
These are injected automatically by the deploy pipeline — you don't set them manually:
| Variable | Description |
|---|---|
| `AGENT_NAME` | Agent name from agent.yaml |
| `AGENT_VERSION` | Agent version |
| `AGENT_MODEL` | Primary model from `model.primary` |
| `AGENT_SYSTEM_PROMPT` | Resolved system prompt |
| `APS_URL` | APS sidecar URL (`http://localhost:9001`) |
| `APS_TOKEN` | Shared secret for APS auth |
| `KB_INDEX_IDS` | Comma-separated knowledge base IDs |
| `AGENT_TOOLS_JSON` | JSON array of registered tools |
Variables from deploy.env_vars and deploy.secrets in agent.yaml are also injected.
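Inside your agent code these read as plain environment variables; a small typed accessor is a common pattern (a sketch — the variable names come from the table above, while `loadAgentEnv` and the fallback values are assumptions for local runs):

```typescript
export interface AgentEnv {
  name: string
  model: string
  apsUrl: string
  kbIndexIds: string[]
}

// Read the platform-injected variables, with safe fallbacks for local runs.
export function loadAgentEnv(
  env: Record<string, string | undefined> = process.env,
): AgentEnv {
  return {
    name: env.AGENT_NAME ?? 'unnamed-agent',
    model: env.AGENT_MODEL ?? 'gpt-4o',
    apsUrl: env.APS_URL ?? 'http://localhost:9001',
    kbIndexIds: env.KB_INDEX_IDS ? env.KB_INDEX_IDS.split(',') : [],
  }
}
```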
Deploying to Cloud
The deploy.cloud field works identically for all languages — the deploy pipeline is language-agnostic above the container build step.
```yaml
deploy:
  cloud: aws             # aws | gcp | azure | kubernetes | local | claude-managed
  runtime: ecs-fargate   # optional — defaults per cloud
  region: us-east-1
  scaling:
    min: 1
    max: 10
  resources:
    cpu: "1"
    memory: "2Gi"
```

```shell
agentbreeder deploy --target aws
```

The APS sidecar is injected into the ECS task definition, Cloud Run job, or Kubernetes pod automatically. You don't configure it.
Framework Reference
TypeScript frameworks
| Framework | framework: value | npm package | Best for |
|---|---|---|---|
| Vercel AI SDK | vercel-ai | ai | Streaming-first apps, Next.js integration |
| Mastra | mastra | @mastra/core | Workflow-heavy agents, human-in-the-loop |
| LangChain.js | langchain-js | langchain | Teams migrating from Python LangChain |
| OpenAI Agents TS | openai-agents-ts | openai-agents | Multi-agent handoffs, OpenAI-native |
| DeepAgent | deepagent | deepagent | Reasoning-intensive tasks |
| Custom Node | custom | — | Any Node.js/TypeScript agent |
Rust frameworks (roadmap — #189)
| Framework | framework: value | crate | Best for |
|---|---|---|---|
| Rig | rig | rig-core | Performance-critical, low-latency agents |
| Custom | custom | — | Any Rust agent |
Go frameworks (shipped in v2.0 — see Go SDK)
| Framework | framework: value | module | Best for |
|---|---|---|---|
| Custom | custom | github.com/agentbreeder/agentbreeder/sdk/go/agentbreeder | Any Go agent — Tier-2 first-party SDK |
Migrating a Python Agent to TypeScript
If you have an existing Python agent and want to rewrite it in TypeScript:
- Run `agentbreeder init --language node --framework vercel-ai --name my-agent`
- Copy your `knowledge_bases`, `tools`, `model`, and `deploy` blocks from the old agent.yaml
- Change `framework: langgraph` to `runtime: {language: node, framework: vercel-ai}`
- Rewrite your agent logic in `agent.ts`
- `agentbreeder deploy` — same governance, same registry entry, same endpoint pattern
Your RAG indexes, MCP servers, and memory stores are shared — they don't need to be recreated.
FAQ
Do I need to install the APS sidecar manually?
No. The deploy pipeline injects it automatically based on your runtime.language. You never configure or deploy the sidecar directly.
Can I use a TypeScript framework not in the supported list?
Yes — use framework: custom and the CustomNodeTemplate wraps your server. You get basic /health and /invoke scaffolding; APS wiring for RAG, memory, and tools is available via @agentbreeder/aps-client. Open a GitHub issue to request an official template for your framework.
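A minimal custom template might look like this (a sketch of the `/health` and `/invoke` contract; `handleInvoke`, `createAgentServer`, and the echo behavior are illustrative, and the response schema is an assumption based on the examples in this doc):

```typescript
import { createServer, type Server } from 'node:http'
import { randomUUID } from 'node:crypto'

type InvokeRequest = {
  input: { messages: { role: string; content: string }[] }
  thread_id?: string
}

// Pure handler, factored out so it's easy to test without a running server.
export function handleInvoke(req: InvokeRequest): { output: string; thread_id: string } {
  const msgs = req.input.messages
  const last = msgs[msgs.length - 1]
  // A real agent would call the LLM here; we echo for illustration.
  return {
    output: `You said: ${last?.content ?? ''}`,
    thread_id: req.thread_id ?? randomUUID(),
  }
}

export function createAgentServer(): Server {
  return createServer((req, res) => {
    if (req.url === '/health') {
      res.writeHead(200, { 'Content-Type': 'application/json' })
      return res.end(JSON.stringify({ status: 'ok' }))
    }
    if (req.url === '/invoke' && req.method === 'POST') {
      let body = ''
      req.on('data', chunk => (body += chunk))
      req.on('end', () => {
        res.writeHead(200, { 'Content-Type': 'application/json' })
        res.end(JSON.stringify(handleInvoke(JSON.parse(body))))
      })
      return
    }
    res.writeHead(404)
    res.end()
  })
}

// To run locally: createAgentServer().listen(8080)
```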
Can a TypeScript agent call a Python agent?
Yes — via A2A. Wire the Python agent as a tool with type: a2a. The APS handles the cross-language call. See A2A Protocol.
Does streaming work for all languages?
Yes. All templates implement POST /stream returning SSE. Framework-specific streaming (Vercel AI's streamText, Rig's stream iterator, Go's http.Flusher) is handled inside the template.
What's the performance overhead of the APS sidecar?
The APS sidecar runs in the same pod/task as your agent — communication is localhost. Typical latency is <1ms per call. The Go binary APS (Phase 2) is a static binary with no JVM/Python startup overhead.
Can I use my own Dockerfile instead of the generated one?
Yes — use framework: custom and include a Dockerfile in your agent directory. The engine uses your Dockerfile as-is and still injects the APS sidecar.