Providers

How to add OpenAI-compatible LLM providers (Nvidia NIM, Groq, Together, Fireworks, OpenRouter, …) to AgentBreeder.

AgentBreeder ships with a catalog of OpenAI-compatible providers so you can swap LLM backends with one line in agent.yaml. Most "new" providers (Nvidia NIM, Groq, Together, Fireworks, DeepInfra, Cerebras, Hyperbolic, Moonshot/Kimi, OpenRouter, …) speak the OpenAI Chat Completions wire format — the only differences are the base URL and the env var that holds the API key. The catalog captures both.

# agent.yaml
model:
  primary: nvidia/meta-llama-3.1-405b-instruct   # any catalog provider
  fallback: groq/mixtral-8x7b-32768              # any other catalog provider

When the engine sees <provider>/<model>, it looks <provider> up in the catalog, reads the API key from the env var declared on that entry, and constructs an OpenAI-compatible client pointed at the right URL. No new Python class required.
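
In practice, the client the engine builds is just a standard OpenAI SDK client with a different base_url and key. A minimal sketch, assuming the openai Python package and the nvidia catalog entry shown below (the real construction lives under engine/providers/ and may differ in structure):

# Rough sketch of what the engine does for a catalog provider; not the actual engine code.
import os
from openai import OpenAI

entry = {  # mirrors the "nvidia" catalog entry
    "base_url": "https://integrate.api.nvidia.com/v1",
    "api_key_env": "NVIDIA_API_KEY",
}

client = OpenAI(
    base_url=entry["base_url"],
    api_key=os.environ[entry["api_key_env"]],  # deploy fails fast if this is unset
)

resp = client.chat.completions.create(
    model="meta-llama-3.1-405b-instruct",  # the part after "nvidia/" in agent.yaml
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)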


Built-in catalog

The presets ship in engine/providers/catalog.yaml.

| Provider   | Base URL                               | API key env var    |
|------------|----------------------------------------|--------------------|
| nvidia     | https://integrate.api.nvidia.com/v1    | NVIDIA_API_KEY     |
| openrouter | https://openrouter.ai/api/v1           | OPENROUTER_API_KEY |
| moonshot   | https://api.moonshot.cn/v1             | MOONSHOT_API_KEY   |
| groq       | https://api.groq.com/openai/v1         | GROQ_API_KEY       |
| together   | https://api.together.xyz/v1            | TOGETHER_API_KEY   |
| fireworks  | https://api.fireworks.ai/inference/v1  | FIREWORKS_API_KEY  |
| deepinfra  | https://api.deepinfra.com/v1/openai    | DEEPINFRA_API_KEY  |
| cerebras   | https://api.cerebras.ai/v1             | CEREBRAS_API_KEY   |
| hyperbolic | https://api.hyperbolic.xyz/v1          | HYPERBOLIC_API_KEY |

List them at runtime:

agentbreeder provider list

The hand-written openai, anthropic, google, ollama, and litellm providers are unchanged. Use the catalog for everything else.


The four onboarding paths

There are exactly four ways to bring a new provider online, ordered by how much code you have to write.

Path A — Built-in preset (zero code)

If a provider ships in the catalog, just set the env var and reference it from agent.yaml:

export NVIDIA_API_KEY=nvapi-...
model:
  primary: nvidia/meta-llama-3.1-405b-instruct

Test the connection without deploying:

agentbreeder provider test nvidia

Configure from the dashboard

Prefer the UI? Open /models in the dashboard. The OpenAI-Compatible Catalog table lists every preset with a Configure button:

  1. Click Configure on the row for the provider you want to enable.
  2. Paste the API key value into the password input. The value is sent to POST /api/v1/secrets and written to your workspace's configured secrets backend (env / keychain / AWS / GCP / Vault) under the deterministic key <provider>/api-key. The plaintext value never persists in the database.
  3. The row flips to a green ✓ Configured badge. The same key is now resolvable at deploy time exactly as if you had set the env var manually.

The button is gated by RBAC — viewers see a disabled button with a "Requires deployer role" tooltip, and the API responds with 403 if a viewer's token reaches it. Deployer or admin role is required to write secrets.
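
If you script workspace setup instead of clicking through the UI, the same endpoint can be called directly. The request body below is an illustrative guess; only the path, the <provider>/api-key naming, and the role requirement come from the behaviour described above.

# Illustrative only: the JSON field names are assumed, not taken from an API reference.
curl -X POST https://<your-workspace>/api/v1/secrets \
  -H "Authorization: Bearer $AGENTBREEDER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"key": "nvidia/api-key", "value": "nvapi-..."}'
# Viewer tokens receive 403; deployer or admin role is required.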

The /models page also reserves tabs for Gateways (LiteLLM, OpenRouter — Track H, #164) and Local runtimes, plus a Sync button that lights up with Track G (#163). Until those tracks land they render disabled with explanatory tooltips.

Path B — User-local entry (no upstream change)

Adding a private/internal provider — a self-hosted vLLM, an internal mirror, a partner endpoint — uses the user-local catalog at ~/.agentbreeder/providers.local.yaml. User-local entries take precedence over built-in ones with the same name (so you can override nvidia to point at an internal proxy).

agentbreeder provider add my-vllm \
  --type openai_compatible \
  --base-url https://vllm.internal.company.com/v1 \
  --api-key-env COMPANY_VLLM_KEY

export COMPANY_VLLM_KEY=...
agentbreeder provider test my-vllm
# agent.yaml
model:
  primary: my-vllm/llama-3.1-70b-instruct

Remove it later:

agentbreeder provider remove my-vllm

The user-local YAML is created with 0600 permissions, but it does not contain secrets — only env-var names. The actual API key lives in your environment, never on disk.
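
For reference, the entry written by the provider add command above looks roughly like this, assuming the user-local file follows the same schema as the built-in catalog excerpt shown later on this page:

# ~/.agentbreeder/providers.local.yaml (sketch; only env-var names, never key material)
my-vllm:
  type: openai_compatible
  base_url: https://vllm.internal.company.com/v1
  api_key_env: COMPANY_VLLM_KEY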

Path C — Promote a user-local entry to a built-in preset

If your user-local entry would help others (e.g. a public provider AgentBreeder doesn't ship yet), promote it:

agentbreeder provider publish my-vllm

The CLI prints the YAML snippet to append to engine/providers/catalog.yaml and explains how to open the PR.

Automatic PR opening is on the roadmap — for now, the command prints the patch and exits with code 2. Copy the snippet into a PR by hand against agentbreeder/agentbreeder.

Path D — Truly novel API shape (rare)

If a provider speaks something other than OpenAI Chat Completions (e.g. a brand-new RPC dialect), you need a new Python class under engine/providers/. Open a PR with:

  1. engine/providers/<name>_provider.py implementing ProviderBase
  2. A new ProviderType enum value
  3. Registration in _PROVIDER_CLASSES in engine/providers/registry.py
  4. Unit tests against mocked HTTP
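
A hypothetical skeleton of step 1, just to show the shape of the work; the actual ProviderBase interface and import path live in engine/providers/ and may differ:

# engine/providers/acme_provider.py -- hypothetical skeleton; method names and the
# import path are assumptions, check the real ProviderBase before copying this.
from engine.providers.base import ProviderBase  # assumed module path


class AcmeProvider(ProviderBase):
    """Adapter for a made-up 'Acme' RPC dialect that is not OpenAI-compatible."""

    def __init__(self, api_key: str, base_url: str):
        self.api_key = api_key
        self.base_url = base_url

    def complete(self, model: str, messages: list[dict]) -> str:
        # Translate AgentBreeder chat messages into Acme's wire format, call the
        # endpoint, and map the response back. Left unimplemented on purpose.
        raise NotImplementedError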

Most "new" providers turn out to be OpenAI-compatible underneath, so reach for Paths A–C first.


CLI reference

agentbreeder provider list                # built-in + user-local presets
agentbreeder provider add <name> --type openai_compatible \
  --base-url URL --api-key-env ENV         # user-local entry
agentbreeder provider remove <name>       # remove user-local entry
agentbreeder provider test <name>         # GET /models against the endpoint
agentbreeder provider publish <name>      # print PR patch for upstream

For interactive setup of the legacy hand-written providers (openai, anthropic, google, ollama, litellm), use the existing agentbreeder provider add openai flow — no --type flag.


How resolution works

When the deploy pipeline parses model.primary: nvidia/<model>, the resolver:

  1. Splits on the first / into (provider, model).
  2. Looks provider up in the merged catalog (built-in + user-local).
  3. If found, constructs an OpenAICompatibleProvider with base_url from the catalog and the API key from the entry's api_key_env.
  4. If the env var is unset at deploy time, the deploy fails fast with a clear "set <ENV_VAR>" message.

Refs that don't match a catalog entry (gpt-4o, claude-sonnet-4, …) fall through to the existing resolver path — the catalog is purely additive.
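
A condensed sketch of that resolution order, assuming the merged catalog is available as a plain dict keyed by provider name (the engine's real resolver may differ in detail):

# Sketch of the resolution rules above, not the engine's actual resolver.
import os

def resolve(model_ref: str, catalog: dict):
    if "/" not in model_ref:
        return None                                  # e.g. "gpt-4o": existing resolver path
    provider, model = model_ref.split("/", 1)        # split on the FIRST slash
    entry = catalog.get(provider)
    if entry is None:
        return None                                  # unknown prefix also falls through
    api_key = os.environ.get(entry["api_key_env"])
    if not api_key:
        raise RuntimeError(f"set {entry['api_key_env']} before deploying")
    return {"base_url": entry["base_url"], "api_key": api_key, "model": model}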


Special: OpenRouter headers

OpenRouter accepts optional HTTP-Referer and X-Title headers that attribute traffic to your app. The catalog entry sets them automatically on every request:

# engine/providers/catalog.yaml (excerpt)
openrouter:
  type: openai_compatible
  base_url: https://openrouter.ai/api/v1
  api_key_env: OPENROUTER_API_KEY
  default_headers:
    HTTP-Referer: https://agentbreeder.io
    X-Title: AgentBreeder

When you add a user-local entry that also needs custom headers, edit ~/.agentbreeder/providers.local.yaml directly — the CLI doesn't expose --default-headers yet (filed for follow-up).
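
Assuming the user-local file uses the same schema as the catalog excerpt above, the manual edit is a matter of adding a default_headers map to the entry:

# ~/.agentbreeder/providers.local.yaml (sketch)
my-vllm:
  type: openai_compatible
  base_url: https://vllm.internal.company.com/v1
  api_key_env: COMPANY_VLLM_KEY
  default_headers:              # added by hand; no CLI flag for this yet
    X-Internal-Team: agents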


Model lifecycle (Track G — issue #163)

Once a provider has an api-key configured, AgentBreeder can auto-discover the models that provider exposes and track them through their full lifecycle.

Lifecycle states

Every row in the models table has a status:

| Status     | Meaning |
|------------|---------|
| active     | Provider currently lists this model. Default for new entries. |
| beta       | Operator-marked. Reserved for early-access model lines. |
| deprecated | Either the upstream provider stopped listing the model, or an operator manually deprecated it. Still callable, but the dashboard surfaces a warning. |
| retired    | Has been deprecated for ≥ 30 days without re-appearing in any sync. The row is preserved for audit + cost-attribution; calls fail fast. |

Sync — three ways

# CLI — sync every configured provider
agentbreeder model sync

# CLI — restrict to one provider
agentbreeder model sync --provider nvidia

# CLI — operator override
agentbreeder model deprecate gpt-3.5-turbo --replacement gpt-4o-mini

# Dashboard — the Sync button on /models triggers the same flow

The API endpoint is POST /api/v1/models/sync (deployer role required). It accepts an optional {providers: ["openai", "groq", ...]} body; with an empty body, every provider that has an api-key in the environment (catalog presets + first-class Anthropic/Google/OpenAI + DB-configured providers with a base_url) is synced.
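
For scripts, the equivalent HTTP call looks like this; the host and auth header are placeholders for your workspace:

# Sync only nvidia and groq; send an empty JSON body to sync every configured provider.
curl -X POST https://<your-workspace>/api/v1/models/sync \
  -H "Authorization: Bearer $AGENTBREEDER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"providers": ["nvidia", "groq"]}'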

Diff rules

For each provider, the lifecycle service:

  1. Queries discovery (/models for OpenAI-compatible, /v1beta/models?key=… for Google, hard-coded list for Anthropic).
  2. Loads every existing row in the registry for that provider.
  3. New model in the discovery payload → insert with status="active", set discovered_at and last_seen_at, emit model.added audit event.
  4. Still-present model → bump last_seen_at. If it was deprecated/retired and re-appeared, flip back to active (provider un-retired it).
  5. Absent model with status="active" → flip to deprecated, set deprecated_at, emit model.deprecated.
  6. Absent model already deprecated for ≥ 30 days → flip to retired, emit model.retired.

A discovery failure (network down, bad api-key) is isolated per provider — the failing adapter records the error on its SyncProviderResult and the others run normally. This prevents a transient outage at one provider from cascading into a mass deprecation across the registry.
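
A simplified sketch of the per-provider diff described above; field and function names here are illustrative, and the real service lives in registry/model_lifecycle.py:

# Illustrative sketch of the diff rules, not the actual lifecycle service.
from datetime import datetime, timedelta, timezone

RETIRE_AFTER = timedelta(days=30)

def diff_provider(discovered: set, existing: dict, now=None):
    """discovered: model IDs returned by the provider; existing: registry rows by ID."""
    now = now or datetime.now(timezone.utc)
    events = []
    for model_id in discovered - set(existing):
        events.append(("model.added", model_id))              # rule 3: insert as active
    for model_id, row in existing.items():
        if model_id in discovered:
            row["last_seen_at"] = now                         # rule 4: still present
            if row["status"] in ("deprecated", "retired"):
                row["status"] = "active"                      # provider un-retired it
        elif row["status"] == "active":
            row["status"], row["deprecated_at"] = "deprecated", now
            events.append(("model.deprecated", model_id))     # rule 5
        elif row["status"] == "deprecated" and now - row["deprecated_at"] >= RETIRE_AFTER:
            row["status"] = "retired"
            events.append(("model.retired", model_id))        # rule 6
    return events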

Audit events

Every state transition writes to the audit log under resource_type="model":

| Action           | Triggered by |
|------------------|--------------|
| model.added      | New model appears in a sync. |
| model.deprecated | First sync that no longer lists the model, or agentbreeder model deprecate. |
| model.retired    | Sync that observes ≥ 30 days of continuous absence. |

The model.deprecated event includes a reason field (absent_from_discovery for the auto path, manual for the CLI/API override).
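
An audit entry for the auto path might therefore look roughly like this; field names beyond action, resource_type, and reason are illustrative:

{
  "action": "model.deprecated",
  "resource_type": "model",
  "resource_id": "groq/mixtral-8x7b-32768",
  "reason": "absent_from_discovery"
}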

Daily cron — out of scope for this PR

The current PR ships the sync endpoint + CLI; running it on a schedule lands with the cloud workspaces track. For now, the simplest approach is a system cron in your workspace:

# /etc/cron.d/agentbreeder-model-sync (cron.d entries include a user field)
0 4 * * * root  agentbreeder model sync >> /var/log/agentbreeder-sync.log 2>&1

TODO: Cloud workspaces will get a managed daily cron once the workspace primitive lands (Track A). See registry/model_lifecycle.py for the service the cron will call.

Anthropic — curated list

Anthropic does not expose a public /models endpoint, so AgentBreeder ships a hard-coded list at engine/providers/discovery.py::ANTHROPIC_CURATED_MODELS. Update this constant when Anthropic releases new models. The lifecycle diff still runs against it — adding a model to the curated list emits a model.added event on the next sync.
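
The constant is just a list of model IDs; this excerpt is illustrative, and the list shipped in discovery.py is authoritative:

# engine/providers/discovery.py (illustrative excerpt; the in-repo list is the source of truth)
ANTHROPIC_CURATED_MODELS = [
    "claude-sonnet-4-20250514",
    "claude-3-7-sonnet-20250219",
    "claude-3-5-haiku-20241022",
]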

