LLM Providers

Sinaptic® DROID+ supports 7 LLM providers out of the box — 4 cloud and 3 local. All providers are available in every edition (Community, Pro, Enterprise).

Supported Providers

| Provider | Type | API Key Required | Default Base URL |
|----------|------|------------------|------------------|
| OpenAI | Cloud | Yes | https://api.openai.com/v1 |
| Anthropic | Cloud | Yes | https://api.anthropic.com |
| Google Gemini | Cloud | Yes | https://generativelanguage.googleapis.com/v1beta |
| Grok (xAI) | Cloud | Yes | https://api.x.ai/v1 |
| Ollama | Local | No | http://localhost:11434/v1 |
| LM Studio | Local | No | http://localhost:1234/v1 |
| llama.cpp | Local | No | http://localhost:8080/v1 |

How It Works

Sinaptic® DROID+ exposes an OpenAI-compatible API to your clients. Internally, it translates requests to the appropriate provider format. This means:

  • Your client code uses the standard OpenAI SDK regardless of which backend model is running
  • You can switch between providers by changing a YAML config — no code changes
  • Different agents can use different providers simultaneously

```
Client (OpenAI SDK) → DROID+ API → [OpenAI | Anthropic | Gemini | Grok | Ollama | ...]
```
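
For example, a client built on the official openai Python package can talk to a DROID+ agent without any provider-specific code. The sketch below is illustrative, not a documented contract: the endpoint address (http://localhost:8000/v1) and the placeholder API key are assumptions, so substitute the host and port your instance actually listens on.

```python
# Minimal sketch: calling a DROID+-hosted model via the standard OpenAI SDK.
# Assumptions (not from this page): DROID+ listens at http://localhost:8000/v1,
# and no client-side key is needed because provider keys live in droid.yaml.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical DROID+ endpoint
    api_key="placeholder",                # DROID+ holds the real provider keys
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```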

Configuration

Set the primary provider in droid.yaml:

```yaml
llm:
  provider: "openai"            # Default provider
  api_key: "${OPENAI_API_KEY}"
  default_model: "gpt-4o-mini"
```

Configure additional providers alongside:

```yaml
anthropic:
  api_key: "${ANTHROPIC_API_KEY}"

gemini:
  api_key: "${GEMINI_API_KEY}"

ollama:
  base_url: "http://localhost:11434/v1"
```

Per-Agent Model Selection

Each agent can use any configured provider and model:

```yaml
# Agent using OpenAI (no provider set, so it inherits the default provider)
name: "fast-agent"
model:
  name: "gpt-4o-mini"
```

```yaml
# Agent using Anthropic
name: "smart-agent"
model:
  provider: "anthropic"
  name: "claude-sonnet-4-20250514"
```

```yaml
# Agent using local Ollama
name: "private-agent"
model:
  provider: "ollama"
  name: "llama3.2"
```

All three agents run in the same Sinaptic® DROID+ instance and are accessible via the same API endpoint.
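
From the client side, switching between these agents is just a change of identifier. The sketch below assumes DROID+ routes requests by the agent name passed in the SDK's model field; that convention, like the endpoint address, is an illustration rather than something this page confirms, so check your instance's routing behavior.

```python
# Sketch: addressing two agents through a single DROID+ endpoint.
# Assumption (hypothetical routing convention): the agent name is supplied
# in the "model" field so DROID+ can dispatch to the right provider/model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

for agent in ("fast-agent", "private-agent"):
    reply = client.chat.completions.create(
        model=agent,  # hypothetical: agent name used as the model identifier
        messages=[{"role": "user", "content": "Which model are you running?"}],
    )
    print(agent, "->", reply.choices[0].message.content)
```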

Choosing a Provider

| Use case | Recommended |
|----------|-------------|
| Getting started quickly | OpenAI (gpt-4o-mini) |
| Best reasoning quality | Anthropic (claude-sonnet-4-20250514) or OpenAI (gpt-4o) |
| Free-tier cloud API | Google Gemini (gemini-2.0-flash) |
| Full privacy (no cloud) | Ollama with llama3.2 |
| Desktop GUI for local models | LM Studio |
| Minimal-overhead local inference | llama.cpp |