# LLM Providers
Sinaptic® DROID+ supports 7 LLM providers out of the box — 4 cloud and 3 local. All providers are available in every edition (Community, Pro, Enterprise).
## Supported Providers
| Provider | Type | API Key Required | Default Base URL |
|---|---|---|---|
| OpenAI | Cloud | Yes | https://api.openai.com/v1 |
| Anthropic | Cloud | Yes | https://api.anthropic.com |
| Google Gemini | Cloud | Yes | https://generativelanguage.googleapis.com/v1beta |
| Grok (xAI) | Cloud | Yes | https://api.x.ai/v1 |
| Ollama | Local | No | http://localhost:11434/v1 |
| LM Studio | Local | No | http://localhost:1234/v1 |
| llama.cpp | Local | No | http://localhost:8080/v1 |
## How It Works
Sinaptic® DROID+ exposes an OpenAI-compatible API to your clients. Internally, it translates requests to the appropriate provider format. This means:
- Your client code uses the standard OpenAI SDK regardless of which backend model is running
- You can switch between providers by changing a YAML config — no code changes
- Different agents can use different providers simultaneously
```
Client (OpenAI SDK) → DROID+ API → [OpenAI | Anthropic | Gemini | Grok | Ollama | ...]
```
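To make the translation step concrete, here is a minimal sketch of how an OpenAI-style chat request maps onto Anthropic's Messages format (system prompt lifted to the top-level `system` field, `max_tokens` made explicit). The field names follow the public Anthropic API, but the function itself is illustrative — it is not DROID+'s internal code.

```python
def openai_to_anthropic(payload: dict) -> dict:
    """Translate an OpenAI-style chat completion request into the
    shape expected by Anthropic's Messages API (illustrative sketch)."""
    # Anthropic takes the system prompt as a top-level field,
    # not as a message with role "system".
    system_parts = [m["content"] for m in payload["messages"]
                    if m["role"] == "system"]
    chat = [m for m in payload["messages"] if m["role"] != "system"]
    out = {
        "model": payload["model"],
        # Anthropic requires max_tokens; fall back to a default
        # if the client omitted it (the fallback value is arbitrary).
        "max_tokens": payload.get("max_tokens", 1024),
        "messages": chat,
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)
    return out
```

Your client never sees this: it sends and receives the OpenAI shape, and DROID+ performs the per-provider conversion behind the shared endpoint.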
## Configuration
Set the primary provider in `droid.yaml`:

```yaml
llm:
  provider: "openai"              # Default provider
  api_key: "${OPENAI_API_KEY}"
  default_model: "gpt-4o-mini"
```
Configure additional providers alongside it:

```yaml
anthropic:
  api_key: "${ANTHROPIC_API_KEY}"
gemini:
  api_key: "${GEMINI_API_KEY}"
ollama:
  base_url: "http://localhost:11434/v1"
```
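The `${OPENAI_API_KEY}` placeholders suggest environment-variable substitution at load time. A sketch of how such expansion typically works — the helper below is hypothetical, not DROID+'s actual config loader:

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders in a config value with values from
    the environment. Unset variables expand to an empty string
    (an illustrative choice; real loaders may error instead)."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""),
                  value)
```

Keeping keys in the environment rather than in `droid.yaml` means the config file can be committed to version control without leaking credentials.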
## Per-Agent Model Selection
Each agent can use any configured provider and model:

```yaml
# Agent using OpenAI (no provider field, so the default applies)
name: "fast-agent"
model:
  name: "gpt-4o-mini"
```

```yaml
# Agent using Anthropic
name: "smart-agent"
model:
  provider: "anthropic"
  name: "claude-sonnet-4-20250514"
```

```yaml
# Agent using local Ollama
name: "private-agent"
model:
  provider: "ollama"
  name: "llama3.2"
```
All three agents run in the same Sinaptic® DROID+ instance and are accessible via the same API endpoint.
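The resolution rule implied above — use the agent's `provider` if present, otherwise fall back to the instance-wide default — can be sketched as a small function. The names here are assumptions chosen to mirror the YAML shape, not DROID+'s actual internals:

```python
def resolve_model(agent_cfg: dict, default_provider: str) -> tuple[str, str]:
    """Return (provider, model) for an agent config, falling back to
    the instance-wide default provider when the agent omits one."""
    model = agent_cfg["model"]
    return (model.get("provider", default_provider), model["name"])
```

With the three agents above and `"openai"` as the default, `fast-agent` resolves to OpenAI while `smart-agent` and `private-agent` keep their explicit providers.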
## Choosing a Provider
| Use case | Recommended |
|---|---|
| Getting started quickly | OpenAI (gpt-4o-mini) |
| Best reasoning quality | Anthropic (claude-sonnet-4-20250514) or OpenAI (gpt-4o) |
| Free cloud API | Google Gemini (gemini-2.0-flash) |
| Full privacy (no cloud) | Ollama with llama3.2 |
| Desktop GUI for local models | LM Studio |
| Minimal overhead local inference | llama.cpp |