# YAML Configuration Reference
Sinaptic® DROID+ uses two types of YAML configuration files:
- Main config (`droid.yaml`) — global server, LLM, security, and logging settings
- Agent configs (`configs/agents/*.yaml`) — per-agent model, personality, tools, and behaviour
All config values support ${ENV_VAR} substitution — Sinaptic® DROID+ resolves environment variables at startup.
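For example, a secret can be kept out of the config file entirely and injected from the environment (the variable name `OPENAI_API_KEY` below is illustrative):

```yaml
llm:
  api_key: ${OPENAI_API_KEY}  # resolved from the environment when the server starts
```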
## Main Config (`droid.yaml`)
### edition

| Value | Description |
|---|---|
| community | Free edition, max 2 agents, RegExp-only security (default) |
| pro | Unlimited agents, full SinapticAI cascade |
| enterprise | Multi-tenancy, compliance, HA |
### server

| Field | Type | Default | Description |
|---|---|---|---|
| openai_port | int | 8080 | Port for the OpenAI-compatible API |
| management_port | int | 8081 | Port for the Management API |
| cors_origins | []string | ["*"] | Allowed CORS origins |
| api_key | string | — | API key for authentication (supports ${ENV_VAR}) |
| demo_mode | bool | false | Enable demo mode |
### llm

Primary LLM provider configuration.

| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | openai | LLM provider name (openai, anthropic, gemini, grok, ollama, lm_studio, llama_cpp) |
| base_url | string | https://api.openai.com/v1 | API base URL |
| api_key | string | ${OPENAI_API_KEY} | API key (supports ${ENV_VAR}) |
| default_model | string | gpt-4o-mini | Default model name |
### anthropic

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | https://api.anthropic.com | Anthropic API base URL |
| api_key | string | ${ANTHROPIC_API_KEY} | API key |
### gemini

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | https://generativelanguage.googleapis.com/v1beta | Gemini API base URL |
| api_key | string | ${GEMINI_API_KEY} | API key |
### grok

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | https://api.x.ai/v1 | Grok (xAI) API base URL |
| api_key | string | ${GROK_API_KEY} | API key |
### ollama

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | http://localhost:11434/v1 | Ollama API base URL |
### lm_studio

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | http://localhost:1234/v1 | LM Studio API base URL |
### llama_cpp

| Field | Type | Default | Description |
|---|---|---|---|
| base_url | string | http://localhost:8080/v1 | llama.cpp server base URL |
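As a sketch, pointing the server at a local Ollama instance might look like this (the model name `llama3.1` is illustrative, and this assumes the provider-specific section supplies the base URL):

```yaml
llm:
  provider: ollama
  default_model: llama3.1

ollama:
  base_url: http://localhost:11434/v1
```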
### embedding

| Field | Type | Default | Description |
|---|---|---|---|
| model | string | text-embedding-3-small | Embedding model name |
| base_url | string | (falls back to llm.base_url) | Embedding API base URL |
| api_key | string | (falls back to llm.api_key) | Embedding API key |
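If embeddings come from a different endpoint than chat completions, both fallbacks can be overridden explicitly. The URL and the `EMBEDDING_API_KEY` variable below are placeholders:

```yaml
embedding:
  model: text-embedding-3-small
  base_url: https://embeddings.example.com/v1
  api_key: ${EMBEDDING_API_KEY}
```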
### sinaptic

SinapticAI security layer configuration.

| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable SinapticAI security scanning |
| mode | string | block | Action on threat detection: block or log |
| log_blocked | bool | true | Log blocked requests |
### sinaptic.pii

| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable PII detection |
| strategy | string | mask | PII handling strategy: mask or block |
### mcp

Model Context Protocol (MCP) tool server configuration.

```yaml
mcp:
  servers:
    my-server:
      command: "npx"
      args: ["-y", "@my-org/mcp-server"]
      env:
        API_KEY: "${MY_API_KEY}"
```

| Field | Type | Description |
|---|---|---|
| servers | map | Named MCP server definitions |

Per server:

| Field | Type | Description |
|---|---|---|
| command | string | Command to launch the MCP server |
| args | []string | Command arguments |
| env | map[string]string | Environment variables for the server process |
### logging

| Field | Type | Default | Description |
|---|---|---|---|
| level | string | info | Log level (debug, info, warn, error) |
| format | string | json | Log format (json or text) |
### audit

| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable audit logging |
| log_path | string | ./logs/audit | Path for audit log files |
| retention_days | int | 30 | Days to retain audit logs |
### branding

| Field | Type | Default | Description |
|---|---|---|---|
| show_badge | bool | (auto) | Show "Powered by Sinaptic® DROID+" badge. Auto: true for community, false for pro/enterprise |
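A minimal sketch, assuming `show_badge` accepts an explicit override of the edition-based default:

```yaml
branding:
  show_badge: true
```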
### Top-level paths

| Field | Type | Default | Description |
|---|---|---|---|
| data_dir | string | ./data | Directory for runtime data (embeddings, state) |
| agents_dir | string | ./configs/agents | Directory containing agent YAML configs |
## Agent Config (`configs/agents/*.yaml`)

### Top-level fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | yes | (filename) | Unique agent identifier |
| description | string | no | — | Human-readable description |
| model | object | yes | — | LLM model configuration |
| personality | string | yes | — | System prompt for the agent |
| agent_type | string | no | plain | Processing loop: plain or react |
| execution_mode | string | no | sequential | Tool execution: sequential or parallel |
| chain_of_thought | bool | no | false | Enable chain-of-thought reasoning |
| max_iterations | int | no | 10 | Max ReAct loop iterations |
| tool_timeout | string | no | 30s | Default timeout for tool calls |
### model

| Field | Type | Default | Description |
|---|---|---|---|
| name | string | — | Model name (e.g., gpt-4o, claude-sonnet-4-20250514) |
| base_url | string | (from main config) | Override LLM API base URL |
| api_key | string | (from main config) | Override API key |
| max_tokens | int | 1024 | Maximum response tokens |
| temperature | float | 0.7 | Sampling temperature (0-2) |
### memory.short_term

| Field | Type | Default | Description |
|---|---|---|---|
| max_messages | int | 50 | Sliding-window conversation history size |
### rag

| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable retrieval-augmented generation |
| top_k | int | 3 | Number of chunks to retrieve |
| min_score | float | 0.7 | Minimum similarity score threshold |
### tools

Array of tool configurations. A value of "rest_api" in the Required column means the field applies, and is required, only for tools of type rest_api.

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Tool identifier |
| description | string | no | Tool description (shown to the LLM) |
| type | string | yes | Tool type: mcp or rest_api |
| url | string | rest_api | Endpoint URL |
| method | string | rest_api | HTTP method (GET, POST, PUT, DELETE) |
| headers | map | rest_api | HTTP headers |
| timeout | string | rest_api | Request timeout (e.g., 10s) |
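Per the table, an MCP-backed tool needs only a name and type; the tool name below is illustrative:

```yaml
tools:
  - name: web-search
    type: mcp
    description: Search the web via a configured MCP server
```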
### pipeline

Multi-LLM pipeline steps (optional). When defined, the request passes through each step sequentially.

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Step identifier |
| model | object | yes | Model config for this step |
| personality | string | yes | System prompt for this step |
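A two-step sketch, assuming `pipeline` is a list of steps: a cheap drafting model followed by a stronger reviewer (step names, models, and prompts are illustrative):

```yaml
pipeline:
  - name: draft
    model:
      name: gpt-4o-mini
    personality: Draft a concise first answer to the user's question.
  - name: review
    model:
      name: gpt-4o
    personality: Review and improve the draft for accuracy and tone.
```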
### spawns

Agent-to-agent (A2A) call configuration.

| Field | Type | Default | Description |
|---|---|---|---|
| max_spawns | int | 5 | Max sub-agent calls per request (0 = unlimited) |
| max_tokens_per_spawn | int | 4096 | Token budget per spawned call |
| max_depth | int | 3 | Max recursion depth |
| allowed_agents | []string | [] (all) | Whitelist of agents that can be spawned |
| spawn_timeout_secs | int | 60 | Timeout per spawned call in seconds |
### rate_limit

| Field | Type | Default | Description |
|---|---|---|---|
| requests_per_minute | int | 0 (unlimited) | Max requests per minute |
### sinaptic

Per-agent SinapticAI override.

| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | (inherit global) | Override global SinapticAI enabled flag |
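For example, a trusted internal agent could opt out of the global scanning policy (a sketch; weigh the security implications before disabling):

```yaml
sinaptic:
  enabled: false
```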
## Full Example

### droid.yaml

```yaml
edition: community

server:
  openai_port: 8080
  management_port: 8081
  cors_origins: ["*"]

llm:
  provider: openai
  base_url: https://api.openai.com/v1
  api_key: ${OPENAI_API_KEY}
  default_model: gpt-4o-mini

anthropic:
  api_key: ${ANTHROPIC_API_KEY}

embedding:
  model: text-embedding-3-small

sinaptic:
  enabled: true
  mode: block
  pii:
    enabled: true
    strategy: mask
  log_blocked: true

logging:
  level: info
  format: json

audit:
  enabled: false
  log_path: ./logs/audit
  retention_days: 30

data_dir: ./data
agents_dir: ./configs/agents
```
### configs/agents/support-bot.yaml

```yaml
name: support-bot
description: Customer support agent with RAG and tools

model:
  name: gpt-4o
  max_tokens: 2048
  temperature: 0.5

personality: |
  You are a helpful customer support agent.
  Always be polite, concise, and accurate.

agent_type: react
execution_mode: parallel
chain_of_thought: true
max_iterations: 5
tool_timeout: 15s

memory:
  short_term:
    max_messages: 100

rag:
  enabled: true
  top_k: 5
  min_score: 0.6

tools:
  - name: lookup-order
    type: rest_api
    description: Look up order status by order ID
    url: https://api.example.com/orders/${order_id}
    method: GET
    headers:
      Authorization: "Bearer ${INTERNAL_API_KEY}"
    timeout: 10s

rate_limit:
  requests_per_minute: 60

spawns:
  max_spawns: 3
  max_depth: 2
  allowed_agents: ["billing-agent", "shipping-agent"]
```