YAML Configuration Reference

Sinaptic® DROID+ uses two types of YAML configuration files:

  1. Main config (droid.yaml) — global server, LLM, security, and logging settings
  2. Agent configs (configs/agents/*.yaml) — per-agent model, personality, tools, and behaviour

All config values support ${ENV_VAR} substitution — Sinaptic® DROID+ resolves environment variables at startup.
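For example, an API key can be kept out of the file entirely and supplied through the environment (the variable name DROID_API_KEY below is only an illustration, not a required name):

```yaml
# droid.yaml — the literal text ${DROID_API_KEY} is replaced at startup
# with the value of the DROID_API_KEY environment variable.
server:
  api_key: ${DROID_API_KEY}
```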


Main Config (droid.yaml)

edition

| Value | Description |
| --- | --- |
| community | Free edition, max 2 agents, RegExp-only security (default) |
| pro | Unlimited agents, full SinapticAI cascade |
| enterprise | Multi-tenancy, compliance, HA |

server

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| openai_port | int | 8080 | Port for the OpenAI-compatible API |
| management_port | int | 8081 | Port for the Management API |
| cors_origins | []string | ["*"] | Allowed CORS origins |
| api_key | string | (none) | API key for authentication (supports ${ENV_VAR}) |
| demo_mode | bool | false | Enable demo mode |

llm

Primary LLM provider configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | openai | LLM provider name (openai, anthropic, gemini, grok, ollama, lm_studio, llama_cpp) |
| base_url | string | https://api.openai.com/v1 | API base URL |
| api_key | string | ${OPENAI_API_KEY} | API key (supports ${ENV_VAR}) |
| default_model | string | gpt-4o-mini | Default model name |

anthropic

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | https://api.anthropic.com | Anthropic API base URL |
| api_key | string | ${ANTHROPIC_API_KEY} | API key |

gemini

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | https://generativelanguage.googleapis.com/v1beta | Gemini API base URL |
| api_key | string | ${GEMINI_API_KEY} | API key |

grok

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | https://api.x.ai/v1 | Grok (xAI) API base URL |
| api_key | string | ${GROK_API_KEY} | API key |

ollama

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | http://localhost:11434/v1 | Ollama API base URL |

lm_studio

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | http://localhost:1234/v1 | LM Studio API base URL |

llama_cpp

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | string | http://localhost:8080/v1 | llama.cpp server base URL |
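To run against a local model instead of a hosted API, the primary llm block can point at one of the local providers, for example Ollama. A minimal sketch (the model name llama3 is illustrative; use any model you have pulled):

```yaml
llm:
  provider: ollama
  base_url: http://localhost:11434/v1   # default Ollama endpoint
  default_model: llama3                 # assumed: a model available in Ollama
```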

embedding

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | text-embedding-3-small | Embedding model name |
| base_url | string | (falls back to llm.base_url) | Embedding API base URL |
| api_key | string | (falls back to llm.api_key) | Embedding API key |
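Because base_url and api_key fall back to the llm block, the embedding provider only needs its own settings when it differs from the chat provider. A sketch (provider choices are illustrative) that keeps chat on a local Ollama instance while sending embeddings to OpenAI:

```yaml
llm:
  provider: ollama
  base_url: http://localhost:11434/v1

embedding:
  model: text-embedding-3-small
  base_url: https://api.openai.com/v1   # override the llm.base_url fallback
  api_key: ${OPENAI_API_KEY}            # override the llm.api_key fallback
```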

sinaptic

SinapticAI security layer configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | true | Enable SinapticAI security scanning |
| mode | string | block | Action on threat detection: block or log |
| log_blocked | bool | true | Log blocked requests |

sinaptic.pii

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | true | Enable PII detection |
| strategy | string | mask | PII handling strategy: mask or block |
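A common variant while tuning the security layer is to observe rather than enforce: mode: log records detections without rejecting requests, while PII masking stays active. A sketch of that combination (not a recommendation):

```yaml
sinaptic:
  enabled: true
  mode: log          # record detected threats instead of blocking requests
  log_blocked: true
  pii:
    enabled: true
    strategy: mask   # redact detected PII rather than blocking
```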

mcp

Model Context Protocol (MCP) tool server configuration.

mcp:
  servers:
    my-server:
      command: "npx"
      args: ["-y", "@my-org/mcp-server"]
      env:
        API_KEY: "${MY_API_KEY}"

| Field | Type | Description |
| --- | --- | --- |
| servers | map | Named MCP server definitions |

Per server:

| Field | Type | Description |
| --- | --- | --- |
| command | string | Command to launch the MCP server |
| args | []string | Command arguments |
| env | map[string]string | Environment variables for the server process |

logging

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| level | string | info | Log level (debug, info, warn, error) |
| format | string | json | Log format (json or text) |

audit

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable audit logging |
| log_path | string | ./logs/audit | Path for audit log files |
| retention_days | int | 30 | Days to retain audit logs |

branding

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| show_badge | bool | (auto) | Show the "Powered by Sinaptic® DROID+" badge. Auto: true for community, false for pro/enterprise |
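Since the default is edition-dependent, the flag only needs to be written when overriding it, e.g. keeping the badge visible on a pro or enterprise deployment:

```yaml
branding:
  show_badge: true   # override the edition-based default
```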

Top-level paths

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| data_dir | string | ./data | Directory for runtime data (embeddings, state) |
| agents_dir | string | ./configs/agents | Directory containing agent YAML configs |

Agent Config (configs/agents/*.yaml)

Top-level fields

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| name | string | yes | (filename) | Unique agent identifier |
| description | string | no | (none) | Human-readable description |
| model | object | yes | (none) | LLM model configuration |
| personality | string | yes | (none) | System prompt for the agent |
| agent_type | string | no | plain | Processing loop: plain or react |
| execution_mode | string | no | sequential | Tool execution: sequential or parallel |
| chain_of_thought | bool | no | false | Enable chain-of-thought reasoning |
| max_iterations | int | no | 10 | Max ReAct loop iterations |
| tool_timeout | string | no | 30s | Default timeout for tool calls |

model

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | (none) | Model name (e.g., gpt-4o, claude-sonnet-4-20250514) |
| base_url | string | (from main config) | Override LLM API base URL |
| api_key | string | (from main config) | Override API key |
| max_tokens | int | 1024 | Maximum response tokens |
| temperature | float | 0.7 | Sampling temperature (0-2) |

memory.short_term

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_messages | int | 50 | Sliding-window conversation history size |

rag

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable retrieval-augmented generation |
| top_k | int | 3 | Number of chunks to retrieve |
| min_score | float | 0.7 | Minimum similarity score threshold |

tools

Array of tool configurations.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | yes | Tool identifier |
| description | string | no | Tool description (shown to the LLM) |
| type | string | yes | Tool type: mcp or rest_api |
| url | string | rest_api only | Endpoint URL |
| method | string | rest_api only | HTTP method (GET, POST, PUT, DELETE) |
| headers | map | rest_api only | HTTP headers |
| timeout | string | rest_api only | Request timeout (e.g., 10s) |
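The rest_api-only fields above do not apply to MCP tools, so an mcp entry is much shorter. A sketch using only the documented generic fields, assuming the name matches a tool exported by a server configured under mcp.servers in the main config (search-docs is a hypothetical name):

```yaml
tools:
  - name: search-docs          # assumed to match a tool exposed by an MCP server
    type: mcp
    description: Search internal documentation
```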

pipeline

Multi-LLM pipeline steps (optional). When defined, the request passes through each step sequentially.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | yes | Step identifier |
| model | object | yes | Model config for this step |
| personality | string | yes | System prompt for this step |
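A two-step sketch: a cheap drafting step followed by a stronger reviewing step. The model names and prompts are illustrative; the steps run in the order listed, as described above:

```yaml
pipeline:
  - name: draft
    model:
      name: gpt-4o-mini
    personality: |
      Draft a direct answer to the user's question.
  - name: review
    model:
      name: gpt-4o
    personality: |
      Review the draft for accuracy and tone, then produce the final reply.
```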

spawns

Agent-to-agent (A2A) call configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_spawns | int | 5 | Max sub-agent calls per request (0 = unlimited) |
| max_tokens_per_spawn | int | 4096 | Token budget per spawned call |
| max_depth | int | 3 | Max recursion depth |
| allowed_agents | []string | [] (all) | Whitelist of agents that can be spawned |
| spawn_timeout_secs | int | 60 | Timeout per spawned call in seconds |

rate_limit

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| requests_per_minute | int | 0 (unlimited) | Max requests per minute |

sinaptic

Per-agent SinapticAI override.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | (inherit global) | Override the global SinapticAI enabled flag |

Full Example

droid.yaml

edition: community

server:
  openai_port: 8080
  management_port: 8081
  cors_origins: ["*"]

llm:
  provider: openai
  base_url: https://api.openai.com/v1
  api_key: ${OPENAI_API_KEY}
  default_model: gpt-4o-mini

anthropic:
  api_key: ${ANTHROPIC_API_KEY}

embedding:
  model: text-embedding-3-small

sinaptic:
  enabled: true
  mode: block
  pii:
    enabled: true
    strategy: mask
  log_blocked: true

logging:
  level: info
  format: json

audit:
  enabled: false
  log_path: ./logs/audit
  retention_days: 30

data_dir: ./data
agents_dir: ./configs/agents

configs/agents/support-bot.yaml

name: support-bot
description: Customer support agent with RAG and tools

model:
  name: gpt-4o
  max_tokens: 2048
  temperature: 0.5

personality: |
  You are a helpful customer support agent.
  Always be polite, concise, and accurate.

agent_type: react
execution_mode: parallel
chain_of_thought: true
max_iterations: 5
tool_timeout: 15s

memory:
  short_term:
    max_messages: 100

rag:
  enabled: true
  top_k: 5
  min_score: 0.6

tools:
  - name: lookup-order
    type: rest_api
    description: Look up order status by order ID
    url: https://api.example.com/orders/${order_id}
    method: GET
    headers:
      Authorization: "Bearer ${INTERNAL_API_KEY}"
    timeout: 10s

rate_limit:
  requests_per_minute: 60

spawns:
  max_spawns: 3
  max_depth: 2
  allowed_agents: ["billing-agent", "shipping-agent"]