# Quick Start
Get your first AI agent running in 5 minutes.
## Prerequisites
- An API key from any supported LLM provider (OpenAI, Anthropic, Google, xAI, or a local model)
- macOS, Linux, or Windows (WSL)
## Install

Script (recommended):

```sh
curl -fsSL https://get.droid.plus | sh
```

Docker:

```sh
docker pull sinapticai/droid:latest
```

Binary download:

Download the binary for your platform from GitHub Releases.
| Platform | Architecture | Download |
|---|---|---|
| Linux | amd64 | `droid-linux-amd64` |
| Linux | arm64 | `droid-linux-arm64` |
| macOS | Apple Silicon | `droid-darwin-arm64` |
| macOS | Intel | `droid-darwin-amd64` |
| Windows | amd64 | `droid-windows-amd64.exe` |
## Create Your First Agent

### Option A: AI Builder UI (no code)

Start Sinaptic® DROID+ and open the visual builder:

```sh
droid up
# Open http://localhost:8081/builder/ in your browser
```
The AI Builder UI lets you configure your agent visually — pick a model, write a system prompt, add tools, and test it live. When you're done, it generates the YAML config for you.
### Option B: YAML Config

Initialize a new project:

```sh
droid init my-agent
cd my-agent
```

This creates a project directory with:

```
my-agent/
├── droid.yaml           # Runtime config (ports, LLM keys, security)
├── configs/agents/
│   └── my-agent.yaml    # Your agent definition
├── .env.example         # API key template
└── data/                # Runtime data (RAG, logs)
```
Copy `.env.example` to `.env` and add your API key:

```sh
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY (or any other provider key)
```
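The resulting `.env` is a plain KEY=VALUE file read at startup. A minimal sketch (the key value is a placeholder, and the variable name assumes OpenAI; substitute whichever provider key you configured):

```sh
# .env
OPENAI_API_KEY=sk-your-key-here
```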
Start the runtime:

```sh
droid up
```
You'll see:

```
 ____ ____ ___ ___ ____
| _ \| _ \ / _ \|_ _| _ \ _
| | | | |_) | | | || || | | |(_)_
| |_| | _ <| |_| || || |_| | _|
|____/|_| \_\\___/|___|____/ (_)

Sinaptic.AI DROID+ v0.5.0
Agent runtime engine starting...

✓ Loaded 1 agent: my-agent
✓ SinapticAI security: enabled (community mode)
✓ OpenAI-compatible API: http://localhost:8080/v1
✓ Management API: http://localhost:8081
✓ Agent Builder UI: http://localhost:8081/builder/
```
## Talk to Your Agent

Sinaptic® DROID+ exposes an OpenAI-compatible API, so you can use any OpenAI SDK or just `curl`:
curl:

```sh
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-agent",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Python (OpenAI SDK):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="any")

response = client.chat.completions.create(
    model="my-agent",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
Node.js (OpenAI SDK):

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:8080/v1',
  apiKey: 'any',
});

const response = await client.chat.completions.create({
  model: 'my-agent',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
```
The `model` field maps to the agent name in your YAML config. The `api_key` can be any string for local development (unless you've configured authentication in `droid.yaml`).
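Because the request surface is plain OpenAI chat-completions JSON, targeting a different agent is just a different `model` value. A small sketch of that mapping, requiring no running server (the agent name comes from this guide; the helper is illustrative, not part of any SDK):

```python
import json

def chat_payload(agent: str, text: str) -> dict:
    """Build a chat-completions request body.

    `model` names the agent (the `name` field in configs/agents/<agent>.yaml),
    not an upstream LLM id; the runtime resolves the LLM from the agent config.
    """
    return {
        "model": agent,
        "messages": [{"role": "user", "content": text}],
    }

# Serialize for an HTTP POST to /v1/chat/completions
body = json.dumps(chat_payload("my-agent", "Hello!"))
print(body)
```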
## Agent Config Explained

Here's a minimal agent config (`configs/agents/my-agent.yaml`):
```yaml
name: "my-agent"
description: "Customer support assistant"

model:
  name: "gpt-4o-mini"    # Any model from your configured provider
  max_tokens: 1024
  temperature: 0.7

personality: |
  You are a helpful customer support assistant.
  Be concise and accurate. Always be polite.

tools:
  - name: "current_time"
    type: "builtin"

sinaptic:
  enabled: true          # Enable security checks
```
Key fields:

- `name` — unique agent identifier, used as the `model` in API calls
- `model.name` — the LLM model to use (e.g., `gpt-4o-mini`, `claude-sonnet-4-20250514`, `gemini-2.0-flash`)
- `personality` — system prompt that defines the agent's behavior
- `tools` — built-in tools, REST API endpoints, or MCP servers the agent can use
- `sinaptic.enabled` — toggle SinapticAI security (prompt injection detection, PII masking)
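As a sketch of how those fields fit together, here is a tiny validator over the parsed config. Which fields the runtime actually requires is an assumption based on the example above, not the real DROID+ schema:

```python
def validate_agent(cfg: dict) -> list[str]:
    """Illustrative check of an agent config; not part of the DROID+ CLI."""
    errors = []
    # Assumed-required top-level fields, mirroring the example config.
    for field in ("name", "model", "personality"):
        if field not in cfg:
            errors.append(f"missing field: {field}")
    # model.name selects the LLM and has no obvious default.
    model = cfg.get("model", {})
    if isinstance(model, dict) and "name" not in model:
        errors.append("model.name is required")
    return errors

# The parsed form of the YAML example above.
cfg = {
    "name": "my-agent",
    "model": {"name": "gpt-4o-mini", "max_tokens": 1024, "temperature": 0.7},
    "personality": "You are a helpful customer support assistant.",
    "tools": [{"name": "current_time", "type": "builtin"}],
    "sinaptic": {"enabled": True},
}
print(validate_agent(cfg))  # → []
```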
## CLI Commands

```sh
droid up          # Start the runtime (loads all agents)
droid down        # Gracefully stop a running instance
droid init NAME   # Create a new agent project
droid agents      # List loaded agents and their status
droid health      # Check server health
droid version     # Print version, commit, and build date
```
## Next Steps
- LLM Providers — Configure OpenAI, Anthropic, Gemini, Grok, or local models
- MCP Tools — Add external tools via Model Context Protocol
- SinapticAI Security — Understand the built-in security layer
- Docker Deployment — Run Sinaptic® DROID+ in production with Docker
- Editions — Compare Community, Pro, and Enterprise features