# SinapticAI Security
SinapticAI is an AI intent firewall embedded in every Sinaptic® DROID+ instance. It inspects all incoming prompts and outgoing responses in real time, blocking prompt injection attacks, jailbreak attempts, and PII leaks before they reach the LLM.
No separate deployment, no external API calls — SinapticAI runs locally inside Sinaptic® DROID+ with sub-millisecond to low-millisecond latency.
## How It Works
Every message passes through a security pipeline before reaching the LLM:
```
User prompt → SinapticAI check → LLM → SinapticAI check → Response
```
The check pipeline has multiple layers, depending on your edition:
| Layer | What It Detects | Latency | Edition |
|---|---|---|---|
| RegExp | Common injection patterns, known attack signatures | <2ms | Community |
| NER | Named entity recognition for PII (names, emails, IDs) | ~10ms | Pro / Enterprise |
| SLM | Small language model for nuanced intent classification | ~30ms | Pro / Enterprise |
Community Edition includes the RegExp layer, which catches approximately 70% of known attack vectors. Pro and Enterprise editions add the full cascade (RegExp → NER → SLM) for ~95%+ coverage with <50ms total latency.
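The layered design can be sketched as a short-circuiting cascade: the cheapest check runs first, and a hit at any layer stops the pipeline. The layer functions and patterns below are illustrative assumptions, not SinapticAI's actual implementation.

```python
import re

# Hypothetical signatures for the RegExp layer (real signature sets are larger)
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),
    re.compile(r"reveal\s+your\s+system\s+prompt", re.I),
]

def regexp_layer(text: str) -> bool:
    """Cheap first pass: True if any known signature matches."""
    return any(p.search(text) for p in INJECTION_PATTERNS)

def ner_layer(text: str) -> bool:
    """Placeholder for the Pro/Enterprise NER pass (PII entities)."""
    return False  # a real deployment would run an NER model here

def slm_layer(text: str) -> bool:
    """Placeholder for the small-language-model intent classifier."""
    return False  # a real deployment would score intent here

def check(text: str) -> bool:
    """Run layers cheapest-first; short-circuit on the first hit."""
    return regexp_layer(text) or ner_layer(text) or slm_layer(text)
```

Ordering the layers this way keeps the common case fast: most benign traffic is cleared by the regex pass alone, and the slower model-based layers only run when needed.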
## Configuration

SinapticAI is configured at two levels: globally in `droid.yaml` and per-agent in each agent's YAML file.

### Global config (`droid.yaml`)
```yaml
sinaptic:
  enabled: true        # Master switch
  mode: "block"        # "block" = reject threats, "log" = log only (shadow mode)
  pii:
    enabled: true      # Enable PII detection and masking
    strategy: "mask"   # "mask" = replace PII with [REDACTED], "block" = reject entirely
  log_blocked: true    # Log blocked requests to audit trail
```
### Per-agent config (`configs/agents/my-agent.yaml`)

```yaml
sinaptic:
  enabled: true  # Can be disabled for specific agents
```
## Security Modes

**Block mode (default):** Malicious prompts are rejected with an error response. The blocked request is logged to the audit trail.
```json
{
  "error": {
    "message": "Request blocked by SinapticAI: prompt injection detected",
    "type": "security_violation",
    "code": "sinaptic_blocked"
  }
}
```
**Log mode (shadow):** All prompts pass through to the LLM, but detected threats are logged. Useful for testing and tuning before enabling block mode in production.
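The two modes can be illustrated with a minimal dispatcher. The `handle_detection` function and its return shape are hypothetical; only the error payload mirrors the documented block-mode response.

```python
def handle_detection(threat_detected: bool, mode: str) -> dict:
    """Decide what happens to a request under each mode.

    mode="block": reject flagged requests with a security violation.
    mode="log":   forward flagged requests but record the event (shadow mode).
    """
    if not threat_detected:
        return {"action": "allow"}
    if mode == "block":
        return {
            "action": "blocked",
            "error": {
                "message": "Request blocked by SinapticAI: prompt injection detected",
                "type": "security_violation",
                "code": "sinaptic_blocked",
            },
        }
    # Shadow mode: the request still reaches the LLM; only the log differs
    return {"action": "logged"}
```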
## PII Protection
When PII detection is enabled, SinapticAI scans both inputs and outputs for sensitive data:
- Email addresses
- Phone numbers
- Credit card numbers
- Social security numbers
- IP addresses
- Custom patterns (Pro/Enterprise)
With the `mask` strategy, detected PII is replaced with `[REDACTED]` before the message reaches the LLM. With the `block` strategy, the entire request is rejected.
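A minimal sketch of the `mask` strategy, assuming two simple regex patterns (the real detector set is edition-dependent and covers far more than these):

```python
import re

# Illustrative PII patterns only; not the product's actual detector list
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """strategy: "mask" replaces each PII match with [REDACTED]."""
    for pattern in PII_PATTERNS.values():
        text = pattern.sub("[REDACTED]", text)
    return text
```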
## What Gets Detected
The RegExp layer (Community) detects:
- Prompt injection — attempts to override the system prompt ("ignore previous instructions", role-play attacks, delimiter injection)
- Jailbreak patterns — known jailbreak templates (DAN, AIM, etc.)
- System prompt extraction — attempts to reveal the agent's system prompt
- PII patterns — emails, phone numbers, credit cards, SSNs via regex
The full cascade (Pro/Enterprise) additionally catches:
- Obfuscated attacks — encoded, transliterated, or multi-language injection attempts
- Context-aware PII — names, addresses, and other entities detected via NER
- Novel attacks — previously unseen attack patterns classified by the SLM
## Audit Trail

When `log_blocked` is enabled, every security event is recorded in the audit log:
```json
{
  "timestamp": "2026-04-10T14:32:01Z",
  "event": "sinaptic_blocked",
  "agent": "my-agent",
  "rule": "injection_override",
  "input_snippet": "ignore all previous instructions and...",
  "action": "blocked"
}
```
Audit logs are stored in the directory configured by `audit.log_path` (default: `./logs/audit/`) with daily rotation and configurable retention.
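One way such an event could be appended as a JSON line to a daily-rotated file. The `write_audit_event` helper and the filename scheme are assumptions for illustration; only the event fields mirror the documented record.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_audit_event(log_dir: str, agent: str, rule: str, snippet: str) -> dict:
    """Append one blocked-request event to today's audit file (JSON Lines).

    The daily filename echoes the documented daily rotation; the exact
    naming scheme is a hypothetical choice, not SinapticAI's.
    """
    now = datetime.now(timezone.utc)
    event = {
        "timestamp": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "event": "sinaptic_blocked",
        "agent": agent,
        "rule": rule,
        "input_snippet": snippet[:80],  # truncate to keep logs compact
        "action": "blocked",
    }
    path = Path(log_dir) / f"audit-{now:%Y-%m-%d}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event
```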
## Editions Comparison
| Feature | Community | Pro | Enterprise |
|---|---|---|---|
| RegExp layer | Yes | Yes | Yes |
| NER layer | — | Yes | Yes |
| SLM layer | — | Yes | Yes |
| Attack coverage | ~70% | ~95%+ | ~95%+ |
| Check latency | <2ms | <50ms | <50ms |
| PII detection | Regex patterns | Full NER | Full NER + custom |
| Custom rules | — | — | Yes |
| Custom SLM training | — | — | Yes |
## Best Practices

- Start in log mode — run with `mode: "log"` first to see what would be blocked without affecting users.
- Always enable PII masking — even if you trust your users, the LLM's responses might contain PII from training data.
- Monitor the audit trail — review blocked requests periodically to tune your security posture.
- Enable per-agent — some internal agents (e.g., data pipelines) may not need security checks. Disable SinapticAI on a per-agent basis when appropriate.
## Further Reading
- Editions — Full feature comparison across Community, Pro, and Enterprise
- Configuration Reference — All SinapticAI configuration options
- FAQ — Common questions about security and data handling