# LM Studio
Run AI agents with local models using LM Studio. LM Studio provides a GUI for downloading, managing, and serving local models with an OpenAI-compatible API.
## Setup
- Download LM Studio from lmstudio.ai
- Open LM Studio, download a model from the built-in model browser
- Start the local server (Server tab → Start Server)
LM Studio serves on `localhost:1234` by default. Configure it in `droid.yaml`:
```yaml
lm_studio:
  base_url: "http://localhost:1234/v1"
```
No API key required.
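To confirm the server is reachable, you can send a test request through any OpenAI-compatible client. A minimal sketch using the OpenAI Python SDK; `local-model` is a placeholder for the identifier LM Studio shows for your loaded model:

```python
from openai import OpenAI

# Point the OpenAI SDK at LM Studio's local server.
# LM Studio ignores the API key, but the SDK requires a non-empty value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```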
## Agent Config
name: "local-agent"
model:
provider: "lm_studio"
name: "local-model" # Use the model identifier shown in LM Studio
max_tokens: 2048
temperature: 0.7
The `model.name` should match the model loaded in LM Studio. You can check the current model in LM Studio's Server tab.
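You can also query the server's models endpoint to find the exact identifier. A short sketch with the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Print the model identifiers the LM Studio server currently exposes.
for model in client.models.list():
    print(model.id)
```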
## Notes
- LM Studio provides a friendly GUI for model management — useful if you prefer a visual interface over Ollama's CLI.
- The OpenAI-compatible API means it works seamlessly with Sinaptic® DROID+.
- Tool use (function calling) support depends on the model you load; see the probe sketch at the end of this section.
- LM Studio supports GGUF model format. Most popular open models are available in GGUF.
- If running Sinaptic® DROID+ in Docker, use `host.docker.internal:1234` to connect to LM Studio on the host machine.
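For the Docker case, a sketch of the corresponding `droid.yaml` entry, assuming the same `lm_studio` key as above:

```yaml
lm_studio:
  base_url: "http://host.docker.internal:1234/v1"
```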
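To check whether your loaded model handles tool calls, one option is to send a request with a tool definition and inspect the response. A sketch assuming the OpenAI Python SDK; `get_weather` is a hypothetical tool defined only for this probe:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Hypothetical tool definition, used only to probe tool-call support.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use your loaded model's identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

msg = resp.choices[0].message
# Tool-capable models return structured tool_calls; others reply in plain text.
print(msg.tool_calls if msg.tool_calls else msg.content)
```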