turtles connect llm
Configure LLM connections for Turtle DNA compilation.
OpenTurtles uses LLMs to compile your one-sentence descriptions into executable code (Turtle DNA). You can connect multiple LLM servers and switch between them.
turtles connect llm add
Add or update an LLM server (interactive wizard)
Usage: turtles connect llm add <NAME>
Arguments:
<NAME> Server name (e.g. "openai", "deepseek", "local")
Options:
-h, --help    Print help
Example:
$ turtles connect llm add openai
🔌 Add LLM Server: openai
Select provider:
1) OpenAI
2) Anthropic
3) Ollama (local)
4) Custom (OpenAI-compatible)
Choice [1-4]: 1
Endpoint: https://api.openai.com/v1
API Key: sk-••••••••••••
Model [gpt-5.4]: gpt-5.4
✓ LLM server 'openai' added!
Provider: openai
Model: gpt-5.4
Default: yes ✓
ℹ Test with: turtles connect llm test openai
Supported Providers:
OpenAI      Endpoint: https://api.openai.com/v1   Default model: gpt-5.4
Anthropic   Endpoint: https://api.anthropic.com   Default model: claude-opus-4.6-20260401
Ollama      Endpoint: http://localhost:11434/v1   Default model: llama4 (no API key needed)
All providers use the OpenAI-compatible /v1/chat/completions API format.
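Because every provider exposes the same /v1/chat/completions shape, the request a client must send looks identical regardless of which server is configured. The helper below is a minimal illustrative sketch (it is not part of the turtles CLI); it assembles the URL, headers, and JSON body that any of the servers above would accept:

```python
import json

def build_request(endpoint: str, api_key: str, model: str, prompt: str):
    """Build the pieces of an OpenAI-compatible chat completion request.

    Hypothetical helper for illustration only -- it shows the wire
    format shared by all supported providers, not the CLI's own code.
    Returns (url, headers, body_json).
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Content-Type": "application/json",
        # Ollama ignores the Authorization header, so sending it is harmless.
        "Authorization": f"Bearer {api_key}",
    }
    return f"{endpoint}/chat/completions", headers, json.dumps(body)
```

POSTing the returned body to the returned URL yields a JSON response whose reply text lives at choices[0].message.content.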
turtles connect llm list
List configured LLM servers and show which one is the default.
$ turtles connect llm list
LLM Servers (2 configured)
● openai ★ default
Provider: openai Model: gpt-5.4 Endpoint: https://api.openai.com/v1
● deepseek
Provider: custom Model: deepseek-chat Endpoint: https://api.deepseek.com/v1
turtles connect llm test
Send a test prompt to verify that a server is reachable.
$ turtles connect llm test openai
✓ LLM server 'openai' connected!
Response: Hello! How can I help you today?
turtles connect llm default
Set which LLM server is used for Turtle creation and updates.
$ turtles connect llm default deepseek
✓ Default LLM set to 'deepseek'.
The first server you add is automatically set as default.
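The "first server added becomes the default" behavior can be sketched in a few lines. This is a hypothetical reimplementation of the documented rule (the function name and signature are assumptions, not the CLI's actual code), using the file layout described under Configuration below:

```python
import json
from pathlib import Path

def add_server(base: Path, name: str, config: dict) -> bool:
    """Add or update a server entry; mark it default if it is the first.

    Illustrative sketch of the documented behavior. Returns True if
    the newly added server became the default.
    """
    base.mkdir(parents=True, exist_ok=True)
    servers_path = base / "servers.json"
    servers = json.loads(servers_path.read_text()) if servers_path.exists() else {}
    first = not servers  # no servers configured yet -> this one is default
    servers[name] = config
    servers_path.write_text(json.dumps(servers, indent=2))
    if first:
        (base / "default").write_text(name)
    return first
```

Adding a second server leaves the default file untouched; only an explicit `turtles connect llm default <NAME>` changes it.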
turtles connect llm remove
Remove a configured LLM server.
$ turtles connect llm remove openai
✓ Removed LLM server 'openai'.
Using Ollama (Local LLM)
Run Turtles with zero API cost using a local model:
$ turtles connect llm add local
🔌 Add LLM Server: local
Select provider:
1) OpenAI
2) Anthropic
3) Ollama (local)
4) Custom (OpenAI-compatible)
Choice [1-4]: 3
Endpoint URL: http://localhost:11434/v1
Model [llama4]: llama4
✓ LLM server 'local' added!
Provider: ollama
Model: llama4
Default: no
ℹ Test with: turtles connect llm test local
Set it as default to compile all Turtles locally:
$ turtles connect llm default local
✓ Default LLM set to 'local'.
Configuration
LLM configs are stored at ~/.turtles/llm/servers.json:
{
"openai": {
"provider": "openai",
"endpoint": "https://api.openai.com/v1",
"api_key": "sk-...",
"model": "gpt-5.4"
}
}
The default server name is stored at ~/.turtles/llm/default.
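Given that layout, resolving the active server is a matter of reading two files. A minimal sketch (file locations come from the docs above; the function itself is hypothetical and takes the config directory as a parameter so it can be pointed anywhere):

```python
import json
from pathlib import Path

def resolve_llm_server(base: Path = Path.home() / ".turtles" / "llm"):
    """Return (name, config) for the current default LLM server.

    Illustrative sketch based on the documented file layout:
    servers.json holds all server configs, and the plain-text
    'default' file holds the name of the active one.
    """
    servers = json.loads((base / "servers.json").read_text())
    name = (base / "default").read_text().strip()
    return name, servers[name]
```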