Model Providers
OpenClaw supports multiple LLM backends. The default pattern is simple: complete provider authentication, then set the default model using a provider/model key. Get one reliable primary model working before you add more.
Common Provider Paths
OpenAI
Use either API keys or Codex / subscription auth. A common fit for coding, general reasoning and mixed-model workflows.
Anthropic
The common paths are API keys or setup-token. Server environments usually prefer reproducible keys or tokens.
Venice AI
A privacy-first route, often starting with venice/llama-3.3-70b and switching upward only when stronger reasoning is needed.
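As a sketch of that switch, this fragment assumes the same agents.defaults.model key used elsewhere in this guide; only the model id changes:

```
{
  agents: {
    defaults: {
      model: { primary: "venice/llama-3.3-70b" }
    }
  }
}
```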
Moonshot / GLM / MiniMax
Useful when you want to optimize for region, price or model preference with a more granular routing strategy.
OpenRouter / AI Gateway
These are useful when you want a unified billing layer or a central gateway across many model backends.
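A minimal sketch of routing through a gateway, assuming OpenClaw prefixes the gateway name onto the model id; the exact id below is illustrative, not confirmed:

```
{
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-opus-4-5" }
    }
  }
}
```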
Ollama / self-hosted
A stronger fit for privacy-heavy environments, as long as your local model quality and hardware are sufficient.
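A privacy-heavy setup might point the default at a local model. The model name below is an assumption for illustration; substitute whatever you have pulled in Ollama:

```
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" }
    }
  }
}
```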
Shortest Setup Path
Authenticate through onboarding:

```
openclaw onboard
```

Check model status:

```
openclaw models status
```

Set the default model (provider/model):

```
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" }
    }
  }
}
```

Selection Advice
Get one reliable primary model working first, then bring up the Gateway, channels and tools before adding more providers.
Multiple providers are usually about price, region, capability and fallback strategy, not about collecting as many as possible.
On servers, prefer API keys or setup-token flows that can be reproduced and rotated instead of auth state tied to one local machine.
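As a hedged sketch of the server pattern: keep the key in the environment (injected from a secret manager at deploy time) so it can be rotated without touching auth state on disk. The variable name follows the common Anthropic SDK convention; whether OpenClaw reads it directly is an assumption.

```shell
# Placeholder value; inject the real key from your secret store at
# deploy time, and rotate it there instead of re-running interactive
# auth on the box.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
```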
Provider Guides
OpenAI: API key setup, Codex OAuth and default model naming conventions.
Anthropic: API keys, setup-token and authentication advice for remote servers.
Venice AI: privacy-first setup, model naming and switching strategy.
Moonshot / GLM / MiniMax: a typical regional-provider path and a useful reference for GLM or MiniMax decisions.