Model Providers

OpenClaw model providers, local models, and hybrid fallback setups

Most OpenClaw setup questions are really model questions: which provider is stable, which auth flow is safe, when local models are good enough, and how to keep a fallback ready when quotas or policies change.

What people repeatedly need

A practical way to compare Codex OAuth, Anthropic API keys, and Ollama without reading five docs tabs.

Local-only and hosted-plus-local fallback recipes that do not break tool calling.

A clear cost and quota view before GPT-5 Codex or other premium models start burning credits.

What this hub covers

Provider choice by use case

Use hosted frontier models for reliability, local models for privacy and cost, and hybrid fallbacks when you need both. The right answer depends on response quality, tool support, and whether you are running a personal or team bot.
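The decision above can be sketched as a tiny helper. The names here ("hosted", "local", "hybrid") are illustrative labels for the three setup styles, not OpenClaw configuration values:

```python
# Illustrative sketch: map the two axes from the text onto a setup style.
# These labels are assumptions for this example, not OpenClaw's API.

def choose_setup(needs_privacy: bool, needs_reliability: bool) -> str:
    if needs_privacy and needs_reliability:
        return "hybrid"   # hosted primary plus local fallback
    if needs_privacy:
        return "local"    # e.g. Ollama on your own hardware
    return "hosted"       # frontier model via a provider API

print(choose_setup(needs_privacy=True, needs_reliability=True))  # prints "hybrid"
```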

OpenAI Codex · Anthropic · Ollama · OpenRouter

OAuth, API keys, and policy risk

Subscription auth looks convenient until providers change policy. These pages help you choose the safer auth path, keep work and personal profiles separate, and avoid getting locked out later.
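One way to keep work and personal profiles separate is to resolve credentials per profile, so an API-key profile never silently falls back to a personal OAuth session. This is a sketch under assumptions: the profile names, dictionary keys, and environment variable are made up for illustration and are not OpenClaw's real configuration surface:

```python
import os

# Hypothetical profile table: names, keys, and the env var are assumptions.
PROFILES = {
    "work":     {"auth": "api_key", "env_var": "WORK_ANTHROPIC_API_KEY"},
    "personal": {"auth": "oauth",   "env_var": None},  # browser-based login
}

def resolve_credentials(profile: str) -> dict:
    """Return auth material for one profile, failing loudly if a key is missing."""
    cfg = PROFILES[profile]
    if cfg["auth"] == "api_key":
        key = os.environ.get(cfg["env_var"])
        if not key:
            raise RuntimeError(f"set {cfg['env_var']} before starting the bot")
        return {"auth": "api_key", "key": key}
    return {"auth": "oauth"}  # defer to the provider's OAuth flow
```

Failing loudly when the key is absent is the point: a misconfigured work profile should error, not quietly use personal credentials.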

OAuth · API Keys · Profiles · Policy Risk

Local and hybrid setups

OpenClaw users frequently want a local-first setup or a hosted-primary plus local-fallback configuration. That requires more than a model name. It needs the right base URL, native tool support, and predictable fallback order.
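A hosted-primary, local-fallback order can be sketched as an ordered chain that is probed until an endpoint answers. The model names are placeholders; the Anthropic URL and the `localhost:11434` default for Ollama are common values, but the chain structure itself is an assumption, not OpenClaw config:

```python
import urllib.error
import urllib.request

# Ordered fallback chain: hosted primary first, local fallback second.
# Names and the "tools" flag are illustrative; localhost:11434 is Ollama's
# usual default port.
MODEL_CHAIN = [
    {"name": "claude-primary", "base_url": "https://api.anthropic.com", "tools": True},
    {"name": "llama-fallback", "base_url": "http://localhost:11434",    "tools": True},
]

def first_reachable(chain, timeout=2):
    """Return the name of the first tool-capable endpoint that answers."""
    for model in chain:
        if not model["tools"]:
            continue  # skip models that would break tool calling
        try:
            urllib.request.urlopen(model["base_url"], timeout=timeout)
            return model["name"]
        except urllib.error.HTTPError:
            return model["name"]  # server answered, just not with 200
        except OSError:
            continue  # unreachable: try the next model in order
    return None
```

Filtering on tool support before probing reachability is what keeps the fallback from silently degrading an agent that depends on tool calling.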

Ollama · LM Studio · Fallbacks · Tool Calling

Cost control and usage visibility

People want to know which commands expose usage, where quota data shows up, and when to move noisy automations onto cheaper models. Good provider guidance should include those answers up front.
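Before wiring a premium model into an always-on automation, a back-of-envelope cost estimate is worth a minute. The prices below are placeholder figures, not real provider pricing (which changes often), but the arithmetic shows why noisy automations belong on cheaper models:

```python
# USD per million tokens (input, output). Placeholder figures only —
# check your provider's current price sheet.
PRICE_PER_MTOK = {
    "premium-model": (15.00, 75.00),
    "cheap-model":   (0.25,  1.25),
}

def monthly_cost(model, in_tok_per_day, out_tok_per_day, days=30):
    """Estimated monthly USD cost for a steady daily token volume."""
    p_in, p_out = PRICE_PER_MTOK[model]
    return days * (in_tok_per_day * p_in + out_tok_per_day * p_out) / 1_000_000

# A noisy always-on automation: 2M input + 200k output tokens per day.
print(monthly_cost("premium-model", 2_000_000, 200_000))  # prints 1350.0
print(monthly_cost("cheap-model",   2_000_000, 200_000))  # prints 22.5
```

At these assumed rates the same workload differs by roughly 60x, which is the case for routing bulk traffic to a cheaper model and reserving the premium one for hard requests.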

Usage Tracking · Quota · Token Burn · Model Routing

Start here

  1. Pick one safe default provider before adding a second or third model.
  2. Decide whether you need API key auth, OAuth, or a local runtime.
  3. Set one fallback before you expose OpenClaw to always-on channels.
  4. Document expected cost and failure mode for each model you enable.
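The four steps above can be captured as one small record per enabled model. Every key and value here is an assumption made for illustration, not an OpenClaw schema; the point is that each model carries its auth decision, its fallback, and a documented cost and failure mode:

```python
# Hypothetical inventory covering steps 1-4; keys are illustrative only.
MODELS = [
    {
        "name": "hosted-default",        # step 1: one safe default provider
        "auth": "api_key",               # step 2: auth decision
        "fallback": "local-ollama",      # step 3: fallback before always-on use
        "expected_cost_usd_month": 40,   # step 4: documented cost...
        "failure_mode": "quota exhausted -> route to fallback",  # ...and failure mode
    },
    {
        "name": "local-ollama",
        "auth": "none",                  # local runtime, no provider auth
        "fallback": None,                # end of the chain
        "expected_cost_usd_month": 0,
        "failure_mode": "host offline -> refuse with error",
    },
]

# Sanity check: every enabled model documents cost and failure mode.
assert all("expected_cost_usd_month" in m and "failure_mode" in m for m in MODELS)
```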