dotbot is a multi-provider AI development framework. Instead of locking you into a single AI service, it lets you switch between Claude, Codex, and Gemini at any time from the Settings tab in the dashboard, or mix providers across tasks by specifying a model override in workflow.yaml. Each provider ships with its own CLI wrapper and stream parser, so the rest of the workflow engine stays unchanged regardless of which provider you choose.

Supported providers

The three supported providers each expose different permission modes that control how autonomous the AI is during execution.
| Provider | CLI binary | Permission modes | Notes |
| --- | --- | --- | --- |
| Claude | claude | bypass, auto | auto uses AI-classified safety checks; bypass skips all permission prompts |
| Codex | codex | bypass, full-auto | full-auto runs without any confirmation steps |
| Gemini | gemini | YOLO, auto-edit | YOLO grants full file-system access with no confirmation |
bypass, full-auto, and YOLO modes skip all confirmation prompts. Only use these modes in isolated environments where unreviewed file-system changes are acceptable.

Switching providers

Open the web dashboard and go to the Settings tab. Select your preferred provider from the Provider dropdown. The change takes effect for the next task that starts — running tasks are not interrupted. You can also set the provider directly in settings.default.json:
settings.default.json
{
  "provider": "claude",
  "permission_mode": null
}
Set permission_mode to a valid value from the table above to override the provider’s default. Leave it null to use each provider’s own default mode.
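For example, to pin Claude and skip its permission prompts entirely (per the warning above, only do this in an isolated environment), the override looks like this:

```json
{
  "provider": "claude",
  "permission_mode": "bypass"
}
```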

Per-task model overrides

Individual tasks in workflow.yaml can specify a model that overrides the process-level default set in settings.default.json. This is the primary way to reduce costs — use a lighter model for straightforward tasks and reserve a larger model for complex reasoning.
workflow.yaml
tasks:
  - name: "Plan Internet Research"
    type: task_gen
    workflow: "02a-plan-internet-research.md"
    model: Sonnet          # lighter task — Sonnet is sufficient

  - name: "Synthesise Research"
    type: prompt
    workflow: "04-post-research-review.md"
    model: Opus            # explicit large model for open-ended synthesis
Use Sonnet (or an equivalent mid-tier model) for task_gen and structured output tasks. Save Opus for open-ended analysis and synthesis tasks. Per-task model selection can meaningfully reduce token spend on large pipelines without sacrificing quality where it matters.

Setting analysis and execution models separately

The analysis.model and execution.model fields in settings.default.json set the baseline for all tasks that do not declare their own model. You can point analysis and execution at different models to balance cost and quality across phases:
settings.default.json
{
  "analysis": {
    "model": "Opus"
  },
  "execution": {
    "model": "Sonnet"
  }
}
Any task-level model field takes precedence over both of these baseline values.

Model resolution order

When dotbot resolves which model to use for a task, it evaluates the following sources in order, from highest to lowest priority:
  1. The model field on the individual task in workflow.yaml
  2. The execution.model (or analysis.model) field in settings.default.json
  3. The provider’s built-in default
The first non-null value in this chain wins.
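The chain reduces to "first non-null wins", which can be sketched in a few lines of Python (illustrative only; resolve_model is a hypothetical helper, not a real dotbot API):

```python
def resolve_model(task_model=None, settings_model=None, provider_default=None):
    # Priority: workflow.yaml task field > settings.default.json baseline > provider default.
    return next(
        (m for m in (task_model, settings_model, provider_default) if m is not None),
        None,
    )

resolve_model("Sonnet", "Opus", "provider-default")  # task override wins -> "Sonnet"
resolve_model(None, "Opus", "provider-default")      # baseline applies  -> "Opus"
resolve_model(None, None, "provider-default")        # falls through     -> "provider-default"
```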

Provider detection in the dashboard

When you open the Settings tab, dotbot runs a detection pass for each known provider:
  • Installed — checks whether the CLI executable is on your PATH.
  • Version — calls the CLI’s version flag and parses the output.
  • Auth status — verifies that the expected environment variable (ANTHROPIC_API_KEY, OPENAI_API_KEY, or GEMINI_API_KEY) is present and that the provider responds to a lightweight probe.
Providers that are not installed or not authenticated are flagged as unavailable so you can resolve issues before starting a workflow run.
Authentication is validated against environment variables at detection time. If you update a key, reopen the Settings tab to trigger a fresh detection pass.
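The first checks of the detection pass can be approximated in Python (a rough sketch: detect is a hypothetical helper, the "--version" flag is an assumption about each CLI, and the lightweight network probe is omitted):

```python
import os
import shutil
import subprocess

def detect(binary: str, env_key: str) -> dict:
    """Approximate one provider's detection pass: installed, version, auth."""
    path = shutil.which(binary)  # Installed: is the CLI executable on PATH?
    version = None
    if path:
        try:
            # "--version" is an assumed flag; dotbot parses whatever the
            # CLI's version flag actually prints.
            out = subprocess.run([binary, "--version"],
                                 capture_output=True, text=True, timeout=10)
            version = out.stdout.strip() or None
        except (OSError, subprocess.TimeoutExpired):
            pass
    authed = env_key in os.environ  # Auth: expected API key variable present?
    return {"installed": path is not None,
            "version": version,
            "authenticated": authed}

# Per the provider table, Claude's binary is `claude` and its key is ANTHROPIC_API_KEY:
status = detect("claude", "ANTHROPIC_API_KEY")
```

A provider that fails any of these checks would be flagged as unavailable, matching the dashboard behavior described above.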