Every task dotbot works on goes through two distinct AI sessions: an analysis phase that resolves ambiguity and builds a context package, and an implementation phase that consumes that package to write code, run tests, and commit. Keeping the two phases separate means the implementer gets a clean, focused brief rather than a large open-ended codebase dump — and it lets dotbot propose task splits or collect human input before any code is written.

Phase 1 — Analysis

The analysis phase runs the 98-analyse-task.md workflow on a task in todo state. The AI agent:
  1. Explores the codebase and identifies the files that will be affected
  2. Builds a context package — a structured brief saved alongside the task
  3. Evaluates whether the task is well-defined, or whether it needs to be split into smaller pieces
  4. Requests human input if a question cannot be answered by reading the codebase
During this phase the task makes two transitions:
todo → analysing → analysed
Tasks of type script, mcp, or task_gen skip the analysis phase entirely. They auto-promote to analysed and proceed directly to execution.
If the agent determines the task effort is XL, it may propose splitting the task into smaller sub-tasks. If analysis.auto_approve_splits is enabled in your settings, the split is applied automatically; otherwise the dashboard shows a confirmation prompt for you to review.
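The analysis-phase state handling above can be sketched as a small transition function. This is an illustration only — the function and field names are assumptions, not dotbot's real internals — with the script/mcp/task_gen skip rule captured as SKIP_ANALYSIS:

```python
# Hypothetical sketch of the analysis phase; names are illustrative.
SKIP_ANALYSIS = {"script", "mcp", "task_gen"}   # these task types skip Phase 1

def analyse(task: dict) -> dict:
    """Move a todo task through the analysis phase (todo → analysing → analysed)."""
    assert task["state"] == "todo"
    if task["type"] in SKIP_ANALYSIS:
        task["state"] = "analysed"              # auto-promote straight to execution
        return task
    task["state"] = "analysing"                 # Phase 1 AI explores the codebase
    task["context_package"] = "structured brief"  # saved alongside the task
    task["state"] = "analysed"                  # context ready for Phase 2
    return task
```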

Phase 2 — Implementation

The implementation phase runs the 99-autonomous-task.md workflow on a task in analysed state. The AI agent:
  1. Reads the context package produced by analysis
  2. Writes code in the task’s isolated git worktree
  3. Runs the project’s test suite and verify hooks
  4. Commits changes with a tag in the format [task:XXXXXXXX]
During this phase the task transitions:
analysed → in-progress → done
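The Phase 2 transitions can be sketched the same way, assuming a dict-shaped task record. The field names and commit-message text below are illustrative, not dotbot's actual implementation:

```python
# Illustrative Phase 2 loop: analysed → in-progress → done.
def implement(task: dict) -> dict:
    assert task["state"] == "analysed"
    task["state"] = "in-progress"
    brief = task.get("context_package", "")     # focused brief from Phase 1
    # ... write code in the task's isolated git worktree ...
    # ... run the project's test suite and verify hooks ...
    task["commit_message"] = f"feat: implement change [task:{task['id']}]"
    task["state"] = "done"
    return task
```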
Every commit message produced by dotbot includes the short task ID tag so you can trace any change back to the task that produced it:
feat: add user authentication [task:a1b2c3d4]

Multi-slot concurrent execution

dotbot can run multiple tasks from the same workflow in parallel. The workflow engine uses slot-aware locking so that up to N analysis processes and N execution processes can run simultaneously without stepping on each other. Each concurrent task runs in its own git worktree (see Per-task git worktree isolation), so there are no file conflicts. You control the concurrency limit via the execution.max_concurrent setting in settings.default.json or from the Settings tab in the dashboard.
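The slot limit can be illustrated with a bounded semaphore. This is a simplified stand-in for the workflow engine's slot-aware locking, assuming execution.max_concurrent is set to 2:

```python
import threading

# Simplified sketch: at most MAX_CONCURRENT tasks hold a slot at once.
MAX_CONCURRENT = 2                    # stand-in for execution.max_concurrent
slots = threading.BoundedSemaphore(MAX_CONCURRENT)
results = []

def run_task(task_id: str) -> None:
    with slots:                       # blocks until one of the N slots is free
        results.append(task_id)       # stand-in for running the task in its worktree

threads = [threading.Thread(target=run_task, args=(f"task-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → ['task-0', 'task-1', 'task-2', 'task-3']
```

All four tasks complete, but never more than two hold a slot at the same time.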

Per-task model selection

Each task in a workflow can specify a model field that overrides the process-level default. Use cheaper models (Sonnet) for simple tasks and reserve more capable models (Opus) for complex ones; per-task model selection reduces token spend without sacrificing quality where it matters. You can also configure defaults for analysis and execution separately in your settings:
{
  "analysis": { "model": "Sonnet" },
  "execution": { "model": "Opus" }
}
Model resolution order: task-level override → UI settings → settings.default.json → provider default.
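That precedence can be sketched as a first-non-empty lookup. The function below illustrates the resolution order only; it is not dotbot's actual resolver:

```python
# Illustrative resolver for the documented precedence:
# task-level override → UI settings → settings.default.json → provider default.
def resolve_model(task, ui_settings, file_settings, phase="execution",
                  provider_default="Sonnet"):
    for candidate in (task.get("model"),
                      ui_settings.get(phase, {}).get("model"),
                      file_settings.get(phase, {}).get("model")):
        if candidate:
            return candidate
    return provider_default

file_settings = {"analysis": {"model": "Sonnet"}, "execution": {"model": "Opus"}}
print(resolve_model({}, {}, file_settings))                    # → Opus
print(resolve_model({"model": "Sonnet"}, {}, file_settings))   # → Sonnet
```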

Process types

dotbot runs different process types depending on what the workflow needs to accomplish. You can see active processes in real time on the Processes tab in the dashboard.
analysis — Runs the analysis phase on a specific task or task queue
execution — Runs the implementation phase on a specific task or task queue
task-runner — Unified analyse-then-execute loop per task (combines both phases)
kickstart — Product setup flow (interview → documents → task roadmap)
planning — Ad-hoc planning prompt (no task context)
commit — Commits bot state to the repo
task-creation — Creates tasks from a prompt without running the full kickstart pipeline
For each running process, the tab shows its type, active model, current task, and elapsed time.

Flow diagram

┌──────────────┐     ┌───────────────┐     ┌───────────────────┐
│   todo       │────▶│   analysing   │────▶│    analysed       │
│  (waiting)   │     │ (Phase 1 AI)  │     │ (context ready)   │
└──────────────┘     └───────────────┘     └────────┬──────────┘

                                                    │
                           ┌────────────────────────▼──────────────────────────┐
                           │               Implementation phase                │
                           │   analysed → in-progress → done                   │
                           │   (Phase 2 AI — writes code, tests, commits)      │
                           └───────────────────────────────────────────────────┘
Human-in-the-loop transitions can occur at any point during analysis. When the agent cannot resolve a question from the codebase, it marks the task as needs-input and routes the question to a stakeholder via Teams, Email, or Jira. Once an answer is received, the task resumes: needs-input → analysing.
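That pause-and-resume can be sketched with two hypothetical helpers; the actual routing to Teams, Email, or Jira is not shown:

```python
# Illustrative human-in-the-loop transitions; helper names are assumptions.
def request_input(task: dict, question: str, channel: str = "Teams") -> dict:
    """Park an analysing task until a stakeholder answers."""
    assert task["state"] == "analysing"
    task["state"] = "needs-input"
    task["pending_question"] = {"text": question, "channel": channel}
    return task

def receive_answer(task: dict, answer: str) -> dict:
    """Resume analysis once an answer arrives (needs-input → analysing)."""
    assert task["state"] == "needs-input"
    task.setdefault("answers", []).append(answer)
    task.pop("pending_question", None)
    task["state"] = "analysing"       # analysis resumes with the new context
    return task
```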