Every task in dotbot’s queue has a state field that tracks where it sits in its lifecycle. The state machine is intentional: dotbot never skips states, every transition is driven by a named MCP tool call, and the full history is visible in .bot/workspace/tasks/. Understanding the states helps you interpret the dashboard, write conditional workflow steps, and debug stuck tasks.

Full state machine

todo

 ├──▶ analysing ──▶ analysed ──▶ in-progress ──▶ done
 │         │
 │         └──▶ needs-input ──▶ (back to analysing when answered)
 │
 └──▶ skipped
Tasks live in a directory that matches their current state. For example, a task in the analysed state has its JSON file at .bot/workspace/tasks/analysed/<task-id>.json.
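Because the directory name doubles as the state, you can inspect the queue with nothing but a filesystem walk. A minimal sketch, assuming only the layout described above (the workspace root path is whatever your checkout uses):

```python
# Sketch: count tasks per state by scanning the documented layout,
# .bot/workspace/tasks/<state>/<task-id>.json. Nothing here is a
# dotbot API -- it is a plain directory scan.
from pathlib import Path

STATES = ["todo", "analysing", "analysed", "in-progress",
          "done", "needs-input", "skipped"]

def count_tasks(workspace: Path) -> dict[str, int]:
    """Return {state: number of task JSON files} for each known state."""
    # Path.glob on a missing directory simply yields nothing,
    # so states with no directory yet count as zero.
    return {
        state: len(list((workspace / state).glob("*.json")))
        for state in STATES
    }
```

Pointing `count_tasks` at `.bot/workspace/tasks/` gives you the same per-state totals the dashboard shows.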

State descriptions

State         Meaning
-----         -------
todo          Created and waiting to be picked up; ordered by priority
analysing     Analysis phase is running; AI is exploring the codebase and building a context package
analysed      Analysis is complete; context package is attached and ready for implementation
in-progress   Implementation phase is running; AI is writing code, running tests, and preparing commits
done          Task is complete; changes are committed and squash-merged to main
needs-input   Analysis is paused; the AI has a question routed to a stakeholder
skipped       Task was intentionally skipped; does not block downstream tasks

todo

The task has been created and is waiting to be picked up. Tasks in todo are ordered by priority (lower number runs first). The analysis process picks the next available todo task for the active workflow.
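The "lower number runs first" rule can be sketched in a few lines. The task dict shape and the priority field name are illustrative assumptions, not dotbot's internal representation:

```python
# Sketch of the pick-next rule: among todo tasks, the lowest
# priority number wins. Dict keys here are assumptions.
def next_todo_task(tasks):
    """Return the todo task with the lowest priority number, or None."""
    todos = [t for t in tasks if t["state"] == "todo"]
    return min(todos, key=lambda t: t["priority"], default=None)
```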

analysing

The analysis phase is running. The AI agent is exploring the codebase, identifying affected files, and building a context package. The task was transitioned here by task_mark_analysing.
If a process crashes during analysis, the task may be left in analysing with no active process. Run dotbot doctor to detect and reset stale locks.
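For intuition, here is the kind of check a doctor-style command might run, under the assumption that an analysing task records the pid of its analysis process; the `pid` field is hypothetical and the real task file layout may differ. (The signal-0 trick is POSIX-specific.)

```python
# Hypothetical stale-lock check: a task that claims to be analysing
# but whose recorded process no longer exists is stale.
import os

def is_stale(task: dict) -> bool:
    """True if the task is analysing but its process is gone."""
    if task.get("state") != "analysing":
        return False
    pid = task.get("pid")
    if pid is None:
        return True  # analysing with no recorded process at all
    try:
        os.kill(pid, 0)   # signal 0: existence check only, sends nothing
        return False      # process is alive
    except ProcessLookupError:
        return True       # no such process: stale lock
    except PermissionError:
        return False      # exists but owned by another user; treat as live
```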

analysed

Analysis is complete. The context package is attached to the task file and is ready for the implementation phase. The task was transitioned here by task_mark_analysed. If the analysis agent determined the task effort is XL, it may have proposed splitting the task before marking it analysed. See Auto-approve splits below.

in-progress

The implementation phase is running. The AI agent is writing code in the task’s isolated worktree, running tests, and preparing commits. The task was transitioned here by task_mark_in_progress.

done

The task is complete. Code changes have been committed with a [task:XXXXXXXX] tag and squash-merged to the main branch. The worktree and task branch have been cleaned up. The task was transitioned here by task_mark_done.
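The commit tag makes it easy to map history back to tasks. A sketch, assuming only the `[task:...]` shape shown above; the exact length and alphabet of the id are not specified here, so the regex stays permissive:

```python
# Sketch: recover the task id from a commit subject line tagged
# [task:<id>]. The character class is an assumption.
import re

TASK_TAG = re.compile(r"\[task:([A-Za-z0-9-]+)\]")

def task_id_from_commit(message: str):
    """Return the tagged task id, or None if no tag is present."""
    m = TASK_TAG.search(message)
    return m.group(1) if m else None
```

Running this over `git log --format=%s` output links each squash-merge on main back to its task file.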

needs-input

The analysis agent encountered a question it cannot answer from the codebase alone. The task is paused and the question has been routed to a stakeholder via Teams, Email, or Jira. The task was transitioned here by task_mark_needs_input. Once a human answers the question via task_answer_question, the task returns to analysing and the session resumes with the answer included in context.
The question_timeout_hours setting controls how long dotbot waits for an answer before flagging the task as overdue. Questions that exceed the timeout appear on the Overview tab in the dashboard.

skipped

The task was intentionally skipped. This applies to optional tasks whose condition evaluated to false, or tasks that were explicitly skipped via task_mark_skipped. Skipped tasks do not block downstream tasks that depend on them via depends_on.
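The dependency rule above reduces to: a task is unblocked when every id in its depends_on list is either done or skipped. A sketch (the mapping of task ids to states is an assumption for illustration):

```python
# Sketch of the depends_on rule: skipped counts as satisfied,
# exactly like done, so skipping never blocks downstream work.
def is_unblocked(task: dict, states_by_id: dict[str, str]) -> bool:
    """True if every dependency is done or skipped."""
    return all(
        states_by_id.get(dep) in ("done", "skipped")
        for dep in task.get("depends_on", [])
    )
```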

Auto-approve splits

When the analysis agent determines a task is too large (XL effort), it proposes splitting it into smaller sub-tasks. You control what happens next with the analysis.auto_approve_splits setting:
  • false (default) — The dashboard shows a confirmation prompt. You review the proposed sub-tasks and approve or reject the split using task_approve_split.
  • true — The split is applied automatically without human confirmation.
Keep auto_approve_splits off during early project setup when you want to stay closely involved in how the work is broken down. Switch it on for batch automation runs where speed matters more than supervision.
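The branch is small enough to sketch. This assumes the setting arrives as a parsed config dict whose nesting mirrors the documented key `analysis.auto_approve_splits`, with `false` as the default; the callback names are placeholders:

```python
# Sketch of the split-approval gate: auto-apply when the setting is
# on, otherwise route to the dashboard confirmation prompt.
def handle_split_proposal(config: dict, apply_split, prompt_human):
    """Apply the split automatically or ask a human, per config."""
    auto = config.get("analysis", {}).get("auto_approve_splits", False)
    if auto:
        return apply_split()
    return prompt_human()
```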

Split threshold

dotbot’s analysis agent classifies each task by effort: XS, S, M, L, or XL. Only XL tasks trigger a split proposal. The classification is made by the AI based on the context package — the number of files affected, complexity of the change, and breadth of test coverage required.
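The threshold itself is a one-line predicate over the documented effort classes:

```python
# Sketch of the split threshold: only XL triggers a proposal.
EFFORTS = ("XS", "S", "M", "L", "XL")

def should_propose_split(effort: str) -> bool:
    """True only for XL; rejects effort labels outside the scale."""
    if effort not in EFFORTS:
        raise ValueError(f"unknown effort class: {effort}")
    return effort == "XL"
```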

Task priority ordering

Tasks within the same workflow run in ascending priority order. Tasks with the same priority may run concurrently when multiple slots are available. Priority is set in workflow.yaml and can also be patched directly on the task JSON file.
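The scheduling order can be pictured as batches: sort ascending by priority, and each run of equal priorities forms a group that may execute concurrently when slots allow. A sketch under that reading (task dict shape is an assumption):

```python
# Sketch: group tasks into priority batches, lowest first.
# Tasks inside one batch are candidates for concurrent slots.
from itertools import groupby

def priority_batches(tasks):
    """Yield lists of tasks sharing a priority, lowest priority first."""
    ordered = sorted(tasks, key=lambda t: t["priority"])
    for _, batch in groupby(ordered, key=lambda t: t["priority"]):
        yield list(batch)
```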

MCP tools that drive transitions

The dotbot MCP server exposes one tool per state transition. These are called by the AI agent during autonomous execution, but you can also call them manually from your AI tool’s MCP interface:
Tool                     Transition
----                     ----------
task_mark_analysing      todo → analysing
task_mark_analysed       analysing → analysed
task_mark_in_progress    analysed → in-progress
task_mark_done           in-progress → done
task_mark_needs_input    analysing → needs-input
task_mark_skipped        todo → skipped
task_mark_todo           any state → todo (reset)
task_answer_question     attaches a human answer; returns the task to analysing
task_approve_split       confirms a proposed task split
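The table collapses naturally into a transition map. Here is a sketch of a guard that rejects any move the state machine does not allow; the reset (task_mark_todo) is modelled as legal from every state, per the table, and this is an illustration rather than dotbot's internal code:

```python
# Sketch: the documented tool-to-transition table as data, plus a
# guard. Anything not in the map is an illegal transition.
TRANSITIONS = {
    "task_mark_analysing":   {"todo": "analysing"},
    "task_mark_analysed":    {"analysing": "analysed"},
    "task_mark_in_progress": {"analysed": "in-progress"},
    "task_mark_done":        {"in-progress": "done"},
    "task_mark_needs_input": {"analysing": "needs-input"},
    "task_mark_skipped":     {"todo": "skipped"},
    "task_answer_question":  {"needs-input": "analysing"},
}

def apply_tool(tool: str, state: str) -> str:
    """Return the new state, or raise if the move is not allowed."""
    if tool == "task_mark_todo":
        return "todo"  # reset is legal from any state
    try:
        return TRANSITIONS[tool][state]
    except KeyError:
        raise ValueError(f"{tool} is not valid from state {state!r}") from None
```

Because every transition is a named tool call, a map like this is also a handy reference when you drive transitions manually from your AI tool's MCP interface.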