dotbot integrates testing directly into the task lifecycle through verify hooks — numbered PowerShell scripts that run automatically after each AI task completes. Before marking a task as done and squash-merging it to your main branch, dotbot runs every verify hook in ascending numeric order. If any hook exits with a non-zero code, the task is flagged and the merge is blocked.

How verify hooks work

Each installed stack contributes numbered verify scripts to .bot/hooks/verify/. dotbot runs them in order from lowest to highest prefix number. The default hooks included with a fresh installation are:
Script                 What it checks
00-privacy-scan.ps1    Scans staged files for secrets or sensitive data using gitleaks
01-git-clean.ps1       Confirms the working tree has no uncommitted changes
02-git-pushed.ps1      Verifies the task branch has been pushed to the remote
When you install a technology stack (for example, dotnet), it adds additional numbered scripts that run your tech-specific test commands:
Stack     Additional hook        What it runs
dotnet    03-dotnet-build.ps1    dotnet build
dotnet    04-dotnet-test.ps1     dotnet test
Hooks from all installed stacks run together in a single ordered sequence, giving you a composable quality gate pipeline.
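
The ordering and fail-fast behavior can be pictured as a simple loop. The following Bash sketch is an illustration only, not dotbot's actual implementation (which invokes the PowerShell hooks itself); it uses throwaway stand-in hooks:

```shell
#!/usr/bin/env bash
# Conceptual sketch of a verify pipeline: run every hook in ascending name
# order, and stop at the first one that exits non-zero.
run_verify_hooks() {
  local dir="$1" hook
  for hook in "$dir"/*; do              # glob expands in ascending name order
    [ -f "$hook" ] || continue
    if ! "$hook"; then
      echo "BLOCKED: $(basename "$hook") exited non-zero" >&2
      return 1                          # first failure stops the pipeline
    fi
  done
  return 0
}

# Demo with throwaway stand-in hooks: 00 passes, 01 fails.
dir=$(mktemp -d)
printf '#!/bin/sh\nexit 0\n' > "$dir/00-ok";  chmod +x "$dir/00-ok"
printf '#!/bin/sh\nexit 1\n' > "$dir/01-bad"; chmod +x "$dir/01-bad"
if run_verify_hooks "$dir"; then verdict="merged"; else verdict="blocked"; fi
echo "verdict: $verdict"                # → verdict: blocked
rm -rf "$dir"
```

Because the sequence is ordered by numeric prefix, a failing early hook (such as the privacy scan) prevents later, more expensive hooks (such as a full test run) from executing at all.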

Adding a custom verify hook

To add your own test step, create a numbered PowerShell script in .bot/hooks/verify/. Choose a number higher than the built-in hooks to ensure your tests run after the standard checks:
#!/usr/bin/env pwsh
# .bot/hooks/verify/10-run-my-tests.ps1

Write-Output "Running integration tests..."

$result = & npx jest --ci 2>&1
if ($LASTEXITCODE -ne 0) {
    Write-Output $result
    Write-Error "Integration tests failed."
    exit 1
}

Write-Output "All tests passed."
Use a prefix between 10 and 99 for custom hooks so there is room to add more standard hooks later without renumbering your scripts.

Viewing hook results

Verify hook output is captured in the task’s session log and shown in the dashboard. When a hook fails:
  1. The task stays in progress — it is not marked done
  2. The failure reason and hook output appear in the session log under .bot/workspace/sessions/
  3. The Workflow tab in the dashboard shows the task as blocked with the hook’s exit code
To investigate, open the session log for the failed task and look for the hook name and the error output it produced.

Skipping verify hooks

script, mcp, and task_gen task types skip verify hooks automatically because they do not involve AI code generation. For prompt and prompt_template tasks, hooks always run before the task is considered complete.
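The gating rule above can be summarized in a small sketch (an illustration of the documented behavior, not dotbot's source, and written in shell for brevity):

```shell
# Decide whether verify hooks run for a given task type.
hooks_for() {
  case "$1" in
    script|mcp|task_gen)    echo "skip" ;;   # no AI code generation involved
    prompt|prompt_template) echo "run"  ;;   # always verified before completion
    *)                      echo "unknown" ;;
  esac
}
echo "script: $(hooks_for script)"   # → script: skip
echo "prompt: $(hooks_for prompt)"   # → prompt: run
```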
Run dotbot doctor to check whether verify hooks in your project have any syntax issues or reference missing commands. Doctor scans hook scripts and reports problems before you start a workflow run.

Running the dev environment for tests

Some verify hooks require a running development server. dotbot provides two MCP tools — dev_start and dev_stop — that the AI agent calls at the start and end of task execution when a running dev environment is needed. See Plans & Steering for details on these tools.
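
As an illustration, a dev-server-dependent hook typically probes a health endpoint before running its tests. The sketch below is hypothetical and written in portable shell for brevity (a real dotbot hook would be a numbered .ps1 script); the URL, port, and /health path are assumptions, and dev_start is expected to have run already:

```shell
# Hypothetical check a dev-server-dependent verify hook might perform.
# The endpoint is an assumption; adjust to wherever your dev server listens.
health_url="http://localhost:3000/health"
if command -v curl >/dev/null 2>&1 \
   && curl -fsS --max-time 5 "$health_url" >/dev/null 2>&1; then
  server_state="up"
else
  server_state="down"   # a real hook would 'exit 1' here to block the merge
fi
echo "dev server: $server_state"
```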