dotbot integrates testing directly into the task lifecycle through verify hooks — numbered PowerShell scripts that run automatically after each AI task completes. Before marking a task as done and squash-merging it to your main branch, dotbot runs every verify hook in ascending numeric order. If any hook exits with a non-zero code, the task is flagged and the merge is blocked.
## How verify hooks work
Each installed stack contributes numbered verify scripts to .bot/hooks/verify/. dotbot runs them in order from lowest to highest prefix number. The default hooks included with a fresh installation are:
| Script | What it checks |
|---|---|
| 00-privacy-scan.ps1 | Scans staged files for secrets or sensitive data using gitleaks |
| 01-git-clean.ps1 | Confirms the working tree has no uncommitted changes |
| 02-git-pushed.ps1 | Verifies the task branch has been pushed to the remote |
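As an illustration of what a hook like the privacy scan might look like, here is a minimal sketch. This is not the shipped 00-privacy-scan.ps1; it assumes gitleaks is on the PATH and uses its `protect --staged` mode to scan staged changes:

```powershell
#!/usr/bin/env pwsh
# Hypothetical sketch of a privacy-scan hook; the real script may differ.
# 'gitleaks protect --staged' scans the staged changes for secrets.
& gitleaks protect --staged
if ($LASTEXITCODE -ne 0) {
    Write-Error "Potential secrets detected in staged files."
    exit 1
}
exit 0
```

The key convention is the exit code: any non-zero exit blocks the task from being marked done.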
When you install a technology stack (for example, dotnet), it adds additional numbered scripts that run your tech-specific test commands:
| Stack | Additional hook | What it runs |
|---|---|---|
| dotnet | 03-dotnet-build.ps1 | dotnet build |
| dotnet | 04-dotnet-test.ps1 | dotnet test |
Hooks from all installed stacks run together in a single ordered sequence, giving you a composable quality gate pipeline.
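The ordering semantics described above can be sketched roughly as follows. This is an illustrative loop, not dotbot's actual runner: with two-digit numeric prefixes, sorting by file name gives the ascending numeric order.

```powershell
# Illustrative sketch: run every verify hook in ascending prefix order
# and stop at the first non-zero exit code.
$hooks = Get-ChildItem .bot/hooks/verify/*.ps1 | Sort-Object Name
foreach ($hook in $hooks) {
    & $hook.FullName
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Verify hook $($hook.Name) failed with exit code $LASTEXITCODE."
        exit 1   # the task is flagged and the squash-merge is blocked
    }
}
```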
## Adding a custom verify hook
To add your own test step, create a numbered PowerShell script in .bot/hooks/verify/. Choose a number higher than the built-in hooks to ensure your tests run after the standard checks:
```powershell
#!/usr/bin/env pwsh
# .bot/hooks/verify/10-run-my-tests.ps1
Write-Output "Running integration tests..."
$result = & npx jest --ci 2>&1
if ($LASTEXITCODE -ne 0) {
    # Surface the captured test output so it appears in the session log.
    Write-Output $result
    Write-Error "Integration tests failed."
    exit 1
}
Write-Output "All tests passed."
```
Use a prefix between 10 and 99 for custom hooks so there is room to add more standard hooks later without renumbering your scripts.
## Viewing hook results
Verify hook output is captured in the task’s session log and shown in the dashboard. When a hook fails:
- The task stays in-progress; it is not marked done
- The failure reason and hook output appear in the session log under .bot/workspace/sessions/
- The Workflow tab in the dashboard shows the task as blocked with the hook's exit code
To investigate, open the session log for the failed task and look for the hook name and the error output it produced.
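From a PowerShell prompt, one way to pull up the most recent session log is the following. This assumes logs are plain files under the sessions directory, which may not match your layout exactly:

```powershell
# Show the newest session log; the failed task's log is usually the latest.
Get-ChildItem .bot/workspace/sessions/ -File |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content
```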
## Skipping verify hooks
Tasks of type script, mcp, and task_gen skip verify hooks automatically because they do not involve AI code generation. For prompt and prompt_template tasks, hooks always run before the task is considered complete.
Run dotbot doctor to check whether verify hooks in your project have any syntax issues or reference missing commands. Doctor scans hook scripts and reports problems before you start a workflow run.
## Running the dev environment for tests
Some verify hooks require a running development server. dotbot provides two MCP tools — dev_start and dev_stop — that the AI agent calls at the start and end of task execution when a running dev environment is needed. See Plans & Steering for details on these tools.
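A custom hook that depends on the dev environment might first confirm the server is reachable before running its tests. The script name, port, and test command below are assumptions for illustration only:

```powershell
#!/usr/bin/env pwsh
# .bot/hooks/verify/20-e2e-tests.ps1 (hypothetical example)
# Assumes the agent already called dev_start and the app listens on port 3000.
try {
    Invoke-WebRequest -Uri "http://localhost:3000/" -UseBasicParsing -TimeoutSec 5 | Out-Null
} catch {
    Write-Error "Dev server is not reachable; cannot run end-to-end tests."
    exit 1
}
& npx playwright test
exit $LASTEXITCODE
```

Failing fast with a clear error when the server is down makes the session log far easier to read than a wall of connection-refused test failures.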