The Bugzy dashboard at bugzy.ai is the primary interface for managing projects, reviewing test results, and configuring integrations. All sections are accessible from the left sidebar after signing in. The team sidebar navigation includes: Projects, Channels, Members, Settings, and Billing.

## Documentation Index
Fetch the complete documentation index at: https://www.bugzy.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
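Since `llms.txt` files are plain text built around markdown-style links, the index can be fetched and scanned programmatically. A minimal sketch in Python; the `extract_links` helper and its regex are illustrative assumptions about the file's link format, not part of Bugzy:

```python
import re
import urllib.request

INDEX_URL = "https://www.bugzy.ai/docs/llms.txt"

def extract_links(text: str) -> list[tuple[str, str]]:
    """Pull (title, url) pairs out of markdown-style links: [title](url)."""
    return re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", text)

def fetch_index(url: str = INDEX_URL) -> list[tuple[str, str]]:
    """Download the docs index and return its linked pages."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return extract_links(resp.read().decode("utf-8"))
```

Calling `fetch_index()` once up front gives a crawler or agent the full page list before it explores further.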
## Projects
The main entry point. Lists all projects in your team with:

- Project name and base URL
- Last execution status and timestamp
- Test coverage summary (total test cases, pass rate)
## Test Plans
View and manage test plans for the selected project.

- Generate: Click Generate Test Plan and provide a product description. Bugzy explores your app and creates a structured `test-plan.md`.
- View/Edit: Test plans display as rendered markdown. Edit directly in the dashboard or in your Git repo — changes sync both ways.
- Regenerate: Trigger a new generation to update the plan based on application changes.
## Test Cases
Lists all test cases for the project in a table view with:

- Test case ID (e.g., TC-001)
- Title and description
- Status (active, skipped, deprecated)
- Last run result
- Associated test area
Test case files are stored in `tests/specs/`.
Individual run: Run a single test case directly from the drawer without triggering the full suite.
## Test Runs
Execution history for the project, ordered by most recent.

- Status: Running, Passed, Failed, Partially Passed
- Counts: Total tests, passed, failed, skipped
- Duration: Execution time in seconds
- Triage summary: Product bugs found, test issues auto-fixed

Filter runs with:

- Status chips: Toggle Passed, Failed, or Timed Out to show only matching runs
- Date range: Select a preset (Last 24h, 7 days, or 30 days) or view all time
- Search: Type a test case name to find runs that included that test
## Explorations
Browse exploration reports generated during onboarding and test generation tasks.

- Application structure maps
- Screenshots of discovered pages
- Feature inventories
- Navigation flow documentation
## Connections

Manage integrations with external services. Most integrations connect via Nango OAuth from the dashboard with no manual token management; the Setup column lists the exceptions.

| Integration | Purpose | Setup |
|---|---|---|
| GitHub | Repository access, webhooks, PR comments | OAuth via GitHub App |
| Slack | Test result notifications, team messaging | OAuth via Nango |
| Microsoft Teams | Test result notifications, team messaging | Azure Bot |
| Jira Cloud | Bug filing, project context | OAuth via Nango |
| Jira Server | Bug filing (on-premise) | MCP Tunnel |
| Azure DevOps | Bug filing, work item tracking | OAuth via Nango |
| Asana | Bug filing, task tracking | OAuth via Nango |
| Linear | Bug filing, issue tracking | OAuth via Nango |
| Notion | Documentation search for test context | OAuth via Nango |
| Confluence | Documentation search | OAuth via Nango |
| Zephyr Scale | Test case management | API token |
| Recall.ai | Meeting transcription | Webhook |
## Scheduling

Configure automated task execution.

### Scheduled tasks
Set up cron-based schedules for recurring test runs:

- Frequency: Hourly, daily, weekly, or custom cron expression
- Task: Which task to execute (e.g., `run-tests`, `verify-changes`)
- Parameters: Test selection, target environment, notification preferences
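Custom cron expressions use the standard five-field format: minute, hour, day-of-month, month, day-of-week. The tiny matcher below is an illustrative sketch of how such an expression gates a run (supporting only `*`, single numbers, and ranges), not Bugzy's scheduler:

```python
import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field against a value: '*', '5', or '1-5'."""
    if field == "*":
        return True
    if "-" in field:
        lo, hi = map(int, field.split("-"))
        return lo <= value <= hi
    return int(field) == value

def cron_matches(expr: str, when: datetime.datetime) -> bool:
    """True if `when` satisfies a five-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (
        _field_matches(minute, when.minute)
        and _field_matches(hour, when.hour)
        and _field_matches(dom, when.day)
        and _field_matches(month, when.month)
        # Note: Python's weekday() uses Monday=0; real cron uses Sunday=0.
        and _field_matches(dow, when.weekday())
    )
```

For example, `"0 2 * * *"` matches 02:00 every day — the shape of a nightly regression schedule.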
### Event triggers

Map external events to Bugzy tasks:

- GitHub push to main -> Run full regression
- GitHub PR opened -> Run related tests
- Deployment webhook -> Run smoke tests
- Slack message -> Handle via `handle-message` task
## Settings

Project-level configuration:

- Base URL: The application URL for browser automation
- Environment variables: Key-value pairs injected into test execution (e.g., `TEST_USER_EMAIL`, `TEST_BASE_URL`)
- Context files: Additional markdown files that provide the agent with project-specific knowledge
- Test data: Manage `.env.testdata` values for non-secret test configuration
- Project info: Name, description, and team assignment
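Assuming `.env.testdata` follows the usual dotenv `KEY=value` convention (an assumption — check your project's file), a minimal parser sketch shows the shape of the data:

```python
def parse_testdata(text: str) -> dict[str, str]:
    """Parse dotenv-style KEY=value lines, skipping blanks and # comments."""
    values: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        values[key.strip()] = value.strip()
    return values
```

Keeping such values in a file rather than in secrets makes non-sensitive test configuration reviewable alongside the tests.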
### Named environments

Environment variables support named environments — separate configurations for different targets like staging, production, or QA.

- Default variables apply to all runs when no environment is specified.
- Named environments (e.g., `staging`, `production`) override or extend the defaults. For example, a `staging` environment can override `BASE_URL` to point at your staging server while inheriting all other default variables.
- Schedules and event triggers can be pinned to a specific environment, so your nightly regression always runs against production while PR-triggered runs use staging.
