The Bugzy dashboard at bugzy.ai is the primary interface for managing projects, reviewing test results, and configuring integrations. All sections are accessible from the left sidebar after signing in.

Projects

The main entry point. Lists all projects in your team with:
  • Project name and base URL
  • Last execution status and timestamp
  • Test coverage summary (total test cases, pass rate)
Create a project: Click New Project, enter a name and base URL. The base URL is where Bugzy points the browser during test generation and execution.
Configure a project: Click into any project to access its test plans, test cases, runs, and settings.

Test Plans

View and manage test plans for the selected project.
  • Generate: Click Generate Test Plan and provide a product description. Bugzy explores your app and creates a structured test-plan.md.
  • View/Edit: Test plans display as rendered markdown. Edit directly in the dashboard or in your Git repo — changes sync both ways.
  • Regenerate: Trigger a new generation to update the plan based on application changes.
Each test plan contains test areas, scenarios, expected behaviors, and priority levels.
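Based on the structure above, a generated test-plan.md might look roughly like this (the area, scenarios, and priorities below are illustrative, not output from a real generation):

```markdown
# Test Plan

## Test Area: Authentication

### Scenario: Login with valid credentials
- Priority: High
- Expected behavior: Submitting valid credentials lands the user on the dashboard.

### Scenario: Login with an invalid password
- Priority: Medium
- Expected behavior: An error message appears and the user remains on the login page.
```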

Test Cases

Lists all test cases for the project in a table view with:
  • Test case ID (e.g., TC-001)
  • Title and description
  • Status (active, skipped, deprecated)
  • Last run result
  • Associated test area
Drawer view: Click any test case to open a detail drawer showing the full step-by-step description, preconditions, and expected results.
Code view: Switch to the code tab to see the corresponding Playwright test script in tests/specs/.
Individual run: Run a single test case directly from the drawer without triggering the full suite.
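A minimal sketch of what a spec in tests/specs/ might look like; the URL, selectors, and test title here are hypothetical, not generated by Bugzy, and running it requires a live application plus the Playwright package:

```typescript
import { test, expect } from '@playwright/test';

// TC-001 (hypothetical): verify that valid credentials sign the user in.
test('TC-001: user can sign in', async ({ page }) => {
  // Precondition: the app is reachable at the project's base URL (example value).
  await page.goto('https://staging.example.com/login');
  await page.fill('#email', process.env.TEST_USER_EMAIL ?? 'user@example.com');
  await page.fill('#password', process.env.TEST_USER_PASSWORD ?? 'secret');
  await page.click('button[type=submit]');
  // Expected result: the dashboard is shown after login.
  await expect(page).toHaveURL(/dashboard/);
});
```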

Test Runs

Execution history for the project, ordered by most recent. Each run shows:
  • Status: Running, Passed, Failed, Partially Passed
  • Counts: Total tests, passed, failed, skipped
  • Duration: Execution time in seconds
  • Triage summary: Product bugs found, test issues auto-fixed
Filtering: Use the filter bar above the test runs list to narrow results:
  • Status chips: Toggle Passed, Failed, or Timed Out to show only matching runs
  • Date range: Select a preset (Last 24h, 7 days, or 30 days) or view all time
  • Search: Type a test case name to find runs that included that test
Filters combine with AND logic and persist in the URL, so you can share a filtered view by copying the link.
Logs viewer: Click into any run to see the full execution log, including step-by-step agent actions, test output, screenshots, and triage decisions.
Trigger a run: Click Run Tests to start a new execution. Select tests by file pattern, tag, specific file, or run all.
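Because filters persist in the URL, a filtered view is just a link. A sketch of how such a shareable URL could be composed; the query-parameter names (status, range, q) are assumptions, not Bugzy's documented parameters:

```typescript
// Build a shareable test-runs URL with filters encoded as query parameters.
// Parameter names here are hypothetical.
const params = new URLSearchParams({
  status: 'failed',    // status chip
  range: '7d',         // date-range preset
  q: 'checkout flow',  // test case name search
});
const url = `https://bugzy.ai/projects/my-project/runs?${params.toString()}`;
console.log(url);
```

URLSearchParams handles the encoding (spaces become `+`), so pasting the link restores all three filters at once.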

Explorations

Browse exploration reports generated during onboarding and test generation tasks. Reports include:
  • Application structure maps
  • Screenshots of discovered pages
  • Feature inventories
  • Navigation flow documentation
Explorations are useful for onboarding new team members or auditing what Bugzy has discovered about your app.

Connections

Manage integrations with external services. Each integration connects via Nango OAuth from the dashboard — no manual token management.
| Integration     | Purpose                                    | Setup              |
| --------------- | ------------------------------------------ | ------------------ |
| GitHub          | Repository access, webhooks, PR comments   | OAuth via GitHub App |
| Slack           | Test result notifications, team messaging  | OAuth via Nango    |
| Microsoft Teams | Test result notifications, team messaging  | Azure Bot          |
| Jira Cloud      | Bug filing, project context                | OAuth via Nango    |
| Jira Server     | Bug filing (on-premise)                    | MCP Tunnel         |
| Azure DevOps    | Bug filing, work item tracking             | OAuth via Nango    |
| Asana           | Bug filing, task tracking                  | OAuth via Nango    |
| Linear          | Bug filing, issue tracking                 | OAuth via Nango    |
| Notion          | Documentation search for test context      | OAuth via Nango    |
| Confluence      | Documentation search                       | OAuth via Nango    |
| Zephyr Scale    | Test case management                       | API token          |
| Recall.ai       | Meeting transcription                      | Webhook            |
Email notifications are always available for every project — no configuration needed. Click Connect next to any integration to start the OAuth flow. Once connected, the integration is available to all tasks in the project.

Scheduling

Configure automated task execution:

Scheduled tasks

Set up cron-based schedules for recurring test runs:
  • Frequency: Hourly, daily, weekly, or custom cron expression
  • Task: Which task to execute (e.g., run-tests, verify-changes)
  • Parameters: Test selection, target environment, notification preferences
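As a sketch, a scheduled task combining these three pieces might be modeled like this; the field names and values are illustrative, not Bugzy's actual schema:

```typescript
// Hypothetical shape of a scheduled task definition.
interface ScheduledTask {
  cron: string;                     // standard 5-field cron expression
  task: string;                     // which task to execute, e.g. "run-tests"
  params: Record<string, string>;   // test selection, environment, notifications
}

// Nightly full regression at 02:00 UTC.
// "0 2 * * *" reads: minute 0, hour 2, every day, every month, every weekday.
const nightlyRegression: ScheduledTask = {
  cron: '0 2 * * *',
  task: 'run-tests',
  params: { selection: 'all', environment: 'production', notify: 'slack' },
};
console.log(nightlyRegression.cron);
```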

Event triggers

Map external events to Bugzy tasks:
  • GitHub push to main -> Run full regression
  • GitHub PR opened -> Run related tests
  • Deployment webhook -> Run smoke tests
  • Slack message -> Handle via handle-message task
Event triggers are configurable per project — you define which events trigger which tasks with which parameters.
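The mappings above can be sketched as a per-project lookup table; the event names and shape below are assumptions for illustration, not Bugzy's wire format:

```typescript
// Hypothetical event-to-task mapping for one project.
const triggers: Record<string, { task: string; params?: Record<string, string> }> = {
  'github.push.main': { task: 'run-tests', params: { selection: 'all' } },
  'github.pr.opened': { task: 'run-tests', params: { selection: 'related' } },
  'deploy.completed': { task: 'run-tests', params: { selection: 'smoke' } },
  'slack.message':    { task: 'handle-message' },
};

// Resolve the configured task for an incoming event, if any trigger matches.
function resolve(event: string) {
  return triggers[event];
}
console.log(resolve('github.push.main')?.task);
```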

Settings

Project-level configuration:
  • Base URL: The application URL for browser automation
  • Environment variables: Key-value pairs injected into test execution (e.g., TEST_USER_EMAIL, TEST_BASE_URL)
  • Context files: Additional markdown files that provide the agent with project-specific knowledge
  • Test data: Manage .env.testdata values for non-secret test configuration
  • Project info: Name, description, and team assignment
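A sketch of what a .env.testdata file might contain; only TEST_USER_EMAIL and TEST_BASE_URL are named in this doc, and the values below are examples:

```
# .env.testdata: non-secret test configuration (example values)
TEST_BASE_URL=https://staging.example.com
TEST_USER_EMAIL=qa-user@example.com
```

Because these values are non-secret, they can live alongside the test code; secrets belong in environment variables instead.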

Named environments

Environment variables support named environments — separate configurations for different targets like staging, production, or QA.
  • Default variables apply to all runs when no environment is specified.
  • Named environments (e.g., staging, production) override or extend the defaults. For example, a staging environment can override BASE_URL to point at your staging server while inheriting all other default variables.
  • Schedules and event triggers can be pinned to a specific environment, so your nightly regression always runs against production while PR-triggered runs use staging.
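The override-and-inherit behavior described above amounts to a simple merge, sketched here; variable values and any name other than BASE_URL are illustrative:

```typescript
// Defaults apply to every run; a named environment overrides or extends them.
const defaults = {
  BASE_URL: 'https://app.example.com',       // illustrative values
  TEST_USER_EMAIL: 'qa-user@example.com',
};

const environments: Record<string, Partial<typeof defaults>> = {
  staging: { BASE_URL: 'https://staging.example.com' },
};

// Later keys win: the named environment overrides BASE_URL,
// while TEST_USER_EMAIL is inherited from the defaults.
function resolveEnv(name?: string) {
  return { ...defaults, ...(name ? environments[name] : {}) };
}
console.log(resolveEnv('staging').BASE_URL);
```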
Create and manage environments in Settings > Environment Variables > Environments. Each project supports up to 10 named environments.