After your tests run, the results page gives you everything you need to understand what passed, what failed, and what to do next.

Viewing Test Run Results

  1. Navigate to Quality Checking in the sidebar
  2. Click on your document
  3. Select the run you want to review from the runs list
Each run shows a summary with:
  • Total tests — How many test cases were executed
  • Pass rate — Percentage of tests that passed
  • Failed count — Number of tests that did not pass
  • Browser — Which browser(s) the tests ran on
  • Duration — How long the run took from start to finish

Understanding Pass/Fail Status

Each test case within a run has a clear outcome:
  • Passed — All steps completed and the expected outcome was verified. What to do: nothing; your app works as expected.
  • Failed — One or more steps could not complete, or the expected outcome was not met. What to do: check the failure details and screenshots.
When a run shows “Completed” status, that means all tests finished executing. Individual tests within the run may still have failed — the run status tracks execution, not pass/fail.
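The distinction between run status and individual pass/fail can be sketched in code. This is a hypothetical data model (the field names and shapes are assumptions, not the platform's actual API): a run is "Completed" once every test has finished executing, while the pass rate is computed separately from individual outcomes.

```python
# Hypothetical sketch of a run summary; field names are illustrative,
# not the platform's actual data model.

def summarize_run(results):
    """Build a run summary from a list of per-test outcomes."""
    total = len(results)
    passed = sum(1 for r in results if r["status"] == "passed")
    return {
        "total_tests": total,
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
        "failed_count": total - passed,
        # "Completed" tracks execution only: it stays "Completed" even
        # when some individual tests failed.
        "run_status": "Completed" if all(r["finished"] for r in results) else "Running",
    }

results = [
    {"status": "passed", "finished": True},
    {"status": "failed", "finished": True},
    {"status": "passed", "finished": True},
    {"status": "passed", "finished": True},
]
summary = summarize_run(results)
```

Note that `summary["run_status"]` is "Completed" here even though one test failed, which is exactly the distinction described above.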

Reviewing Screenshots

Screenshots are captured during test execution, giving you visual proof of what happened at each stage. This is especially useful for:
  • Verifying failures — See the actual state of the page when a test failed
  • Cross-browser comparison — Compare how your app looks in Chromium vs. Firefox vs. WebKit
  • Sharing with your team — Show developers exactly what went wrong without lengthy explanations
Click on any test result to view its screenshot. If a test failed, the screenshot captures the moment of failure — the button that was missing, the error message that appeared, or the page that loaded incorrectly.

Failure Analysis

When a test fails, the system provides detailed analysis to help you understand and fix the problem:

What You Get

  • Failure type — Whether it was a locator issue (element not found), assertion failure (wrong value), timeout, flow change, or environment problem
  • Failure message — A clear description of what went wrong
  • Failure context — Additional details about the state of the page at the time of failure
  • Step-by-step log — See which steps passed and where exactly the failure occurred, with timestamps
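The failure details above can be pictured as a simple record. This is an illustrative sketch mirroring the four fields listed, not the system's real schema; the class name, field names, and sample values are all assumptions.

```python
from dataclasses import dataclass, field

# Illustrative failure record mirroring the fields listed above;
# the class and field names are assumptions, not the real schema.

FAILURE_TYPES = {"locator", "assertion", "timeout", "flow_change", "environment"}

@dataclass
class FailureDetails:
    failure_type: str   # one of FAILURE_TYPES
    message: str        # clear description of what went wrong
    context: dict       # state of the page at the time of failure
    step_log: list = field(default_factory=list)  # (timestamp, step, status)

failure = FailureDetails(
    failure_type="locator",
    message="Element #checkout-button not found",
    context={"url": "/cart", "title": "Your Cart"},
    step_log=[
        ("12:00:01", "Open /cart", "passed"),
        ("12:00:03", "Click #checkout-button", "failed"),
    ],
)
```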

Suggested Fixes

For each failure, the system may suggest a repair strategy:
  • Update locator — The element moved or its selector changed
  • Update assertion — The expected value needs adjusting
  • Update steps — The workflow has changed and steps need reordering
  • Skip test — The test may no longer be relevant
If self-healing is enabled, the system can attempt these fixes automatically. See Running Tests for details on the self-healing workflow.
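One way to picture the mapping from failure type to repair strategy is a simple lookup. This is a hypothetical sketch (the keys, strategy names, and fallback are assumptions); the real system may weigh more signals than just the failure type.

```python
# Hypothetical mapping from failure type to a suggested repair strategy,
# following the list above; names are illustrative, not the real API.

SUGGESTED_FIX = {
    "locator": "update_locator",      # element moved or selector changed
    "assertion": "update_assertion",  # expected value needs adjusting
    "flow_change": "update_steps",    # workflow changed; steps need reordering
    "obsolete": "skip_test",          # test may no longer be relevant
}

def suggest_fix(failure_type):
    # In this sketch, timeouts and environment problems get no automatic
    # suggestion: they usually need a human to look at the run conditions.
    return SUGGESTED_FIX.get(failure_type, "manual_review")
```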

Test Versioning

Every time a test case is modified — whether by you, through refinement, or by self-healing — a new version is created. This gives you a complete history of how each test has evolved.

Version History

Each version records:
  • Version number — Sequential, starting from 1
  • Change type — How the change happened (created, modified, self-healed, refined, or deleted)
  • Change reason — Why the change was made
  • Who changed it — The user who made the change, or “system” for self-healing
  • Full snapshot — The complete test case at that point in time (title, objective, steps, expected outcome, priority)
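The version record above can be sketched as an append-only history. This is an illustrative model only (class name, field names, and the sample snapshot are assumptions): each change appends a new entry, so earlier versions are never overwritten.

```python
from dataclasses import dataclass

# Sketch of a version record per the fields above; names are illustrative.

@dataclass
class TestCaseVersion:
    version: int       # sequential, starting from 1
    change_type: str   # created | modified | self_healed | refined | deleted
    change_reason: str
    changed_by: str    # a user id, or "system" for self-healing
    snapshot: dict     # full test case: title, objective, steps, outcome, priority

history = [
    TestCaseVersion(1, "created", "Initial generation", "alice",
                    {"title": "Login works", "steps": ["open /login", "submit"]}),
]

def record_change(history, change_type, reason, who, snapshot):
    """Append a new version; earlier versions are never modified."""
    history.append(TestCaseVersion(len(history) + 1, change_type, reason, who, snapshot))

record_change(history, "self_healed", "Locator updated", "system",
              {"title": "Login works", "steps": ["open /login", "click #submit"]})
```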

Comparing Versions

You can compare any two versions of a test case side by side to see exactly what changed. This is useful for:
  • Understanding what self-healing modified
  • Reviewing changes before approving them
  • Debugging why a test that used to pass is now failing
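A side-by-side comparison amounts to a field-by-field diff of two snapshots. A minimal sketch, assuming snapshots are flat dictionaries (the keys shown are illustrative):

```python
# Minimal sketch of comparing two version snapshots field by field;
# the snapshot keys are illustrative, not the real schema.

def diff_versions(old, new):
    """Return {field: (old_value, new_value)} for every field that changed."""
    return {
        key: (old.get(key), new.get(key))
        for key in set(old) | set(new)
        if old.get(key) != new.get(key)
    }

v1 = {"title": "Login works", "steps": ["open /login", "submit"], "priority": "High"}
v2 = {"title": "Login works", "steps": ["open /login", "click #submit"], "priority": "High"}
changes = diff_versions(v1, v2)
```

Here only `steps` differs, which is the kind of targeted answer you want when reviewing what self-healing modified.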

Rolling Back

If a change made things worse, roll back to any previous version:
  1. Open the test case
  2. View the version history
  3. Select the version you want to restore
  4. Click Rollback
The rollback creates a new version (so you do not lose the current state) and restores the test case to the selected version’s configuration.
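The rollback behavior described above (restore by appending, never by rewriting history) can be sketched as follows. The record layout and field names are assumptions for illustration:

```python
# Sketch of rollback: restoring an old version appends a NEW version
# rather than rewriting history, so the current state is never lost.
# The record layout is illustrative, not the platform's real schema.

def rollback(history, target_version):
    """Append a new version whose snapshot copies the target version's."""
    target = next(v for v in history if v["version"] == target_version)
    history.append({
        "version": history[-1]["version"] + 1,
        "change_type": "modified",
        "change_reason": f"Rollback to version {target_version}",
        "snapshot": dict(target["snapshot"]),
    })

history = [
    {"version": 1, "change_type": "created", "change_reason": "Initial",
     "snapshot": {"steps": ["open /login", "submit"]}},
    {"version": 2, "change_type": "self_healed", "change_reason": "Locator updated",
     "snapshot": {"steps": ["open /login", "click #submit"]}},
]
rollback(history, 1)
```

After the rollback, version 3 holds version 1's configuration, and version 2 is still in the history if you need it again.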

Exporting and Sharing Results

Use test run results to communicate with your team:
  • Share the run URL — Each run has a unique page you can link to directly
  • Screenshots — Download failure screenshots to attach to bug reports
  • Progress logs — Reference the step-by-step logs when filing issues with developers

Cross-Browser Comparison

If you ran tests across multiple browsers, the comparison view shows:
  • Which tests passed on all browsers
  • Which tests failed on specific browsers only (indicating a cross-browser bug)
  • Side-by-side results for each browser
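The comparison logic above boils down to grouping results by test across browsers. A hypothetical sketch (the result shape and browser names are assumptions): a test that fails on some browsers but not all is flagged as a likely cross-browser bug.

```python
# Sketch of grouping results by test across browsers to spot
# browser-specific failures; the result shape is illustrative.

def cross_browser_report(results):
    """results: list of {"test", "browser", "status"} dicts."""
    by_test = {}
    for r in results:
        by_test.setdefault(r["test"], {})[r["browser"]] = r["status"]
    passed_everywhere = [t for t, b in by_test.items()
                         if all(s == "passed" for s in b.values())]
    # Failing on only some browsers suggests a cross-browser bug.
    browser_specific = {t: [br for br, s in b.items() if s == "failed"]
                        for t, b in by_test.items()
                        if 0 < sum(s == "failed" for s in b.values()) < len(b)}
    return passed_everywhere, browser_specific

results = [
    {"test": "login", "browser": "chromium", "status": "passed"},
    {"test": "login", "browser": "firefox", "status": "passed"},
    {"test": "login", "browser": "webkit", "status": "passed"},
    {"test": "checkout", "browser": "chromium", "status": "passed"},
    {"test": "checkout", "browser": "firefox", "status": "failed"},
    {"test": "checkout", "browser": "webkit", "status": "passed"},
]
ok, suspect = cross_browser_report(results)
```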

Continuous Improvement Tips

  • Run tests regularly — Schedule runs after each deployment to catch regressions early.
  • Keep documents updated — When requirements change, upload updated documents and regenerate test cases. The versioning system ensures you never lose previous tests.
  • Pay attention to patterns — If the same tests keep failing and self-healing, it may indicate an area of your application that needs more stability.
  • Start small, then expand — Begin with your most critical user flows (signup, checkout, core features) and gradually add coverage for edge cases.
  • Use priority levels — Focus on High-priority test failures first. Low-priority failures are worth tracking but may not need immediate action.

Next Steps