Selecting Browsers
Marketrix QA supports three browser engines:

| Browser | Engine | Best For |
|---|---|---|
| Chromium | Blink (Chrome/Edge) | Default choice, widest user base |
| Firefox | Gecko | Catching Firefox-specific rendering issues |
| WebKit | WebKit (Safari) | Testing Safari and iOS compatibility |
- Single browser — Pick one and run. Fastest option.
- Multiple browsers — Select any combination of Chromium, Firefox, and WebKit.
- Parallel execution — Run all selected browsers at the same time instead of sequentially.
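The difference between sequential and parallel execution can be sketched in Python. This is an illustration only: `run_suite` is a stand-in stub, not Marketrix QA's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(browser: str) -> str:
    """Stand-in for launching the full test suite in one browser."""
    # A real run would drive Chromium, Firefox, or WebKit here.
    return f"{browser}: completed"

def run_browsers(browsers: list[str], parallel: bool = True) -> list[str]:
    """Run the suite in each selected browser, in parallel or sequentially."""
    if parallel:
        # All selected browsers execute at the same time.
        with ThreadPoolExecutor(max_workers=len(browsers)) as pool:
            return list(pool.map(run_suite, browsers))
    # Sequential: one browser finishes before the next starts.
    return [run_suite(b) for b in browsers]
```

With three browsers selected, parallel execution finishes in roughly the time of the slowest suite rather than the sum of all three.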
Configuring Browser Settings
On your document’s detail page, look for the browser configuration section. You can set:

- Which browsers to include
- Whether to run in parallel
- Whether to stop all browsers on the first failure (fail fast mode)
- Timeout per browser (between 60 seconds and 1 hour)
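A configuration like the one above might be validated as follows. The field names (`browsers`, `timeout_seconds`) are assumptions for illustration, not Marketrix QA's real schema; only the documented limits (three engines, 60 seconds to 1 hour per browser) come from this page.

```python
def validate_browser_config(config: dict) -> dict:
    """Validate a run configuration against the documented limits."""
    allowed = {"chromium", "firefox", "webkit"}
    unknown = set(config["browsers"]) - allowed
    if unknown:
        raise ValueError(f"unsupported browsers: {unknown}")
    timeout = config.get("timeout_seconds", 300)
    # Per-browser timeout must fall between 60 seconds and 1 hour.
    if not 60 <= timeout <= 3600:
        raise ValueError("timeout must be between 60 and 3600 seconds")
    return config
```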
Starting a Test Run
- Navigate to your document in Quality Checking
- Click Run Tests or Run Cross-Browser
- Confirm your browser selection
- The run starts immediately
Monitoring Progress
While tests are running, the dashboard shows real-time progress:

- Overall run status — Pending, Running, Completed, or Failed
- Individual test status — Each test case shows its own progress
- Step-by-step log — See which step each test is on, with timestamps
- Live updates — The page refreshes automatically as tests progress
Understanding Run Statuses
| Status | Meaning |
|---|---|
| Pending | Run is queued and waiting to start |
| Running | Tests are actively executing in the browser |
| Completed | All tests have finished (check individual results for pass/fail) |
| Failed | The run encountered a critical error and could not continue |
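If you are watching a run from a script rather than the dashboard, only Completed and Failed are terminal; Pending and Running mean the run is still in flight. A minimal polling sketch, where `get_status` is any callable you supply (the dashboard does this for you via live updates):

```python
import time

TERMINAL_STATUSES = {"Completed", "Failed"}

def wait_for_run(get_status, poll_seconds: float = 2.0, max_polls: int = 1800) -> str:
    """Poll a run's status until it reaches a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("run did not finish within the polling window")
```

Note that a Completed run can still contain failing tests; check the individual results.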
Understanding Test Statuses
Individual test cases within a run have their own statuses:

| Status | Meaning |
|---|---|
| Pending | Test has not started yet |
| Running | Test is actively executing steps |
| Completed | Test finished — check the result to see if it passed or failed |
| Failed | Test encountered an error during execution |
Stopping a Running Test
If you need to stop a test run in progress, you can cancel it from the run’s detail page. Any completed tests within the run will keep their results. Tests that were still pending or running will be marked accordingly.

Self-Healing: Automatic Test Repair
This is one of the most valuable features of Marketrix QA. When a test fails because your UI has changed — a button moved, a selector changed, a form field was renamed — the system can automatically attempt to fix the test.

How Self-Healing Works
- Failure detected — A test fails due to a missing element, changed selector, or broken assertion
- Analysis — The system identifies the type of failure (locator, assertion, timeout, flow change, or environment issue)
- Repair attempt — A new approach is generated with a confidence score
- Validation — The repair is tested to verify it actually works
- Approval workflow — You review and approve or reject the fix
Failure Types the System Can Heal
| Failure Type | What Happened | How It Heals |
|---|---|---|
| Locator | A CSS selector or element ID changed | Finds the new selector for the same element |
| Assertion | Expected text or value changed | Updates the expected value |
| Timeout | Page took too long to load | Adjusts wait times |
| Flow Change | Navigation or workflow steps changed | Updates the step sequence |
| Environment | Infrastructure or config issue | Adjusts environment-specific settings |
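One way to picture the classification step is a keyword heuristic over the failure message. This sketch is purely illustrative; Marketrix QA's real classifier is internal and certainly more sophisticated.

```python
def classify_failure(message: str) -> str:
    """Illustrative mapping from an error message to a healable failure type."""
    msg = message.lower()
    if "selector" in msg or "element not found" in msg:
        return "locator"          # the element's CSS selector or ID changed
    if "expected" in msg and "received" in msg:
        return "assertion"        # the expected text or value changed
    if "timeout" in msg or "timed out" in msg:
        return "timeout"          # the page took too long to load
    if "navigation" in msg or "redirect" in msg:
        return "flow_change"      # the workflow steps changed
    return "environment"          # fall back to infrastructure/config issues
```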
Reviewing Healing Attempts
When a self-healing attempt is made, you will see it in the test’s healing history. Each attempt shows:

- What failed and why
- The proposed repair strategy
- A confidence score (0 to 1) — higher is better
- The validation status (pending, validated, failed, or rejected)
Healing Configuration
You can tune healing behavior per test:

- Enable/disable healing — Turn self-healing on or off
- Auto-apply — Automatically apply high-confidence repairs without manual approval
- Confidence threshold — Set the minimum confidence score for repairs (default: 0.85)
- Max attempts — Limit how many healing attempts per failure (default: 2, max: 5)
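The settings above combine into a simple decision: stop after the attempt cap, auto-apply only when auto-apply is on and the confidence score clears the threshold, and otherwise queue the repair for manual review. A sketch under those assumptions (function and return names are hypothetical):

```python
def healing_decision(confidence: float, attempt: int, *,
                     auto_apply: bool = False,
                     threshold: float = 0.85,   # documented default threshold
                     max_attempts: int = 2) -> str:  # documented default cap
    """Decide what happens to a proposed repair."""
    if attempt > max_attempts:
        return "give_up"        # healing attempts exhausted for this failure
    if auto_apply and confidence >= threshold:
        return "auto_apply"     # high-confidence repair applied automatically
    return "needs_review"       # routed to the manual approval workflow
```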
Troubleshooting
Tests are stuck on “Pending”:

- Check that your connection is active and the target URL is accessible.
- Verify your plan supports the number of concurrent runs you are attempting.

Tests fail immediately:

- Your connection credentials may have expired. Update them on the Connections page.
- The target application may be down. Try accessing the URL manually.

Tests pass in one browser but fail in another:

- This usually indicates a real cross-browser compatibility issue in your application.
- Check the screenshots to see what looks different.
- These are legitimate bugs worth investigating.

Self-healing could not repair a test:

- The UI change may be too large for automatic repair. Manually update the test steps.
- Try re-uploading an updated document to regenerate the affected tests.