After your agents complete simulations, it’s important to review what they learned and verify that their understanding is accurate. This guide covers how to interpret simulation results and ensure your agents have gained valuable knowledge about your product.

Understanding Simulation History

Simulation Timeline

Each simulation creates a detailed history showing:
Exploration Sequence:
  • Pages visited and features explored
  • Actions taken and interactions performed
  • Screenshots captured during exploration
  • Time spent on different areas
Learning Milestones:
  • Key discoveries and insights
  • Understanding of user workflows
  • Recognition of interface elements
  • Mapping of feature relationships
Question and Answer Log:
  • Questions the agent asked during exploration
  • Responses you provided
  • How guidance influenced the exploration
  • Context that helped the agent learn
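When reviewing or scripting against this history, it can help to picture it as structured data. Below is a minimal Python sketch of one way to model it; the class and field names are illustrative assumptions, not the actual Marketrix schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExplorationStep:
    """One entry in the exploration sequence (hypothetical field names)."""
    page: str                 # page or feature visited
    action: str               # interaction performed, e.g. "click", "fill_form"
    screenshot_url: str       # screenshot captured at this step
    duration_seconds: float   # time spent before moving on

@dataclass
class QAEntry:
    """One exchange from the question-and-answer log."""
    question: str   # what the agent asked during exploration
    answer: str     # the response you provided

@dataclass
class SimulationHistory:
    """A simulation's history, grouped the way the timeline presents it."""
    steps: list[ExplorationStep] = field(default_factory=list)
    milestones: list[str] = field(default_factory=list)  # key discoveries and insights
    qa_log: list[QAEntry] = field(default_factory=list)
```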

Result Categories

Interface Understanding:
  • Navigation patterns and menu structures
  • Button locations and functions
  • Form fields and their purposes
  • Modal dialogs and popup behaviors
Workflow Knowledge:
  • Step-by-step processes
  • User journey mapping
  • Decision points and branching
  • Error handling and recovery
Feature Relationships:
  • How different features connect
  • Dependencies between functions
  • Integration points and data flow
  • Cross-feature workflows

Reviewing Simulation Results

Accessing Results

  1. Navigate to Simulations in your Marketrix dashboard
  2. Click on a completed simulation to view details
  3. Review the simulation history and activity log
  4. Examine screenshots and interactions captured during exploration
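If you also want to pull the same results programmatically, the sketch below shows the general shape such a request might take. The base URL, authentication header, endpoint path, and response fields are all hypothetical; consult the Marketrix API reference for the real interface.

```python
import requests

API_BASE = "https://api.marketrix.example/v1"   # hypothetical base URL
API_KEY = "your-api-key"                        # hypothetical auth scheme

def get_simulation(simulation_id: str) -> dict:
    """Fetch one completed simulation's history and activity log (assumed endpoint)."""
    resp = requests.get(
        f"{API_BASE}/simulations/{simulation_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

sim = get_simulation("sim_123")
print(sim.get("status"), len(sim.get("steps", [])))
```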

Key Areas to Review

Completeness of Exploration:
  • Did the agent cover all intended areas?
  • Were important features and workflows explored?
  • Did the agent follow the instructions effectively?
  • Are there gaps in the exploration?
Accuracy of Understanding:
  • Does the agent’s interpretation match reality?
  • Are the workflows and processes correctly understood?
  • Did the agent identify the right user goals?
  • Are feature relationships accurately mapped?
Quality of Learning:
  • Did the agent gain actionable insights?
  • Is the knowledge specific and detailed enough?
  • Will this help the agent assist real users?
  • Are there areas that need additional exploration?

Verifying Agent Learning

Testing Agent Knowledge

In the Playground:
  1. Test specific scenarios the agent explored
  2. Ask questions about features the agent learned
  3. Request guidance on workflows the agent mapped
  4. Verify accuracy of agent responses
Example Test Questions:
  • “How do users create a new project?”
  • “What happens when someone tries to check out without an account?”
  • “Where can users find their order history?”
  • “How do users reset their password?”
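You can run these checks by hand in the playground, or batch them in a small script. The sketch below assumes a hypothetical playground endpoint and payload; swap in whatever interface your deployment actually exposes.

```python
import requests

API_BASE = "https://api.marketrix.example/v1"  # hypothetical base URL
API_KEY = "your-api-key"                       # hypothetical auth scheme

TEST_QUESTIONS = [
    "How do users create a new project?",
    "What happens when someone tries to check out without an account?",
    "Where can users find their order history?",
    "How do users reset their password?",
]

def ask_agent(agent_id: str, question: str) -> str:
    """Send one playground question to the agent (assumed endpoint and payload)."""
    resp = requests.post(
        f"{API_BASE}/agents/{agent_id}/playground/ask",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": question},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["answer"]

for q in TEST_QUESTIONS:
    print(f"Q: {q}\nA: {ask_agent('agent_123', q)}\n")
```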

Validating Understanding

Check Response Quality:
  • Are answers specific and actionable?
  • Do responses reference actual interface elements?
  • Are workflows described accurately?
  • Can users follow the guidance successfully?
Verify Completeness:
  • Does the agent understand the complete process?
  • Are all necessary steps included?
  • Are edge cases and alternatives covered?
  • Is troubleshooting knowledge present?
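One lightweight way to check completeness is to compare an answer against the steps you already know the workflow requires. A minimal sketch follows; the expected-step keywords are yours to define, and the matching is deliberately crude, so treat a miss as a prompt for manual review rather than a verdict.

```python
def missing_steps(answer: str, expected_steps: list[str]) -> list[str]:
    """Return expected workflow steps the agent's answer never mentions.

    Case-insensitive keyword matching; a miss means "review this by hand",
    not necessarily that the agent lacks the knowledge.
    """
    text = answer.lower()
    return [step for step in expected_steps if step.lower() not in text]

answer = "Click Sign Up, enter your email, then confirm via the link we send."
expected = ["sign up", "email", "confirmation link", "set a password"]
print(missing_steps(answer, expected))
# ['confirmation link', 'set a password'] -> probe these in the playground
```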

Interpreting Simulation Data

Screenshot Analysis

Interface Elements:
  • Button names and locations
  • Form field labels and types
  • Navigation menu structures
  • Modal dialog content
User Context:
  • What the user sees at each step
  • Available options and choices
  • Visual cues and indicators
  • Error states and messages

Interaction Patterns

User Flows:
  • Logical progression through workflows
  • Decision points and branching paths
  • Required vs. optional steps
  • Alternative approaches and shortcuts
Navigation Behavior:
  • How users move between sections
  • Breadcrumb trails and back navigation
  • Menu structures and organization
  • Search and discovery patterns

Learning Indicators

Successful Learning:
  • Agent can describe specific interface elements
  • Agent understands complete workflows
  • Agent recognizes user goals and contexts
  • Agent can provide step-by-step guidance
Areas Needing Improvement:
  • Vague or generic responses
  • Missing steps in workflows
  • Incorrect interface descriptions
  • Lack of context awareness
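If you review many responses, a rough heuristic can help triage them. The sketch below flags answers that are very short or lean on generic filler; the phrase list is purely illustrative and should be tuned to your product's vocabulary.

```python
GENERIC_PHRASES = [
    "it depends",
    "navigate to the appropriate section",
    "follow the on-screen instructions",
    "consult the documentation",
]

def looks_vague(answer: str, min_words: int = 15) -> bool:
    """Heuristically flag answers that are too short or too generic."""
    text = answer.lower()
    too_short = len(answer.split()) < min_words
    too_generic = any(phrase in text for phrase in GENERIC_PHRASES)
    return too_short or too_generic

print(looks_vague("Follow the on-screen instructions to continue."))  # True
```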

Improving Simulation Results

Identifying Gaps

Incomplete Exploration:
  • Run additional simulations for missed areas
  • Provide more specific instructions
  • Focus on particular features or workflows
  • Test different user scenarios
Inaccurate Understanding:
  • Clarify instructions for better focus
  • Provide additional context during exploration
  • Answer agent questions more specifically
  • Guide agent attention to important details
Shallow Learning:
  • Extend simulation duration for deeper exploration
  • Ask agent to explore edge cases and alternatives
  • Test error scenarios and recovery procedures
  • Map feature relationships and dependencies

Running Follow-up Simulations

The example instructions below show how to aim a follow-up run at a specific gap uncovered during review.
Targeted Exploration:
  Focus specifically on the checkout process that was partially explored.
  Learn the complete workflow including:
  - Cart management and item review
  - Payment processing and security
  - Order confirmation and tracking
  - Error handling and recovery
Error Scenario Testing:
  Explore what happens when users encounter problems during account creation.
  Learn about:
  - Common error messages and their meanings
  - Recovery procedures and solutions
  - Alternative approaches when primary methods fail
  - When to escalate to human support
Feature Integration:
  Understand how the dashboard connects to other features.
  Explore:
  - Navigation patterns between sections
  - Data flow and synchronization
  - Feature dependencies and requirements
  - Cross-functional workflows
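Follow-up simulations like these can be started from the dashboard, or scripted if you keep a library of instruction templates. The sketch below reuses the targeted-exploration instructions above; the endpoint and payload fields are assumptions for illustration.

```python
import requests

API_BASE = "https://api.marketrix.example/v1"  # hypothetical base URL
API_KEY = "your-api-key"                       # hypothetical auth scheme

CHECKOUT_FOLLOWUP = """\
Focus specifically on the checkout process that was partially explored.
Learn the complete workflow including:
- Cart management and item review
- Payment processing and security
- Order confirmation and tracking
- Error handling and recovery
"""

def start_simulation(agent_id: str, instructions: str) -> str:
    """Kick off a follow-up simulation with targeted instructions (assumed endpoint)."""
    resp = requests.post(
        f"{API_BASE}/agents/{agent_id}/simulations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"instructions": instructions},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["simulation_id"]

print(start_simulation("agent_123", CHECKOUT_FOLLOWUP))
```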

Simulation Result Metrics

Exploration Coverage

Areas Explored:
  • Number of pages/features visited
  • Depth of exploration in each area
  • Time spent on different sections
  • Screenshots captured and analyzed
Learning Depth:
  • Number of workflows understood
  • Feature relationships mapped
  • User scenarios covered
  • Edge cases identified
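A single coverage number makes these metrics easy to track across runs. A minimal sketch, assuming you maintain a list of areas each simulation was meant to visit and can extract the visited areas from its history:

```python
def exploration_coverage(visited: set[str], intended: set[str]) -> float:
    """Fraction of intended areas the simulation actually reached."""
    if not intended:
        return 1.0
    return len(visited & intended) / len(intended)

visited = {"dashboard", "checkout", "settings"}
intended = {"dashboard", "checkout", "settings", "order-history"}
print(f"{exploration_coverage(visited, intended):.0%}")  # 75%
```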

Quality Indicators

Accuracy Metrics:
  • Correctness of interface descriptions
  • Accuracy of workflow steps
  • Proper understanding of user goals
  • Valid feature relationships
Completeness Metrics:
  • Coverage of intended exploration areas
  • Inclusion of all necessary steps
  • Recognition of important details
  • Understanding of context and alternatives

Best Practices for Result Review

Systematic Review Process

Initial Assessment:
  1. Review simulation timeline for overall progress
  2. Check exploration coverage against instructions
  3. Identify major discoveries and insights
  4. Note any gaps or incomplete areas
Detailed Analysis:
  1. Examine screenshots for interface understanding
  2. Review interaction patterns for workflow knowledge
  3. Test agent responses in the playground
  4. Validate accuracy of learned information
Action Planning:
  1. Identify areas needing additional exploration
  2. Plan follow-up simulations for gaps
  3. Update agent instructions based on findings
  4. Schedule regular reviews for ongoing improvement

Continuous Improvement

Regular Simulation Updates:
  • Run simulations after product updates
  • Test new features and workflows
  • Verify existing knowledge is still accurate
  • Explore areas where users struggle
Feedback Integration:
  • Use user questions to guide simulation focus
  • Explore scenarios reported in support tickets
  • Test solutions to common user problems
  • Validate agent responses with real user needs

Common Result Patterns

Successful Simulations

Indicators of Success:
  • Comprehensive exploration of intended areas
  • Accurate understanding of workflows
  • Specific knowledge of interface elements
  • Ability to provide helpful user guidance
What This Means:
  • Agent is ready to assist users effectively
  • Knowledge is current and accurate
  • Responses will be specific and actionable
  • Users will receive valuable help

Incomplete Simulations

Signs of Incomplete Learning:
  • Vague or generic responses
  • Missing steps in workflows
  • Incorrect interface descriptions
  • Lack of context awareness
How to Improve:
  • Run additional targeted simulations
  • Provide more specific instructions
  • Answer agent questions more thoroughly
  • Focus on particular areas of weakness

Inaccurate Simulations

Signs of Inaccurate Learning:
  • Incorrect workflow descriptions
  • Wrong interface element names
  • Misunderstood user goals
  • Invalid feature relationships
How to Correct:
  • Clarify instructions for better focus
  • Provide additional context and guidance
  • Test agent responses and correct errors
  • Run new simulations with improved instructions

Next Steps

After reviewing simulation results:
  1. Test agent performance - Verify improved capabilities in the playground
  2. Create follow-up simulations - Address any gaps or inaccuracies
  3. Update agent knowledge - Integrate new learning with existing knowledge
  4. Deploy to production - Use your improved agents to help real users
Effective result review ensures your simulations produce valuable learning for your agents. By systematically analyzing what agents learned and verifying accuracy, you’ll create agents that truly understand your product.

Getting Help

If you need assistance reviewing simulation results:
  • Check the troubleshooting guide for common issues
  • Review result interpretation best practices
  • Test agent responses to validate learning
  • Contact support for additional help
You’re now ready to review simulation results effectively and ensure your agents learn accurately! 🚀