After your agents complete simulations, review what they learned to verify accuracy and identify gaps.

What Results Include

Each simulation produces:
  • Exploration history — Pages visited, actions taken, screenshots captured, time spent
  • Knowledge graph — A structured map of what the agent learned about your application, including UI elements, workflows, feature relationships, and navigation patterns
  • Q&A log — Questions the agent asked and the responses you provided
  • Learning milestones — Key discoveries and insights the agent reached during exploration
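As a mental model, the four result components above can be pictured as one structured bundle. The sketch below is purely illustrative — the class and field names are assumptions for explanation, not your platform's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a simulation result bundle.
# All names here are illustrative, not a documented schema.

@dataclass
class ExplorationStep:
    page: str             # page the agent visited
    action: str           # action it took there
    screenshot_path: str  # screenshot captured for that step
    seconds_spent: float  # time spent on the step

@dataclass
class SimulationResult:
    # Pages visited, actions taken, screenshots, time spent
    exploration_history: list[ExplorationStep] = field(default_factory=list)
    # UI element / workflow -> related elements (a simple adjacency map)
    knowledge_graph: dict[str, list[str]] = field(default_factory=dict)
    # (question the agent asked, response you provided)
    qa_log: list[tuple[str, str]] = field(default_factory=list)
    # Key discoveries and insights
    milestones: list[str] = field(default_factory=list)
```

Thinking of the knowledge graph as an adjacency map (element to related elements) makes the completeness and accuracy checks below concrete: missing nodes suggest incomplete exploration, wrong edges suggest inaccurate learning.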

Reviewing Results

  1. Navigate to Simulations in your dashboard
  2. Click on a completed simulation to view details
  3. Review the activity log and screenshots

Key Things to Check

  • Completeness: Did the agent cover all intended areas?
  • Accuracy: Does the agent’s understanding match reality?
  • Depth: Is the knowledge specific enough to help real users?

Verifying Agent Learning

Test your agent in the Playground with questions related to what it explored:
  • “How do users create a new project?”
  • “Where can users find their order history?”
  • “What happens when someone tries to check out without an account?”
Check that responses are specific (referencing actual UI elements), accurate (matching real workflows), and actionable (users can follow the guidance).
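One lightweight way to spot-check the "specific" criterion is to scan a Playground response for mentions of UI elements you know exist. The helper below is a minimal sketch — the term list and the sample response are made-up examples, and real verification should still involve reading the responses yourself:

```python
def mentions_ui_elements(response: str, ui_terms: list[str]) -> list[str]:
    """Return the known UI terms that a response actually references.

    A response that names real UI elements is more likely to be
    specific and actionable than one that speaks in generalities.
    """
    lower = response.lower()
    return [term for term in ui_terms if term.lower() in lower]

# Made-up UI terms and response, for illustration only.
ui_terms = ["New Project button", "Projects page", "Order History tab"]
response = ("Click the New Project button in the top-right of the "
            "Projects page, then name your project.")
hits = mentions_ui_elements(response, ui_terms)
# hits -> ['New Project button', 'Projects page']
```

A response that matches none of your known UI terms is a signal to re-read it for vagueness, not proof it is wrong — phrasing varies, so treat this as triage, not judgment.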

Improving Results

If you find gaps or inaccuracies, run follow-up simulations with more targeted instructions.

For incomplete exploration:

  Focus specifically on the checkout process. Learn the complete
  workflow including cart management, payment processing, order
  confirmation, and error handling.

For error scenarios:

  Explore what happens when users encounter problems during
  account creation. Learn about common error messages, recovery
  procedures, and when to escalate to human support.
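If your platform also accepts simulation instructions programmatically, a targeted follow-up run might be expressed as a configuration payload. The field names below (`name`, `instructions`, `focus_areas`) are assumptions for illustration, not a documented API:

```python
# Hypothetical follow-up simulation payload; field names are
# illustrative assumptions, not a documented API.
followup_simulation = {
    "name": "checkout-deep-dive",
    "instructions": (
        "Focus specifically on the checkout process. Learn the complete "
        "workflow including cart management, payment processing, order "
        "confirmation, and error handling."
    ),
    # Narrow focus areas keep the agent from re-exploring ground
    # it already covered in the initial simulation.
    "focus_areas": [
        "cart management",
        "payment processing",
        "order confirmation",
        "error handling",
    ],
}
```

Whatever the mechanism, the principle is the same as in the prose above: a follow-up simulation should name the workflow and the specific behaviors to learn, rather than repeating a broad "explore the app" instruction.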

Continuous Improvement

  • Run simulations after product updates to keep agent knowledge current
  • Use common user questions to guide simulation focus
  • Explore areas where users report confusion
  • Verify existing knowledge remains accurate over time

Next Steps

  1. Test agent performance — Verify improved capabilities in the Playground
  2. Create follow-up simulations — Address gaps or inaccuracies
  3. Update agent knowledge — Integrate new learning with existing knowledge
  4. Deploy to production — Use your improved agents to help real users