How to Analyze Split Test Data
Accessing Experiment Results
Log In: Access your Humblytics dashboard.
Navigate to Split Testing: Open the "Split Testing" tab and select the experiment you want to analyze.
Overview Section
Experiment Summary: Review the experiment details including:
Experiment Name
Creation and End Dates
Randomization Type (Session Level or User Level)
Key Metrics: Examine the overall performance metrics for each variant:
Unique Visitors: Number of distinct users visiting each variant.
Page Views: Total page views for each variant.
Click Conversion Rate: Percentage of users who clicked on the targeted elements.
Form Conversion Rate: Percentage of users who submitted forms.
Bounce Rate: Percentage of users who left the site after viewing only one page.
Average Session Length: Average time spent on the site.
Average Scroll Depth: How far down the page users scroll, on average.
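Humblytics calculates these metrics for you, but it helps to know how they are derived from the raw counts. The sketch below uses entirely hypothetical per-variant totals (the visitor, click, and submission numbers are made up for illustration) to show the arithmetic behind the click conversion rate, form conversion rate, and bounce rate.

```python
# Hypothetical per-variant totals; Humblytics reports these in the dashboard.
variants = {
    "A": {"visitors": 1200, "clicks": 180, "form_submits": 60, "single_page_sessions": 540},
    "B": {"visitors": 1180, "clicks": 236, "form_submits": 83, "single_page_sessions": 472},
}

def summarize(v):
    """Derive the rate metrics from raw counts."""
    return {
        "click_cr": v["clicks"] / v["visitors"],          # click conversion rate
        "form_cr": v["form_submits"] / v["visitors"],     # form conversion rate
        "bounce_rate": v["single_page_sessions"] / v["visitors"],
    }

for name, v in variants.items():
    s = summarize(v)
    print(name, {k: f"{x:.1%}" for k, x in s.items()})
```

Each rate is simply the relevant event count divided by unique visitors for that variant, so variants with different traffic volumes can still be compared directly.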
Target Clicks Section
Click Targets: Identify the specific elements being tracked, such as buttons or links.
Click Data: Compare the number of clicks and conversion rates for each click target across variants:
URL A Clicks: Number of clicks on variant A.
URL B Clicks: Number of clicks on variant B.
Conversion Rates: Percentage of clicks that led to the desired action (e.g., form submission, purchase).
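When comparing click targets across variants, normalize raw click counts by each variant's visitor count, otherwise the variant with more traffic looks better by default. A minimal sketch, using hypothetical target names and click counts:

```python
# Hypothetical click targets and counts; replace with your experiment's data.
targets = [
    {"target": "Sign Up button", "a_clicks": 120, "b_clicks": 165},
    {"target": "Pricing link",   "a_clicks": 60,  "b_clicks": 71},
]
visitors = {"a": 1200, "b": 1180}  # unique visitors per variant

for t in targets:
    # Per-target click rate = clicks on that element / variant visitors.
    cr_a = t["a_clicks"] / visitors["a"]
    cr_b = t["b_clicks"] / visitors["b"]
    print(f'{t["target"]}: A {cr_a:.1%} vs B {cr_b:.1%}')
```

Comparing rates rather than raw counts keeps the per-target comparison fair even when the two variants received unequal traffic.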
Form Submissions Section
Form Metrics: Review the form submission data:
Number of Submissions: Total form submissions for each variant.
Conversion Rates: Percentage of visitors who submitted forms.
Comparison: Compare submission counts and conversion rates between variants to identify which drove more form completions.
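A common way to express the form-submission comparison is relative lift: how much better (or worse) one variant's conversion rate is as a fraction of the other's. The numbers below are hypothetical placeholders; substitute the counts from your experiment.

```python
# Hypothetical submission and visitor counts; replace with your data.
subs_a, visitors_a = 60, 1200
subs_b, visitors_b = 83, 1180

cr_a = subs_a / visitors_a
cr_b = subs_b / visitors_b
lift = (cr_b - cr_a) / cr_a  # relative lift of variant B over variant A
print(f"A: {cr_a:.1%}, B: {cr_b:.1%}, lift: {lift:+.1%}")
```

A positive lift means variant B converted a larger share of its visitors than variant A; the sign flips if you swap the baseline.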
Real-Time Monitoring
Live Data: Monitor real-time data to track user interactions and conversions as they happen.
User Behavior: Observe how users interact with different variants to gain insights into their behavior.
Comparative Analysis
Visual Comparison: Use visual aids such as graphs and charts to compare the performance of each variant.
Detailed Metrics: Drill down into specific metrics to understand why one variant outperformed the other.
Determining the Winner
Statistical Significance: Confirm that the observed difference is statistically significant (for example, a p-value below 0.05) before declaring a winner; otherwise the gap may be due to random variation.
Winner Selection: Determine which variant performed better based on your predefined goals and key performance indicators (KPIs).
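One standard way to check significance for a conversion-rate comparison is a two-proportion z-test. This is a general statistical technique, not a Humblytics-specific feature; the sketch below uses only the Python standard library, and the conversion counts passed in at the bottom are hypothetical.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 60/1200 conversions on A vs 83/1180 on B.
z, p = two_proportion_ztest(60, 1200, 83, 1180)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value falls below your chosen threshold (commonly 0.05), the difference is unlikely to be random noise; if not, keep the test running or collect more traffic before picking a winner.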
Implementing the Best Variant
Deploy the Winner: Implement the successful variant on your website.
Ongoing Monitoring: Continue to monitor the performance of the implemented variant to ensure sustained improvements.
Next Steps
Iteration: Use the insights gained from the analysis to design new experiments and continue optimizing your website.
Documentation: Document the results and key learnings from each experiment for future reference.
By following these steps, you can effectively analyze split test data, make data-driven decisions, and continuously improve your website's performance.