How Humblytics Split Testing Works - Under the Hood

When you run a split test with Humblytics, the logic is built into our standard analytics script—no extra setup required. If a visitor qualifies for an experiment, they're automatically directed to the correct version of the page based on your test configuration.

We've designed our split testing engine to prioritize performance, SEO, and user experience:

Performance-first optimization without trade-offs

  • Lightning-fast & lightweight: Adds only a few kilobytes and loads asynchronously, so it never blocks rendering.

  • Seamless redirect handling: Variant redirects occur before paint to avoid visual flicker or flashes of original content.

  • Analytics-aware event suppression: Silences events for the original page on redirect, preventing double-counting.

  • Bot-safe by design: Skips testing logic for bots and crawlers, protecting search-engine visibility and metrics.

  • SEO-friendly canonical control: Automatically points variant pages to the control with canonical tags; index only what you choose.

  • Cookie-free audience assignment: Uses short-lived query parameters instead of cookies, preserving privacy and compliance.

  • Flexible targeting: Choose session-level (new assignment each visit) or user-level (sticky experience) splits.

  • SPA-compatible: Works across multi-page sites and single-page apps, tracking navigation changes automatically.
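
To make the pre-paint redirect and cookie-free assignment above concrete, here is a minimal sketch of how such a snippet can work in the browser. It is illustrative only, not the actual Humblytics script: the "hv" query parameter, the page paths, and the 50/50 session-level split are all assumptions for the example.

```typescript
// Illustrative sketch only; not the actual Humblytics script. Assumes an
// inline snippet that runs in <head> before first paint, a hypothetical
// "hv" query parameter carrying the assignment, and a 50/50 session-level
// split between one control path and one variant path.

const CONTROL_PATH = "/pricing";    // hypothetical control page
const VARIANT_PATH = "/pricing-b";  // hypothetical variant page

// Bot-safe: skip experiment logic for common crawlers so search engines
// always see the control and metrics stay clean.
function isLikelyBot(): boolean {
  return /bot|crawler|spider|crawling/i.test(navigator.userAgent);
}

// Cookie-free assignment: reuse the short-lived "hv" query parameter if
// present, otherwise draw a fresh 50/50 assignment for this session.
function assignVariant(): "a" | "b" {
  const existing = new URLSearchParams(window.location.search).get("hv");
  if (existing === "a" || existing === "b") return existing;
  return Math.random() < 0.5 ? "a" : "b";
}

function runSplitTest(): void {
  if (isLikelyBot()) return;

  if (assignVariant() === "b" && window.location.pathname === CONTROL_PATH) {
    // Redirect before paint to avoid a flash of the control page. Analytics
    // events for the control view would be suppressed at this point so the
    // redirected visit isn't double-counted.
    window.location.replace(`${VARIANT_PATH}?hv=b`);
  }
}

runSplitTest();
```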

Build, launch & learn—without slowing down your site or compromising SEO

We've built Humblytics Split Testing to give you powerful optimization tools with zero performance trade-offs. Whether you're testing headlines, layouts, or full-page experiences, your site stays fast and your search visibility stays intact.

Measuring Split Test Performance

Once your test is live, Humblytics automatically tracks how each variant performs against your selected goal—whether that's a button click, form submission, or page visit. We handle all the heavy lifting in the background, so you can focus on results, not statistics.

Goal Tracking

Each split test has a primary goal—the action you're trying to optimize. For example:

  • Clicking a "Sign Up" button

  • Reaching a confirmation page

  • Submitting a contact form

You'll define this goal when setting up your test, and we'll track how often it occurs for each variant.

Conversion Rate & Lift

We calculate the conversion rate for each variant by dividing the number of goal completions by the number of views. From there, we show you:

  • Absolute performance (e.g., 12% vs. 10%)

  • Relative lift (e.g., Variant B is performing 20% better than the control)
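
As a quick sketch of how those two readouts relate (the visitor counts below are made up for illustration):

```typescript
// Worked example: conversion rate, absolute difference, and relative lift.
// All counts are hypothetical.
const controlRate = 120 / 1200;  // control: 120 conversions over 1,200 views = 10%
const variantRate = 144 / 1200;  // variant B: 144 conversions over 1,200 views = 12%

const absoluteDiff = variantRate - controlRate;   // ≈ 0.02, i.e. 2 percentage points
const relativeLift = absoluteDiff / controlRate;  // 0.20, i.e. variant B is 20% better
```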

Confidence & Declaring a Winner

To ensure differences aren't just due to random chance, we apply standard statistical techniques to estimate confidence. When one variant performs significantly better than the others with enough data behind it, we flag it as the likely winner.

While we aim to give you actionable results quickly, we're also cautious about jumping to conclusions too early. In general:

  • The more traffic you have, the faster we can detect a winner

  • If results are close, we'll wait for more data to improve accuracy

  • We visually show when a result is trending better—but not yet statistically significant

You'll always see a clear summary of which variant is winning, by how much, and how confident we are in the result.

Continuous Monitoring

You don't need to calculate anything manually; our dashboard keeps everything up to date in real time. You can check in at any time to see how your test is performing and decide whether to:

  • Let it run longer

  • Manually pick a winner

  • End the test and apply the changes

How Confidence Is Calculated (Under the Hood)

For each variant, we track:

  • Number of views (visitors)

  • Number of goal completions (conversions)

  • Conversion rate = conversions ÷ visitors

To compare performance between two variants (e.g., Control vs. Variant B), we calculate the confidence level using a two-proportion Z-test, which estimates how likely it is that the observed difference in conversion rates arose by chance.

The Steps:

  1. Define conversion rates for both groups:

    • p₁ = conversions_A ÷ visitors_A

    • p₂ = conversions_B ÷ visitors_B

  2. Calculate pooled probability (the average conversion rate across both groups):

    • p = (conversions_A + conversions_B) ÷ (visitors_A + visitors_B)

  3. Compute standard error (SE):

    • SE = √[p × (1 - p) × (1/visitors_A + 1/visitors_B)]

  4. Calculate Z-score:

    • Z = (p₁ - p₂) ÷ SE

  5. Convert Z-score to confidence level using the cumulative distribution function (CDF) of the normal distribution.

The resulting confidence level reflects how unlikely the observed difference would be if both variants actually converted at the same rate; the higher the confidence, the less plausible it is that the gap is just random noise. For example, a Z-score of ±1.96 corresponds to approximately 95% confidence.
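
Those five steps translate almost line for line into code. Below is a minimal sketch of the calculation; it follows the standard two-proportion Z-test formulas above rather than Humblytics' internal code, and it approximates the normal CDF with the classic Abramowitz and Stegun polynomial.

```typescript
// Sketch of the two-proportion Z-test described above. This mirrors the
// standard textbook formulas, not Humblytics' internal implementation.

// Standard normal CDF, approximated with the Abramowitz & Stegun
// polynomial (accurate to ~1e-7, plenty for a confidence readout).
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const poly =
    t * (0.319381530 +
    t * (-0.356563782 +
    t * (1.781477937 +
    t * (-1.821255978 +
    t * 1.330274429))));
  const tail = 0.3989422804 * Math.exp((-z * z) / 2) * poly; // upper-tail probability for |z|
  return z >= 0 ? 1 - tail : tail;
}

function confidenceLevel(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const p1 = conversionsA / visitorsA;                                 // step 1: rate of group A
  const p2 = conversionsB / visitorsB;                                 // step 1: rate of group B
  const p = (conversionsA + conversionsB) / (visitorsA + visitorsB);   // step 2: pooled probability
  const se = Math.sqrt(p * (1 - p) * (1 / visitorsA + 1 / visitorsB)); // step 3: standard error
  const z = (p1 - p2) / se;                                            // step 4: Z-score
  // Step 5: two-tailed confidence = 1 minus the two-tailed p-value.
  return 1 - 2 * (1 - normalCdf(Math.abs(z)));
}

// Using the earlier example rates (10% vs. 12%) with 1,200 views each:
// z ≈ -1.57, confidence ≈ 0.88, i.e. trending but below a 95% threshold.
console.log(confidenceLevel(120, 1200, 144, 1200));
```

Note that the sketch reports two-tailed confidence; whether a dashboard surfaces one-tailed or two-tailed confidence is a product design choice.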

In the UI, we surface this confidence level with visual indicators (e.g., "95% confidence this variant performs better") and show trending results when the confidence threshold hasn't been reached yet.
