Split Testing API
The Split Testing API lets you programmatically get AI-powered A/B test recommendations and create no-code split tests — all from your AI coding agent or any HTTP client. This is ideal for teams that want to automate conversion rate optimization (CRO) workflows using tools like Claude Code, Cursor, or custom scripts.
Base URL:
https://app.humblytics.com/api/external/v1
New to split testing? Read the Split Testing Overview for conceptual background on how Humblytics A/B testing works, goal types, and statistical methodology before diving into the API.
Looking for analytics endpoints? The External Analytics API covers traffic, clicks, and forms data — useful for monitoring experiment performance alongside these split testing endpoints.
Use Cases
Automated CRO with AI Agents
The Split Testing API is designed to work hand-in-hand with AI coding agents. Here are the primary use cases:
- **AI-driven test ideation:** Your agent calls the recommendations endpoint, receives data-backed suggestions (selectors, copy, CSS changes), and presents them for review.
- **Hands-free experiment creation:** After reviewing recommendations, the agent creates a split test via the API — no manual setup in the dashboard required.
- **Continuous optimization loops:** An agent periodically fetches recommendations for key pages, creates tests for high-confidence suggestions, and monitors results via the analytics API.
- **Multi-page audit:** The agent iterates over your top landing pages, fetches recommendations for each, and builds a prioritized optimization backlog.
Authentication
The Split Testing API uses the same property-scoped API keys as the External Analytics API. Pass your key in the `Authorization` header:

`Authorization: Bearer <your_api_key>`

See External Analytics API — Authentication for key generation and management details.
Get Split Test Recommendations
Returns AI-generated A/B test recommendations for a specific page, based on your existing analytics data (traffic patterns, click behavior, bounce rates, and more). Each recommendation includes the exact CSS selector, change type, control value, and suggested variant value — ready to feed directly into the split test creation endpoint.
Query Parameters
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `page` | string | Yes | N/A | The page path to get recommendations for (e.g. `/home`, `/pricing`). |
Sample Request
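The original sample request isn't reproduced here, so the following is a minimal Python sketch using only the standard library. The endpoint path follows the pattern shown later in this doc (`GET /properties/{id}/split-test-recommendations?page=...`); `prop_abc123` is the placeholder property ID this doc uses elsewhere, and the API key value is illustrative.

```python
from urllib.parse import urlencode

BASE_URL = "https://app.humblytics.com/api/external/v1"
PROPERTY_ID = "prop_abc123"  # placeholder property ID used throughout this doc
API_KEY = "YOUR_API_KEY"     # replace with a real property-scoped key

# GET /properties/{PROPERTY_ID}/split-test-recommendations?page=/home
url = (
    f"{BASE_URL}/properties/{PROPERTY_ID}/split-test-recommendations"
    + "?" + urlencode({"page": "/home"})
)
headers = {"Authorization": f"Bearer {API_KEY}"}

print(url)
# Send with any HTTP client, e.g. requests.get(url, headers=headers)
```

Note that the `page` value is URL-encoded (`/home` becomes `%2Fhome` in the query string).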
Sample Response
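The original sample response isn't reproduced here. The sketch below shows the response shape as a Python dict, assembled strictly from the Response Fields documented in this section; the field names match the documentation, but every value is invented for illustration.

```python
# Illustrative shape only — values are invented, field names come from
# the Response Fields table in this doc.
sample_response = {
    "overall_analysis": "The hero CTA underperforms relative to page traffic.",
    "key_insights": [
        "Most visitors scroll past the hero without clicking the CTA.",
    ],
    "recommendations": [
        {
            "id": "rec_001",
            "element_selector": "#hero-cta",
            "element_type": "button",
            "change_type": "text",          # "text" or "css"
            "control_value": "Learn More",
            "variant_value": "Start Free Trial",
            "confidence_score": 84,          # 0-100
            "expected_impact": "High",       # Low / Medium / High
        }
    ],
}
```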
Response Fields
| Field | Description |
| --- | --- |
| `overall_analysis` | A human-readable summary of the page's performance and optimization opportunities. |
| `key_insights` | Array of specific, data-backed observations about user behavior on the page. |
| `recommendations[].id` | Unique recommendation ID (useful for logging and tracking). |
| `recommendations[].element_selector` | CSS selector for the target element — use this directly in the split test creation endpoint. |
| `recommendations[].element_type` | The type of HTML element (e.g. button, heading, image). |
| `recommendations[].change_type` | The kind of change: `text` (copy change) or `css` (style change). |
| `recommendations[].control_value` | The current value on your live page. |
| `recommendations[].variant_value` | The suggested new value to test against the control. |
| `recommendations[].confidence_score` | 0–100 score indicating how confident the AI is that this change will improve performance. |
| `recommendations[].expected_impact` | Estimated impact level: Low (<3%), Medium (3–8%), or High (8%+). |
Notes
- Recommendations are generated from your actual analytics data — they are not generic suggestions.
- Higher `confidence_score` values indicate stronger data signals behind the recommendation.
- The `element_selector` and `change_type` fields map directly to the `selector` and `op` fields in the split test creation endpoint, making it easy to pipe recommendations into live experiments.
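That mapping can be sketched in a few lines of Python. This is an illustrative helper, not part of any SDK: it filters recommendations by confidence (the 70 threshold follows the Best Practices section of this doc) and converts each into a change object for the creation endpoint.

```python
def to_change(rec):
    """Map a recommendation to a split-test change object."""
    return {
        "selector": rec["element_selector"],  # element_selector -> selector
        "op": rec["change_type"],             # change_type -> op
        "value": rec["variant_value"],
    }

# Illustrative recommendation data
recommendations = [
    {"element_selector": "#hero-cta", "change_type": "text",
     "variant_value": "Start Free Trial", "confidence_score": 84},
    {"element_selector": ".subhead", "change_type": "text",
     "variant_value": "Shorter copy", "confidence_score": 58},
]

# Keep only high-confidence suggestions (>= 70)
changes = [to_change(r) for r in recommendations if r["confidence_score"] >= 70]
print(changes)
```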
Create a Split Test
Creates and activates a no-code A/B split test. The test injects changes into the page at runtime using the Humblytics script — no code deployment or page duplication required. You define one or more variants, each with a list of element-level changes (text swaps, CSS overrides, or attribute changes).
Request Body
| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `name` | string | Yes | N/A | A descriptive name for the experiment. |
| `page` | string | Yes | N/A | The page path to run the test on (e.g. `/home`). |
| `type` | string | Yes | N/A | Test type. Use `nocode` for API-created tests. |
| `goal` | string | Yes | N/A | The optimization goal. See goal options below. |
| `auto_stop_days` | number | No | N/A | Automatically stop the test after this many days. |
| `variants` | array | Yes | N/A | One or more variant definitions (see below). |
Goal Options
| Goal | Optimizes for |
| --- | --- |
| `click_through` | Click-through rate on the page. |
| `form_submission` | Form submission rate. |
| `bounce_rate` | Reduced bounce rate. |
| `session_time` | Increased session duration. |
| `destination_page` | Visitors reaching a specific page. |
| `external_destination` | Visitors reaching an external URL. |
| `revenue` | Revenue per visitor. |
Variant Object
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `label` | string | Yes | A descriptive label for this variant (e.g. `Variant B — new copy`). |
| `changes` | array | Yes | One or more changes to apply. See below. |
Change Object
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `selector` | string | Yes | CSS selector for the target element. |
| `op` | string | Yes | The operation: `text` (replace text content), `css` (apply CSS styles), or `attr` (set an HTML attribute). |
| `value` | string or object | Yes | The new value. For `css`, pass a JSON object of CSS properties. For `text` and `attr`, pass a string. |
Sample Request
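The original sample request isn't reproduced here, so the following is a Python sketch of a request body assembled strictly from the fields documented above. All values are illustrative; the endpoint path (`POST /properties/{id}/split-tests`) follows the pattern shown later in this doc.

```python
import json

# Illustrative payload — every field comes from the Request Body,
# Variant Object, and Change Object tables above.
payload = {
    "name": "Homepage hero CTA test",
    "page": "/home",
    "type": "nocode",
    "goal": "click_through",
    "auto_stop_days": 14,
    "variants": [
        {
            "label": "Variant B — new copy",
            "changes": [
                {"selector": "#hero-cta", "op": "text",
                 "value": "Start Free Trial"},
                {"selector": "#hero-cta", "op": "css",
                 "value": {"background-color": "#ff6b00"}},
            ],
        }
    ],
}

body = json.dumps(payload)
# POST {BASE_URL}/properties/{PROPERTY_ID}/split-tests with this body
# and an Authorization: Bearer <your_api_key> header.
```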
Sample Response (201 Created)
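The full 201 response schema isn't reproduced here. The sketch below shows a plausible shape as a Python dict: only the `exp_xyz789` experiment ID format and the active-on-creation behavior come from this doc; every other field name and value is an assumption.

```python
# Illustrative shape only — the experiment ID format and "active" status
# come from this doc; other fields are assumptions.
sample_response = {
    "id": "exp_xyz789",
    "status": "active",   # tests go active immediately upon creation
    "name": "Homepage hero CTA test",
    "page": "/home",
    "auto_stop_days": 14,
}
```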
Notes
- A Control variant is automatically created — you only need to define the variants you want to test against it.
- The test goes active immediately upon creation. Visitors will start being assigned to variants right away.
- Changes are applied at runtime by the Humblytics script. No code changes or redeployments are needed on your site.
- You can combine multiple changes per variant (e.g. text + CSS on the same element, or changes across multiple elements).
- Use `auto_stop_days` to prevent tests from running indefinitely. If omitted, the test runs until manually stopped.
End-to-End Workflow: Recommendations to Live Test
The most powerful pattern is chaining both endpoints — fetch recommendations, then create a test from the highest-confidence suggestion:
Step 1 — Get Recommendations
Review the recommendations array. Pick the entries with the highest confidence_score values.
Step 2 — Create the Split Test
Use the `element_selector` as the `selector` and the `change_type` as the `op`:
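A hedged Python sketch of that step: take the top recommendation (illustrative data below) and build the creation payload from it. The field mapping follows the tables earlier in this doc; the name and goal values are placeholders.

```python
# Illustrative top recommendation from Step 1
top = {
    "element_selector": "#hero-cta",
    "change_type": "text",
    "control_value": "Learn More",
    "variant_value": "Start Free Trial",
    "confidence_score": 84,
}

# Build the split-test creation payload from it
payload = {
    "name": "Hero CTA copy test (from recommendation)",
    "page": "/home",
    "type": "nocode",
    "goal": "click_through",
    "auto_stop_days": 14,
    "variants": [{
        "label": "Variant B",
        "changes": [{
            "selector": top["element_selector"],  # element_selector -> selector
            "op": top["change_type"],             # change_type -> op
            "value": top["variant_value"],
        }],
    }],
}
```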
Step 3 — Monitor Results
Use the External Analytics API and the Humblytics dashboard to track experiment performance over time.
Using the Split Testing API with AI Agents
The Split Testing API is purpose-built for AI agent workflows. Below are ready-to-use instructions for the most popular AI coding tools. You can also find pre-filled versions of these instructions (with your property ID and API key already populated) in the Humblytics dashboard under Utilities → API Access → AI Agent Instructions.
Claude Code
Add the following to your project's CLAUDE.md file or paste it into your Claude Code session. Replace PROPERTY_ID with your actual property ID:
Cursor
Add the following to your project's .cursor/rules file or .cursorrules. Replace PROPERTY_ID with your actual property ID:
Generic (Any AI Agent)
For any other AI coding assistant, provide these instructions in its system prompt or project context:
Example Agent Conversation
Here is what a typical interaction looks like when an AI agent uses the Split Testing API:
User: "Optimize my homepage for more signups"
Agent workflow:
1. The agent calls `GET /properties/prop_abc123/split-test-recommendations?page=/home` and receives 3 recommendations.
2. The agent presents them:

   > I found 3 optimization recommendations for `/home`:
   >
   > 1. **Hero CTA Text** (confidence: 84) — Change "Learn More" → "Start Free Trial" — Expected impact: High (8%+)
   > 2. **Hero CTA Color** (confidence: 71) — Change gray background → orange — Expected impact: Medium (3-8%)
   > 3. **Subheading Copy** (confidence: 58) — Shorten subheading from 2 lines to 1 — Expected impact: Low (<3%)
   >
   > I'd recommend testing #1 and #2 together since they both target the hero CTA and have high confidence. Want me to create this experiment?

3. User confirms. The agent calls `POST /properties/prop_abc123/split-tests` with the selected recommendations mapped to a variant.
4. The agent confirms:

   > Experiment created and live! ID: `exp_xyz789`. It will auto-stop after 14 days. You can monitor results in the Humblytics dashboard under Experiments.
Errors
The Split Testing API uses the same error format as the External Analytics API.
| Status | Code | Description |
| --- | --- | --- |
| 400 | `invalid_request` | Missing required fields, invalid page path, unsupported goal type, or malformed variant data. |
| 401 | `unauthorized` | Missing or invalid API key header. |
| 403 | `forbidden` | Property ID does not match the API key. |
| 404 | `not_found` | The specified page has no analytics data (recommendations endpoint only). |
| 429 | `rate_limited` | Too many requests in a short window. |
| 500 | `internal_error` | Unexpected server error. Retry later or contact support. |
Best Practices
- **Start with recommendations.** Always fetch recommendations before creating a test — they are based on your actual data and give you the highest-probability wins.
- **Test one hypothesis at a time.** While you can bundle multiple changes into a single variant, keep tests focused so you can attribute results clearly.
- **Set auto-stop durations.** Use `auto_stop_days` to prevent experiments from running longer than needed. 14 days is a good default for most sites.
- **Use high-confidence recommendations first.** Prioritize recommendations with a `confidence_score` above 70 for the best chance of statistically significant results.
- **Monitor with the analytics API.** Combine the Split Testing API with the External Analytics API to track experiment performance programmatically.
- **Review before launching.** When using AI agents, always review recommendations before creating tests — the agent should present options and ask for confirmation.