# Split Testing API

The Split Testing API lets you programmatically get AI-powered A/B test recommendations and create split tests (no-code visual tests and redirect tests) — all from your AI coding agent or any HTTP client. This is ideal for teams that want to automate conversion rate optimization (CRO) workflows using tools like Claude Code, Cursor, or custom scripts.

> **Base URL**: `https://app.humblytics.com/api/external/v1`

> **New to split testing?** Read the [Split Testing Overview](https://docs.humblytics.com/split-testing-overview) for conceptual background on how Humblytics A/B testing works, goal types, and statistical methodology before diving into the API.

> **Looking for analytics endpoints?** The [External Analytics API](https://docs.humblytics.com/external-analytics-api) covers traffic, pages, clicks, forms, and funnels data — useful for monitoring experiment performance alongside these split testing endpoints.

***

## Use Cases

### Automated CRO with AI Agents

The Split Testing API is designed to work hand-in-hand with AI coding agents. Here are the primary use cases:

| Use Case                           | How It Works                                                                                                                                            |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **AI-driven test ideation**        | Your agent calls the recommendations endpoint, receives data-backed suggestions (selectors, copy, CSS changes), and presents them for review.           |
| **Hands-free experiment creation** | After reviewing recommendations, the agent creates a split test via the API — no manual setup in the dashboard required.                                |
| **Continuous optimization loops**  | An agent periodically fetches recommendations for key pages, creates tests for high-confidence suggestions, and monitors results via the analytics API. |
| **Multi-page audit**               | The agent iterates over your top landing pages, fetches recommendations for each, and builds a prioritized optimization backlog.                        |

***

## Authentication

The Split Testing API uses the same property-scoped API keys as the External Analytics API.

```http
Authorization: Bearer <your_api_key>
```

See [External Analytics API — Authentication](https://docs.humblytics.com/external-analytics-api#authentication) for key generation and management details.
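For scripted access, the bearer token is attached like any standard `Authorization` header. A minimal Python sketch using only the standard library, assuming (as in the curl examples) that the key is exported in the `HUMBLYTICS_API_KEY` environment variable:

```python
import os
import urllib.request

# Assumes the key is exported as HUMBLYTICS_API_KEY, matching the curl examples.
API_KEY = os.environ.get("HUMBLYTICS_API_KEY", "demo_key")
BASE_URL = "https://app.humblytics.com/api/external/v1"


def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for a Split Testing API path (not yet sent)."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )


req = build_request("/properties/PROPERTY_ID/split-tests")
# Send with urllib.request.urlopen(req) when ready.
```

The same pattern works for POST and PATCH calls by passing `data=` and `method=` to `Request`.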

***

## Get Split Test Recommendations

```
GET /properties/{propertyId}/split-test-recommendations
```

Returns AI-generated A/B test recommendations for a specific page, based on your existing analytics data (traffic patterns, click behavior, bounce rates, and more). Each recommendation includes the exact CSS selector, change type, control value, and suggested variant value — ready to feed directly into the split test creation endpoint.

### Query Parameters

| Name   | Type   | Required | Default | Notes                                                                |
| ------ | ------ | -------- | ------- | -------------------------------------------------------------------- |
| `page` | string | Yes      | N/A     | The page path to get recommendations for (e.g. `/home`, `/pricing`). |

### Sample Request

```bash
curl \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-test-recommendations?page=/home"
```

### Sample Response

```json
{
  "meta": {
    "property_id": "prop_abc123",
    "page": "/home",
    "page_url": "https://yoursite.com/home"
  },
  "overall_analysis": "Your hero section has high traffic but a 78% bounce rate. The primary CTA is underperforming relative to page views, suggesting copy and visual prominence changes could yield significant gains.",
  "key_insights": [
    "CTA button receives only 3% of clicks despite above-the-fold placement",
    "Users spend an average of 4.2 seconds on the hero section before scrolling",
    "Mobile bounce rate is 12% higher than desktop on this page"
  ],
  "recommendations": [
    {
      "id": "rec_1",
      "title": "Hero CTA Text",
      "element_selector": "button.hero-cta",
      "element_type": "button",
      "change_type": "text",
      "control_value": "Learn More",
      "variant_value": "Start Free Trial",
      "confidence_score": 84,
      "expected_impact": "High: 8%+"
    },
    {
      "id": "rec_2",
      "title": "Hero CTA Color",
      "element_selector": "button.hero-cta",
      "element_type": "button",
      "change_type": "css",
      "control_value": "{ \"backgroundColor\": \"#6b7280\" }",
      "variant_value": "{ \"backgroundColor\": \"#ff6b35\" }",
      "confidence_score": 71,
      "expected_impact": "Medium: 3-8%"
    }
  ]
}
```

### Response Fields

| Field                                | Description                                                                                  |
| ------------------------------------ | -------------------------------------------------------------------------------------------- |
| `overall_analysis`                   | A human-readable summary of the page's performance and optimization opportunities.           |
| `key_insights`                       | Array of specific data-backed observations about user behavior on the page.                  |
| `recommendations[].id`               | Unique recommendation ID (useful for logging and tracking).                                  |
| `recommendations[].element_selector` | CSS selector for the target element — use this directly in the split test creation endpoint. |
| `recommendations[].element_type`     | The type of HTML element (e.g. `button`, `heading`, `image`).                                |
| `recommendations[].change_type`      | The kind of change: `text` (copy change) or `css` (style change).                            |
| `recommendations[].control_value`    | The current value on your live page.                                                         |
| `recommendations[].variant_value`    | The suggested new value to test against the control.                                         |
| `recommendations[].confidence_score` | 0–100 score indicating how confident the AI is that this change will improve performance.    |
| `recommendations[].expected_impact`  | Estimated impact level: `Low: <3%`, `Medium: 3-8%`, or `High: 8%+`.                          |

### Notes

* Recommendations are generated from your actual analytics data — they are not generic suggestions.
* Higher `confidence_score` values indicate stronger data signals behind the recommendation.
* The `element_selector` and `change_type` fields map directly to the `selector` and `op` fields in the split test creation endpoint, making it easy to pipe recommendations into live experiments.
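The mapping in the last note can be written directly in code. A minimal sketch, where the `rec` dict mirrors the sample response above and the helper name is illustrative, not part of the API. Note that the sample recommendation serializes `css` values as JSON strings, while the creation endpoint's sample passes an object, so the sketch decodes them:

```python
import json


def recommendation_to_change(rec: dict) -> dict:
    """Map a recommendation onto the change object expected by the
    split test creation endpoint:
    element_selector -> selector, change_type -> op, variant_value -> value."""
    value = rec["variant_value"]
    # css recommendations arrive as a JSON string; the creation
    # endpoint's sample passes an object, so decode first.
    if rec["change_type"] == "css" and isinstance(value, str):
        value = json.loads(value)
    return {
        "selector": rec["element_selector"],
        "op": rec["change_type"],
        "value": value,
    }


rec = {
    "id": "rec_2",
    "element_selector": "button.hero-cta",
    "change_type": "css",
    "variant_value": "{ \"backgroundColor\": \"#ff6b35\" }",
}
change = recommendation_to_change(rec)
```

The resulting `change` dict can be placed directly into a variant's `changes` array.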

***

## Create a Split Test

```
POST /properties/{propertyId}/split-tests
```

Creates and activates an A/B split test. Two test types are supported:

* **`nocode`** — Visual/DOM test. The Humblytics script injects element-level changes (text swaps, CSS overrides, HTML replacements, image swaps, element deletions, or duplications) at runtime — no code deployment required.
* **`redirect`** — Redirect test. Visitors are sent to entirely different page URLs instead of seeing DOM changes on the same page.

### Request Body

| Field            | Type   | Required | Default | Notes                                                         |
| ---------------- | ------ | -------- | ------- | ------------------------------------------------------------- |
| `name`           | string | Yes      | N/A     | A descriptive name for the experiment.                        |
| `page`           | string | Yes      | N/A     | The page path to run the test on (e.g. `/home`).              |
| `type`           | string | Yes      | N/A     | `nocode` for visual/DOM tests, `redirect` for redirect tests. |
| `goal`           | string | No       | N/A     | The optimization goal. See goal options below.                |
| `auto_stop_days` | number | No       | N/A     | Automatically stop the test after this many days.             |
| `variants`       | array  | Yes      | N/A     | One or more variant definitions (see below).                  |

**Goal Options**

| Value                  | Description                                     |
| ---------------------- | ----------------------------------------------- |
| `click_through`        | Optimize for click-through rate on the page.    |
| `form_submission`      | Optimize for form submission rate.              |
| `bounce_rate`          | Optimize for reduced bounce rate.               |
| `session_time`         | Optimize for increased session duration.        |
| `destination_page`     | Optimize for visitors reaching a specific page. |
| `external_destination` | Optimize for visitors reaching an external URL. |
| `revenue`              | Optimize for revenue per visitor.               |

### No-Code Variant Object (`type: "nocode"`)

| Field     | Type   | Required | Notes                                                               |
| --------- | ------ | -------- | ------------------------------------------------------------------- |
| `label`   | string | No       | A descriptive label for this variant (e.g. `Variant B — new copy`). |
| `changes` | array  | Yes      | One or more changes to apply. See below.                            |

**Change Object**

| Field      | Type             | Required | Notes                                                                                                                                                                                                         |
| ---------- | ---------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `selector` | string           | Yes      | CSS selector for the target element.                                                                                                                                                                          |
| `op`       | string           | Yes      | The operation: `text` (replace text content), `css` (apply CSS styles), `html` (replace inner HTML), `image` (swap image source), `delete` (remove element), or `duplicate` (clone element).                  |
| `value`    | string or object | Yes      | The new value. For `css`, pass a JSON object of CSS properties. For `text`, `html`, and `image`, pass a string. For `delete` and `duplicate`, the value is ignored but still required (pass an empty string). |
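The rules in the change-object table can be checked client-side before sending a request. A sketch of such a validator, under the assumptions stated in the table; the function name and error strings are illustrative:

```python
VALID_OPS = {"text", "css", "html", "image", "delete", "duplicate"}


def validate_change(change: dict) -> list[str]:
    """Return a list of problems with a change object (empty list = valid)."""
    errors = []
    if not change.get("selector"):
        errors.append("selector is required")
    op = change.get("op")
    if op not in VALID_OPS:
        errors.append(f"op must be one of {sorted(VALID_OPS)}")
    if "value" not in change:
        # value is always required; for delete/duplicate pass an empty string
        errors.append('value is required (use "" for delete/duplicate)')
    elif op == "css" and not isinstance(change["value"], dict):
        errors.append("css value must be an object of CSS properties")
    return errors


ok = validate_change({"selector": "button.hero-cta", "op": "text", "value": "Start Free Trial"})
bad = validate_change({"selector": "button.hero-cta", "op": "recolor"})
```

Running the validator before the POST turns a `400 invalid_request` round trip into an immediate local error.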

### Redirect Variant Object (`type: "redirect"`)

| Field   | Type   | Required | Notes                                                                                               |
| ------- | ------ | -------- | --------------------------------------------------------------------------------------------------- |
| `label` | string | No       | A descriptive label for this variant.                                                               |
| `url`   | string | Yes      | The path to redirect to (e.g. `"/home-v2"`). Must differ from `page` and be unique across variants. |
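Both constraints on `url` can likewise be checked before submitting. A sketch, with an illustrative helper name:

```python
def validate_redirect_variants(page: str, variants: list[dict]) -> list[str]:
    """Check the two rules from the table: every variant url must differ
    from the test page, and urls must be unique across variants."""
    errors = []
    seen = set()
    for v in variants:
        url = v.get("url")
        if not url:
            errors.append("each redirect variant needs a url")
            continue
        if url == page:
            errors.append(f"variant url {url!r} must differ from page {page!r}")
        if url in seen:
            errors.append(f"duplicate variant url {url!r}")
        seen.add(url)
    return errors


errors = validate_redirect_variants("/home", [{"url": "/home-v2"}, {"url": "/home-v2"}])
```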

### Sample Request — No-Code Test

```bash
curl -X POST \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Hero CTA Optimization",
    "page": "/home",
    "type": "nocode",
    "goal": "click_through",
    "auto_stop_days": 14,
    "variants": [
      {
        "label": "Variant B — new copy + color",
        "changes": [
          {
            "selector": "button.hero-cta",
            "op": "text",
            "value": "Start Free Trial"
          },
          {
            "selector": "button.hero-cta",
            "op": "css",
            "value": { "backgroundColor": "#ff6b35" }
          }
        ]
      }
    ]
  }' \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests"
```

### Sample Request — Redirect Test

```bash
curl -X POST \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Homepage Redesign Test",
    "page": "/home",
    "type": "redirect",
    "goal": "form_submission",
    "auto_stop_days": 14,
    "variants": [
      {
        "label": "Redesigned Homepage",
        "url": "/home-v2"
      }
    ]
  }' \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests"
```

### Sample Response (201 Created)

```json
{
  "experiment_id": "exp_xyz789",
  "name": "Hero CTA Optimization",
  "status": "active",
  "created_at": "2026-03-27T10:00:00.000Z",
  "variants": [
    {
      "label": "Control",
      "is_control": true,
      "changes": []
    },
    {
      "label": "Variant B — new copy + color",
      "is_control": false,
      "changes": [
        { "selector": "button.hero-cta", "op": "text", "value": "Start Free Trial" },
        { "selector": "button.hero-cta", "op": "css", "value": { "backgroundColor": "#ff6b35" } }
      ]
    }
  ]
}
```

### Notes

* A **Control** variant is automatically created — you only need to define the variants you want to test against it.
* The test goes **active immediately** upon creation. Visitors will start being assigned to variants right away.
* **No-code tests**: Changes are applied at runtime by the Humblytics script. No code changes or redeployments are needed on your site. You can combine multiple changes per variant (e.g. text + CSS on the same element, or changes across multiple elements).
* **Redirect tests**: Each variant's `url` must differ from the `page` and be unique across variants. The Humblytics script handles the redirect transparently.
* Use `auto_stop_days` to prevent tests from running indefinitely. If omitted, the test runs until manually stopped.

***

## List Split Tests

```
GET /properties/{propertyId}/split-tests
```

Returns all experiments for the property. Optionally filter by status.

### Query Parameters

| Name     | Type | Required | Default | Notes                             |
| -------- | ---- | -------- | ------- | --------------------------------- |
| `status` | enum | No       | N/A     | Filter by `active` or `complete`. |

### Sample Request

```bash
curl \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests?status=active"
```

### Sample Response

```json
{
  "data": [
    {
      "experiment_id": "exp_xyz789",
      "name": "Hero CTA Optimization",
      "page": "/home",
      "status": "active",
      "goal": "click_through",
      "created_at": "2026-03-27T10:00:00.000Z",
      "variant_count": 2
    }
  ]
}
```

***

## Get Split Test Details

```
GET /properties/{propertyId}/split-tests/{experimentId}
```

Returns full experiment details including per-variant metrics.

### Sample Request

```bash
curl \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests/exp_xyz789"
```

### Sample Response

```json
{
  "experiment_id": "exp_xyz789",
  "name": "Hero CTA Optimization",
  "page": "/home",
  "status": "active",
  "goal": "click_through",
  "created_at": "2026-03-27T10:00:00.000Z",
  "auto_stop_days": 14,
  "variants": [
    {
      "label": "Control",
      "is_control": true,
      "sessions": 1523,
      "conversions": 198,
      "conversion_rate": 0.13,
      "changes": []
    },
    {
      "label": "Variant B — new copy + color",
      "is_control": false,
      "sessions": 1489,
      "conversions": 247,
      "conversion_rate": 0.166,
      "changes": [
        { "selector": "button.hero-cta", "op": "text", "value": "Start Free Trial" },
        { "selector": "button.hero-cta", "op": "css", "value": { "backgroundColor": "#ff6b35" } }
      ]
    }
  ]
}
```

### Notes

* Per-variant metrics (`sessions`, `conversions`, `conversion_rate`) are returned so you can programmatically evaluate experiment performance.
* `conversion_rate` is a decimal (e.g. `0.166` = 16.6%).
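Because the metrics come back as plain numbers, relative lift can be computed directly from the details response. A small sketch using the sample figures above (control at `0.13`, variant at `0.166`):

```python
def relative_lift(control_rate: float, variant_rate: float) -> float:
    """Relative improvement of the variant over the control, as a fraction."""
    if control_rate == 0:
        raise ValueError("control conversion rate is zero; lift is undefined")
    return (variant_rate - control_rate) / control_rate


# Sample numbers from the response above: control 0.13, variant 0.166.
lift = relative_lift(0.13, 0.166)
print(f"Variant B lift: {lift:.1%}")  # prints "Variant B lift: 27.7%"
```

Note this is raw observed lift, not a statement of statistical significance; see the Split Testing Overview for the statistical methodology.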

***

## Update a Split Test

```
PATCH /properties/{propertyId}/split-tests/{experimentId}
```

Updates an active experiment. Only the fields you include in the body are changed.

### Request Body

| Field            | Type   | Required | Notes                                 |
| ---------------- | ------ | -------- | ------------------------------------- |
| `name`           | string | No       | Update the experiment name.           |
| `auto_stop_days` | number | No       | Update or set the auto-stop duration. |

### Sample Request

```bash
curl -X PATCH \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Hero CTA Optimization v2",
    "auto_stop_days": 21
  }' \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests/exp_xyz789"
```

### Sample Response

```json
{
  "experiment_id": "exp_xyz789",
  "name": "Hero CTA Optimization v2",
  "status": "active",
  "auto_stop_days": 21
}
```

***

## Stop a Split Test

```
POST /properties/{propertyId}/split-tests/{experimentId}/stop
```

Stops a running experiment. Optionally include a reason.

### Request Body

| Field    | Type   | Required | Notes                                  |
| -------- | ------ | -------- | -------------------------------------- |
| `reason` | string | No       | Optional reason for stopping the test. |

### Sample Request

```bash
curl -X POST \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "reason": "Variant B is a clear winner" }' \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests/exp_xyz789/stop"
```

### Sample Response

```json
{
  "experiment_id": "exp_xyz789",
  "status": "complete",
  "stopped_at": "2026-04-10T14:30:00.000Z",
  "reason": "Variant B is a clear winner"
}
```

### Notes

* Once stopped, a test cannot be restarted. Create a new test if you want to continue testing.
* If no reason is provided, the test is recorded as manually stopped.

***

## End-to-End Workflow: Recommendations to Live Test

The most powerful pattern is chaining both endpoints — fetch recommendations, then create a test from the highest-confidence suggestion:

### Step 1 — Get Recommendations

```bash
curl \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-test-recommendations?page=/home"
```

Review the `recommendations` array. Pick the entries with the highest `confidence_score` values.
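This selection step is easy to automate. A sketch that keeps only high-confidence entries, strongest first (the threshold of 70 echoes the best-practices guidance; the helper name is illustrative):

```python
def pick_recommendations(recs: list[dict], min_confidence: int = 70) -> list[dict]:
    """Keep recommendations at or above the confidence threshold, strongest first."""
    strong = [r for r in recs if r.get("confidence_score", 0) >= min_confidence]
    return sorted(strong, key=lambda r: r["confidence_score"], reverse=True)


# Mirrors the confidence scores from the sample response.
recs = [
    {"id": "rec_1", "confidence_score": 84},
    {"id": "rec_2", "confidence_score": 71},
    {"id": "rec_3", "confidence_score": 58},
]
chosen = pick_recommendations(recs)
```

The chosen entries can then be mapped to change objects and sent to the creation endpoint in Step 2.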

### Step 2 — Create the Split Test

Use the `element_selector` as the `selector` and `change_type` as the `op`:

```bash
curl -X POST \
  -H "Authorization: Bearer $HUMBLYTICS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Hero CTA Optimization",
    "page": "/home",
    "type": "nocode",
    "goal": "click_through",
    "auto_stop_days": 14,
    "variants": [
      {
        "label": "Variant B — new copy + color",
        "changes": [
          {
            "selector": "button.hero-cta",
            "op": "text",
            "value": "Start Free Trial"
          },
          {
            "selector": "button.hero-cta",
            "op": "css",
            "value": { "backgroundColor": "#ff6b35" }
          }
        ]
      }
    ]
  }' \
  "https://app.humblytics.com/api/external/v1/properties/PROPERTY_ID/split-tests"
```

### Step 3 — Monitor Results

Use the [External Analytics API](https://docs.humblytics.com/external-analytics-api) and the Humblytics dashboard to track experiment performance over time.

***

## Using the Split Testing API with AI Agents

The Split Testing API is purpose-built for AI agent workflows. Below are ready-to-use instructions for the most popular AI coding tools. You can also find pre-filled versions of these instructions (with your property ID and API key already populated) in the Humblytics dashboard under **Utilities → API Access → AI Agent Instructions**.

### Claude Code

Add the following to your project's `CLAUDE.md` file or paste it into your Claude Code session. Replace `PROPERTY_ID` with your actual property ID:

```
## Humblytics API

You have access to the Humblytics API for analytics and A/B testing. Use curl via the Bash tool to call these endpoints.

- Base URL: https://app.humblytics.com/api/external/v1
- Property ID: PROPERTY_ID
- API Key: stored in environment variable HUMBLYTICS_API_KEY
- Auth header: Authorization: Bearer $HUMBLYTICS_API_KEY

### Split Testing Endpoints

1. GET /properties/{propertyId}/split-test-recommendations?page={page}
   - Returns AI-powered A/B test recommendations for a page
   - Response includes: overall_analysis, key_insights, recommendations[] with element_selector, change_type (text|css), control_value, variant_value, confidence_score (0-100), expected_impact

2. POST /properties/{propertyId}/split-tests
   - Creates and activates a split test (goes live immediately). Two types supported:
   - type: "nocode" — Visual/DOM test. Body: { name, page, type: "nocode", goal?, auto_stop_days?, variants: [{ label?, changes: [{ selector, op, value }] }] }
     - op is one of: text | css | html | image | delete | duplicate
   - type: "redirect" — Redirect test. Body: { name, page, type: "redirect", goal?, auto_stop_days?, variants: [{ label?, url }] }
     - url is the path to redirect to (must differ from page and be unique across variants)
   - Goal options: click_through, form_submission, bounce_rate, session_time, destination_page, external_destination, revenue
   - Map recommendation fields: element_selector → selector, change_type → op, variant_value → value

### Split Test Management Endpoints

3. GET /properties/{propertyId}/split-tests?status=active|complete
   - List all experiments, optionally filtered by status

4. GET /properties/{propertyId}/split-tests/{experimentId}
   - Get full experiment details with per-variant metrics

5. PATCH /properties/{propertyId}/split-tests/{experimentId}
   - Update an active experiment. Body: { name?, auto_stop_days? }

6. POST /properties/{propertyId}/split-tests/{experimentId}/stop
   - Stop a running experiment. Body: { reason? }

### Analytics Endpoints (for monitoring)

7. GET /properties/{propertyId}/traffic/trends?start={date}&end={date}&granularity=day
   - Time-series page views and unique visitors

8. GET /properties/{propertyId}/traffic/summary?start={date}&end={date}
   - Aggregate metrics: pageviews, sessions, bounce rate, avg session duration

9. GET /properties/{propertyId}/traffic/realtime
   - Live visitor count and feed (no date params needed)

10. GET /properties/{propertyId}/traffic/breakdown?start={date}&end={date}&limit=20
    - Top traffic segments: UTM, countries, devices, landing pages, referrers

11. GET /properties/{propertyId}/traffic/entry-exit-pages?start={date}&end={date}
    - Top entry and exit pages

12. GET /properties/{propertyId}/pages/breakdown?start={date}&end={date}
    - All page performance: views, visitors, scroll depth, bounce rate

13. GET /properties/{propertyId}/pages/details?page={path}&start={date}&end={date}
    - Single page deep dive with UTM, device, country breakdowns

14. GET /properties/{propertyId}/clicks/breakdown?start={date}&end={date}
    - Click events grouped by page and target

15. GET /properties/{propertyId}/clicks/details?page={path}&start={date}&end={date}
    - Clicks for a specific page with UTM attribution

16. GET /properties/{propertyId}/forms/breakdown?start={date}&end={date}
    - Form submissions grouped by page and form target

17. GET /properties/{propertyId}/forms/details?page={path}&start={date}&end={date}
    - Form details for a specific page with conversion rates

18. GET /properties/{propertyId}/funnels?steps={json}&start={date}&end={date}&mode=sequential
    - Run a funnel query with defined steps

19. GET /properties/{propertyId}/funnels/suggestions?page={path}
    - AI-generated funnel suggestions for a page

### Workflow

When asked to optimize a page or run an A/B test:
1. Fetch recommendations for the target page using the recommendations endpoint
2. Present the recommendations to the user with confidence scores and expected impact
3. Ask which recommendations to implement (or suggest high-confidence ones above 70)
4. Create the split test using the selectors and values from the chosen recommendations
5. Confirm the experiment is active and provide the experiment ID
6. Use the split test details endpoint to monitor per-variant metrics

When asked to check analytics or site performance:
1. Use traffic/summary for a quick overview, traffic/trends for time-series data
2. Use traffic/breakdown for segment analysis
3. Use pages/breakdown or pages/details for page-level analysis
4. Use clicks/breakdown or forms/breakdown for event-level data
5. Use funnels to analyze conversion paths
```

### Cursor

Add the following to your project's `.cursor/rules` file or `.cursorrules`. Replace `PROPERTY_ID` with your actual property ID:

```
## Humblytics API

You have access to the Humblytics API for analytics and A/B testing. Call these endpoints using curl or fetch.

- Base URL: https://app.humblytics.com/api/external/v1
- Property ID: PROPERTY_ID
- API Key: stored in environment variable HUMBLYTICS_API_KEY
- Auth header: Authorization: Bearer $HUMBLYTICS_API_KEY

### Split Testing Endpoints

1. GET /properties/{propertyId}/split-test-recommendations?page={page}
   - Returns AI-powered A/B test recommendations for a page
   - Response includes: overall_analysis, key_insights, recommendations[] with element_selector, change_type (text|css), control_value, variant_value, confidence_score (0-100), expected_impact

2. POST /properties/{propertyId}/split-tests
   - Creates and activates a split test (goes live immediately). Two types:
   - type: "nocode" — Visual/DOM test. Body: { name, page, type: "nocode", goal?, auto_stop_days?, variants: [{ label?, changes: [{ selector, op, value }] }] }
     - op is one of: text | css | html | image | delete | duplicate
   - type: "redirect" — Redirect test. Body: { name, page, type: "redirect", goal?, auto_stop_days?, variants: [{ label?, url }] }
     - url is the path to redirect to (must differ from page and be unique across variants)
   - Goal options: click_through, form_submission, bounce_rate, session_time, destination_page, external_destination, revenue
   - Map recommendation fields: element_selector → selector, change_type → op, variant_value → value

### Split Test Management Endpoints

3. GET /properties/{propertyId}/split-tests?status=active|complete — List experiments
4. GET /properties/{propertyId}/split-tests/{experimentId} — Experiment details with per-variant metrics
5. PATCH /properties/{propertyId}/split-tests/{experimentId} — Update experiment { name?, auto_stop_days? }
6. POST /properties/{propertyId}/split-tests/{experimentId}/stop — Stop experiment { reason? }

### Analytics Endpoints (for monitoring)

7. GET /properties/{propertyId}/traffic/trends?start={date}&end={date}&granularity=day
8. GET /properties/{propertyId}/traffic/summary?start={date}&end={date}
9. GET /properties/{propertyId}/traffic/realtime — no date params
10. GET /properties/{propertyId}/traffic/breakdown?start={date}&end={date}&limit=20
11. GET /properties/{propertyId}/traffic/entry-exit-pages?start={date}&end={date}
12. GET /properties/{propertyId}/pages/breakdown?start={date}&end={date}
13. GET /properties/{propertyId}/pages/details?page={path}&start={date}&end={date}
14. GET /properties/{propertyId}/clicks/breakdown?start={date}&end={date}
15. GET /properties/{propertyId}/clicks/details?page={path}&start={date}&end={date}
16. GET /properties/{propertyId}/forms/breakdown?start={date}&end={date}
17. GET /properties/{propertyId}/forms/details?page={path}&start={date}&end={date}
18. GET /properties/{propertyId}/funnels?steps={json}&start={date}&end={date}&mode=sequential
19. GET /properties/{propertyId}/funnels/suggestions?page={path}

### Workflow

When asked to optimize a page or run an A/B test:
1. Fetch recommendations for the target page
2. Present recommendations with confidence scores and expected impact
3. Ask which to implement (or suggest high-confidence ones above 70)
4. Create the split test using selectors and values from chosen recommendations
5. Confirm the experiment is active and provide the experiment ID
6. Use split test details endpoint to monitor per-variant metrics
```

### Generic (Any AI Agent)

For any other AI coding assistant, provide these instructions in its system prompt or project context:

```
You can interact with the Humblytics API to get data-driven A/B test recommendations,
create no-code experiments, and query analytics data.

Base URL: https://app.humblytics.com/api/external/v1
Auth: Bearer token via Authorization header

SPLIT TESTING:

STEP 1 — Get recommendations:
GET /properties/{propertyId}/split-test-recommendations?page={page}
Returns: overall_analysis, key_insights, and recommendations[] with element_selector,
change_type (text|css), control_value, variant_value, confidence_score, expected_impact.

STEP 2 — Create a split test:
POST /properties/{propertyId}/split-tests
Content-Type: application/json

Two types supported:

type: "nocode" — Visual/DOM test:
Body: {
  "name": "...",
  "page": "/...",
  "type": "nocode",
  "goal": "click_through|form_submission|bounce_rate|session_time|destination_page|external_destination|revenue",
  "auto_stop_days": 14,
  "variants": [{
    "label": "Variant B",
    "changes": [{ "selector": "...", "op": "text|css|html|image|delete|duplicate", "value": "..." }]
  }]
}

type: "redirect" — Redirect test:
Body: {
  "name": "...",
  "page": "/...",
  "type": "redirect",
  "goal": "...",
  "auto_stop_days": 14,
  "variants": [{
    "label": "Variant B",
    "url": "/page-v2"
  }]
}
Note: url must differ from page and be unique across variants.
goal and label are optional for both types.

Returns: experiment_id, status, variants[] with control auto-created.

Map recommendation fields to split test fields:
  element_selector → selector
  change_type → op
  variant_value → value

SPLIT TEST MANAGEMENT:
GET /properties/{propertyId}/split-tests?status=active|complete — List experiments
GET /properties/{propertyId}/split-tests/{experimentId} — Experiment details with per-variant metrics
PATCH /properties/{propertyId}/split-tests/{experimentId} — Update { name?, auto_stop_days? }
POST /properties/{propertyId}/split-tests/{experimentId}/stop — Stop experiment { reason? }

ANALYTICS (for monitoring):
GET /properties/{propertyId}/traffic/trends?start={date}&end={date}&granularity=day
GET /properties/{propertyId}/traffic/summary?start={date}&end={date}
GET /properties/{propertyId}/traffic/realtime — no date params needed
GET /properties/{propertyId}/traffic/breakdown?start={date}&end={date}&limit=20
GET /properties/{propertyId}/traffic/entry-exit-pages?start={date}&end={date}
GET /properties/{propertyId}/pages/breakdown?start={date}&end={date}
GET /properties/{propertyId}/pages/details?page={path}&start={date}&end={date}
GET /properties/{propertyId}/clicks/breakdown?start={date}&end={date}
GET /properties/{propertyId}/clicks/details?page={path}&start={date}&end={date}
GET /properties/{propertyId}/forms/breakdown?start={date}&end={date}
GET /properties/{propertyId}/forms/details?page={path}&start={date}&end={date}
GET /properties/{propertyId}/funnels?steps={json}&start={date}&end={date}&mode=sequential
GET /properties/{propertyId}/funnels/suggestions?page={path}

All endpoints require start and end dates in ISO-8601 format and an optional timezone parameter
(except /traffic/realtime and /funnels/suggestions which don't need date ranges).

Always present recommendations to the user before creating tests.
```

### Example Agent Conversation

Here is what a typical interaction looks like when an AI agent uses the Split Testing API:

**User**: "Optimize my homepage for more signups"

**Agent workflow**:

1. The agent calls `GET /properties/prop_abc123/split-test-recommendations?page=/home` and receives 3 recommendations.
2. The agent presents them:

   > I found 3 optimization recommendations for `/home`:
   >
   > 1. **Hero CTA Text** (confidence: 84) — Change "Learn More" → "Start Free Trial" — Expected impact: High (8%+)
   > 2. **Hero CTA Color** (confidence: 71) — Change gray background → orange — Expected impact: Medium (3-8%)
   > 3. **Subheading Copy** (confidence: 58) — Shorten subheading from 2 lines to 1 — Expected impact: Low (<3%)
   >
   > I'd recommend testing #1 and #2 together since they both target the hero CTA and have high confidence. Want me to create this experiment?
3. User confirms. The agent calls `POST /properties/prop_abc123/split-tests` with the selected recommendations mapped to a variant.
4. The agent confirms:

   > Experiment created and live! ID: `exp_xyz789`. It will auto-stop after 14 days. You can monitor results in the Humblytics dashboard under Experiments.

***

## Errors

The Split Testing API uses the same error format as the [External Analytics API](https://docs.humblytics.com/external-analytics-api#errors).

| Status | Code              | When it happens                                                                               |
| ------ | ----------------- | --------------------------------------------------------------------------------------------- |
| `400`  | `invalid_request` | Missing required fields, invalid page path, unsupported goal type, or malformed variant data. |
| `401`  | `unauthorized`    | Missing or invalid API key header.                                                            |
| `403`  | `forbidden`       | Property ID does not match the API key.                                                       |
| `404`  | `not_found`       | The specified page has no analytics data (recommendations endpoint only).                     |
| `429`  | `rate_limited`    | Too many requests in a short window.                                                          |
| `500`  | `internal_error`  | Unexpected server error. Retry later or contact support.                                      |

***

## Best Practices

* **Start with recommendations.** Always fetch recommendations before creating a test — they are based on your actual data and give you the highest-probability wins.
* **Test one hypothesis at a time.** While you can bundle multiple changes into a single variant, keep tests focused so you can attribute results clearly.
* **Set auto-stop durations.** Use `auto_stop_days` to prevent experiments from running longer than needed. 14 days is a good default for most sites.
* **Use high-confidence recommendations first.** Prioritize recommendations with `confidence_score` above 70 for the best chance of statistically significant results.
* **Monitor with the analytics API.** Combine the Split Testing API with the [External Analytics API](https://docs.humblytics.com/external-analytics-api) to track experiment performance programmatically.
* **Review before launching.** When using AI agents, always review recommendations before creating tests — the agent should present options and ask for confirmation.
