A/B Testing & Experiments

Measure the real impact of personalization with built-in split testing.

What is A/B Testing in GetIntent?

GetIntent has built-in A/B testing (called Experiments). By default, every personalization automatically runs as an A/B test — 90% of traffic sees the personalized version, 10% sees the original (control group). This gives you statistically valid conversion lift data without any extra setup.

For more control, you can create custom Experiments with multiple variants, custom traffic splits, and targeted audiences.

Automatic A/B Testing (Default)

Every page with GetIntent active splits traffic automatically; no setup is required.

  • 90% of visitors see the AI-personalized version
  • 10% see the original, unchanged page (control group)
  • Conversion rates are tracked for both groups
  • Statistical significance is calculated automatically
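A split like this is typically done by bucketing each visitor deterministically, so the same person sees the same version on every visit. The sketch below illustrates the idea with a hash-based bucket; it is not GetIntent's actual implementation, and the function name is hypothetical.

```python
import hashlib

def assign_group(visitor_id: str, control_share: float = 0.10) -> str:
    """Deterministically bucket a visitor: the same ID always lands in
    the same group, so repeat visits see a consistent experience.
    Illustrative sketch only."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "control" if bucket < control_share else "personalized"

print(assign_group("visitor-123"))
```

Because the bucket is derived from the visitor ID rather than a random draw at page load, a visitor cannot flip between groups mid-session.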

Custom Experiments

Requires Pro plan or higher.

Navigate to Dashboard > Experiments to get started.

Creating an Experiment

  1. Click "Create Experiment"
  2. Set a name and description
  3. Configure the traffic percentage — the share of visitors who enter this experiment
  4. Add variants — each can have different prompt modifiers that change how the AI personalizes:
    • "Emphasize urgency and limited-time offers"
    • "Focus on social proof and testimonials"
    • "Lead with pricing and value"
  5. Set variant weights (how traffic splits between variants)
  6. Optionally target by UTM source, medium, or campaign
  7. Set minimum sample size and confidence level (default: 95%)
  8. Launch the experiment
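The steps above can be pictured as a single configuration object. The field names below are illustrative, not GetIntent's actual API schema:

```python
# Hypothetical experiment definition mirroring steps 1-8 above.
experiment = {
    "name": "Homepage messaging test",                     # step 2
    "description": "Urgency vs. social-proof angles",
    "traffic_percentage": 50,                              # step 3
    "variants": [                                          # steps 4-5
        {"name": "Urgency",
         "prompt_modifier": "Emphasize urgency and limited-time offers",
         "weight": 50},
        {"name": "Social proof",
         "prompt_modifier": "Focus on social proof and testimonials",
         "weight": 50},
    ],
    "utm_targeting": {"utm_source": "google"},             # step 6 (optional)
    "min_sample_size": 1000,                               # step 7
    "confidence_level": 0.95,
}

# Variant weights should cover the experiment's traffic in full.
assert sum(v["weight"] for v in experiment["variants"]) == 100
```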

Experiment Variants

  • Control: The original, unpersonalized page. Always included automatically.
  • Custom variants: Each variant gets unique AI prompt modifiers that shape how the content is personalized.
  • You can have up to 20 variants per experiment.
  • Weights control traffic distribution (e.g., 50/25/25).
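Weighted distribution means each visitor is routed to a variant with probability proportional to its weight. A minimal sketch of that routing, assuming a 50/25/25 split:

```python
import random

def pick_variant(weights: dict[str, int], rng: random.Random) -> str:
    """Pick a variant with probability proportional to its weight.
    Illustrative sketch of weighted traffic splitting."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

rng = random.Random(42)
weights = {"control": 50, "A": 25, "B": 25}
sample = [pick_variant(weights, rng) for _ in range(10_000)]
print({n: sample.count(n) / len(sample) for n in weights})
```

Over many visitors, the observed shares converge on the configured weights.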

Statistical Significance

GetIntent calculates statistical significance automatically:

  • Default confidence level: 95% (p < 0.05)
  • Shows confidence intervals for each variant's conversion rate
  • Highlights the winning variant when significance is reached
  • Shows improvement percentage vs. control
  • Minimum sample size ensures reliable results
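To build intuition for what "95% confidence (p < 0.05)" means here, the sketch below runs a two-proportion z-test — the standard test for comparing two conversion rates. GetIntent's exact statistical method is not documented here, so treat this as an illustration, not the product's implementation:

```python
from math import erf, sqrt

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test on conversion counts.
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control converts at 2.0%, variant at 2.8%, 5,000 visitors each.
z, p = significance(100, 5000, 140, 5000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

With small samples the same 0.8-point gap would not clear the threshold, which is why the minimum sample size matters.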

Experiment Lifecycle

  1. Draft — Created but not running. Edit freely.
  2. Running — Actively splitting traffic. Don't edit variants mid-experiment.
  3. Paused — Temporarily stopped. Traffic goes to the original page. Resume any time.
  4. Completed — Results are final. Review and apply learnings.
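The lifecycle above amounts to a small state machine. The state names match the docs; the exact transition set is an assumption for illustration:

```python
# Allowed lifecycle transitions (illustrative sketch).
TRANSITIONS = {
    "draft": {"running"},
    "running": {"paused", "completed"},
    "paused": {"running", "completed"},
    "completed": set(),  # results are final
}

def can_transition(current: str, target: str) -> bool:
    return target in TRANSITIONS.get(current, set())

print(can_transition("paused", "running"))      # resume any time
print(can_transition("completed", "running"))   # final, cannot restart
```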

Auto-Optimization

Auto-optimization is an optional setting you can enable on any experiment.

  • When enabled, GetIntent automatically shifts more traffic to the winning variant as data comes in.
  • Reduces the cost of running losing variants.
  • Still maintains enough control traffic for valid measurement.
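One simple way to picture traffic shifting is moving a few points of weight from the worst performer to the best on each update, while protecting a control floor. This is a deliberately naive sketch — production systems typically use a bandit algorithm such as Thompson sampling, and GetIntent's actual strategy is not documented here:

```python
def reallocate(weights: dict[str, float], rates: dict[str, float],
               step: float = 5.0, control_floor: float = 10.0) -> dict[str, float]:
    """Shift `step` points of traffic from the worst-performing variant
    to the best one, never letting control drop below `control_floor`
    (so the control group stays large enough for valid measurement)."""
    best = max(rates, key=rates.get)
    worst = min(rates, key=rates.get)
    floor = control_floor if worst == "control" else 0.0
    moved = min(step, weights[worst] - floor)
    new = dict(weights)
    new[worst] -= moved
    new[best] += moved
    return new

weights = {"control": 20.0, "A": 40.0, "B": 40.0}
rates = {"control": 0.020, "A": 0.031, "B": 0.024}
print(reallocate(weights, rates))
```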

UTM Targeting

  • Target experiments to specific traffic sources.
  • Example: Only run this experiment for Google Ads traffic (utm_source=google).
  • Useful for testing different personalization strategies per channel.
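UTM targeting boils down to checking the landing URL's query parameters against the experiment's rules. A minimal sketch (function name and rule shape are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def matches_utm(url: str, targeting: dict[str, str]) -> bool:
    """Return True if the landing URL's UTM parameters satisfy every
    targeting rule. Illustrative sketch."""
    params = parse_qs(urlparse(url).query)
    return all(params.get(key, [None])[0] == value
               for key, value in targeting.items())

url = "https://example.com/?utm_source=google&utm_medium=cpc"
print(matches_utm(url, {"utm_source": "google"}))    # enters experiment
print(matches_utm(url, {"utm_source": "facebook"}))  # excluded
```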

Viewing Results

Each experiment shows the following for every variant:

  • Variant name
  • Impressions
  • Conversions
  • Conversion rate
  • Improvement vs. control
  • Statistical significance

You can export results as CSV for reporting and further analysis.
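If you analyze the exported CSV yourself, the "improvement vs. control" column is the relative change in conversion rate. A short sketch of that calculation, with column names assumed from the fields listed above:

```python
def lift_vs_control(rows: list[dict]) -> dict[str, float]:
    """Compute each variant's relative conversion-rate improvement over
    control, in percent. Illustrative; row fields are assumptions."""
    rates = {r["variant"]: r["conversions"] / r["impressions"] for r in rows}
    control = rates["Control"]
    return {name: (rate - control) / control * 100
            for name, rate in rates.items() if name != "Control"}

rows = [
    {"variant": "Control", "impressions": 4000, "conversions": 80},
    {"variant": "Urgency", "impressions": 4000, "conversions": 100},
]
print(lift_vs_control(rows))  # 2.0% -> 2.5% is a +25% relative lift
```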

Tips

  • Let experiments run until statistical significance is reached — don't call winners early.
  • Test one variable at a time for the clearest results.
  • Start with high-traffic pages for faster results.
  • Use prompt modifiers to test different messaging angles, not just different words.