A/B test significance calculator

Is the conversion difference significant? Z-test for two proportions

A/B test significance: a z-test for two proportions. Enter conversion counts and sample sizes to get the p-value and see whether the difference is statistically significant.

Enter your A/B test data

Variant A (control)
Variant B (test)

Assumptions and limitations

  • Two-tailed z-test for two proportions
  • Samples assumed independent
  • n × p > 5 and n × (1 − p) > 5 for both variants (normal approximation)
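The test the calculator runs can be sketched in a few lines of Python. This is a minimal illustration, not the calculator's actual implementation; the function name is a placeholder, and the standard normal CDF is built from `math.erf`:

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-tailed z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-tailed p-value via the standard normal CDF: Phi(z) = (1 + erf(z/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# First example below: A = 100/1000, B = 130/1000
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(round(z, 2), round(p, 3))  # z ≈ 2.1, p ≈ 0.035 → significant at α = 0.05
```

The pooled standard error reflects the null hypothesis that both variants share one true conversion rate.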

In SurveyNinja, NPS, CSAT and more are calculated automatically. Create a survey in 5 minutes and get real-time analytics.

Start free

A/B test examples

1 Conversion: variant A vs B

Variant A: 100 conversions from 1000 (10%). Variant B: 130 conversions from 1000 (13%).
A 3 pp difference. The calculator returns the z-score and p-value; if p < 0.05, the difference is significant at α = 0.05 and variant B is statistically the better performer.

2 CTA clicks

Variant A: 45 clicks from 500 (9%). Variant B: 52 clicks from 500 (10.4%).
A small 1.4 pp difference; the calculator shows whether the sample is large enough to call it significant.

3 Sign-ups

Variant A: 80 from 2000 (4%). Variant B: 120 from 2000 (6%).
A 2 pp difference. At these sample sizes the two-sample z-test shows whether the difference is significant at α = 0.05.

4 Cart abandonment

Control: 200 abandonments from 1000 (20%). New funnel: 150 from 1000 (15%).
A 5 pp drop in abandonment; the calculator checks whether the improvement is statistically significant.

5 Newsletter sign-up

Variant A: 30 sign-ups from 600 (5%). Variant B: 42 from 600 (7%).
A 2 pp increase; with only 600 visits per variant, statistical power may be too low to detect it.

6 Purchase conversion

Old landing: 25 purchases from 500 (5%). New landing: 35 from 500 (7%).
A +2 pp lift; enter the data and check the p-value before deciding.

What to avoid

  1. Stopping the test early. Wait for the planned sample size; early stopping (peeking) distorts the p-value and inflates the false-positive rate.
  2. Comparing many groups without a multiple-comparison correction. The more pairwise comparisons you run, the higher the risk of false positives. Fix one primary metric and the sample size before launch.
  3. Confusing statistical and practical significance. A difference can be significant (p < 0.05) yet too small to matter for the business. Look at the difference in percentage points and at the minimum detectable effect.
  4. Not planning the sample size before launch. Calculate the minimum size with the minimum sample size calculator; otherwise the test may miss a real effect.
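Point 4 can be made concrete. One common approximation for the required sample size per group is (z₁₋α/2 + z₁₋β)² · (p₁(1−p₁) + p₂(1−p₂)) / (p₂ − p₁)². A sketch under the assumptions of a two-tailed test at α = 0.05 and 80% power (the hardcoded z quantiles and the function name are illustrative):

```python
from math import ceil

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per group for a two-proportion z-test.

    z_alpha: 1.96   for a two-tailed test at alpha = 0.05
    z_beta:  0.8416 for 80% power
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Newsletter example above: detecting 5% vs 7% reliably needs far more
# than 600 visits per variant
print(sample_size_per_group(0.05, 0.07))  # → 2210 per group
```

Smaller expected differences blow up the required sample quadratically, which is why the 2 pp examples above need thousands of visitors per variant.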

