Interpreting A/A test results

An A/A test is a test in which the variant group receives the exact same experience as the control group.

What is an A/A test for?

It may seem silly to test two groups that receive the same experience, but the goal of an A/A test isn't to measure the effect of a change against a control group. The goal of an A/A test is to validate the setup of your overall experiment: if randomization, data collection, and analysis are all working correctly, two identical groups should show no meaningful differences.

What does it mean when my two A/A test groups show statistically significant differences?

When you split users into random groups, no two groups will be exactly alike. Even with proper testing methods, such as large sample sizes and random assignment, the metrics for the two groups will likely show some minor differences, and some of those differences may even register as statistically significant. This is expected: at a 5% significance level, roughly 1 in 20 independent metrics will show a statistically significant difference purely by chance.
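
As a quick illustration, here is a minimal Python sketch (the metric's mean, spread, and the sample size are illustrative assumptions, not values from a real experiment). Two groups drawn from the exact same distribution still end up with slightly different sample means, and a t-test will occasionally flag that gap as significant.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two groups sampled from the same population: a true A/A comparison.
group_a = rng.normal(loc=10.0, scale=2.0, size=1_000)
group_b = rng.normal(loc=10.0, scale=2.0, size=1_000)

print(f"mean A = {group_a.mean():.3f}, mean B = {group_b.mean():.3f}")

# The p-value is usually above 0.05, but about 1 run in 20 will dip
# below it purely by chance.
_, p_value = stats.ttest_ind(group_a, group_b)
print(f"p-value = {p_value:.3f}")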

But if many of the metrics show statistically significant differences between the two groups, there is likely a problem with the way the experiment was set up. If the fraction of metrics that reach statistical significance in an A/A test is far above (or far below) the Error Percentage, i.e. the significance level used for the tests (commonly 5%), there is likely inherent bias in the way users are grouped (for example, the randomizer is not functioning correctly).
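
To see why the significance level is the right benchmark, here is a minimal simulation sketch in Python (the number of metrics and users per group are hypothetical assumptions). It runs an A/A test across many independent metrics and counts how often a t-test reports a significant difference; with a healthy randomizer, that fraction should hover near the 5% significance level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05        # significance level: the expected false positive rate
n_metrics = 200     # hypothetical number of metrics tracked in the experiment
n_users = 1_000     # hypothetical number of users per group

significant = 0
for _ in range(n_metrics):
    # In a true A/A test, both groups draw from the same distribution.
    control = rng.normal(loc=10.0, scale=2.0, size=n_users)
    variant = rng.normal(loc=10.0, scale=2.0, size=n_users)
    _, p_value = stats.ttest_ind(control, variant)
    if p_value < alpha:
        significant += 1

rate = significant / n_metrics
print(f"{significant}/{n_metrics} metrics significant ({rate:.1%}); expected ~{alpha:.0%}")

If the same simulation were run with a biased split (for example, systematically assigning higher-value users to one group), the significant fraction would climb well above the significance level, which is exactly the signal an A/A test is designed to surface.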
