##### Two sample $z$ test: sampling distribution of the $z$ statistic

Definition of the sampling distribution of the $z$ statistic

##### Sampling distribution of $z$:

As you may know, when we perform a two sample $z$ test, we compute the $z$ statistic $$z = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$$ based on our group 1 and group 2 samples. Now suppose that we drew many more samples. Specifically, suppose that we drew an infinite number of group 1 and group 2 samples, of sizes $n_1$ and $n_2$ each time. For each pair of group 1 and group 2 samples, we could compute the $z$ statistic $z = \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}}$. Different samples would give different $z$ values. The distribution of all these $z$ values is the sampling distribution of $z$. Note that this sampling distribution is purely hypothetical: we would never really draw an infinite number of samples, but hypothetically, we could.
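The repeated-sampling idea can be approximated by simulation. The sketch below (with hypothetical population parameters $\mu$, $\sigma_1$, $\sigma_2$ and sample sizes $n_1$, $n_2$ chosen purely for illustration) draws many pairs of samples with equal population means and computes $z$ each time; the resulting collection of $z$ values approximates the sampling distribution.

```python
import numpy as np

# Hypothetical population parameters, chosen only for illustration.
# Both groups share the same mean, i.e. the null hypothesis holds.
mu, sigma1, sigma2 = 10.0, 2.0, 3.0
n1, n2 = 40, 50

rng = np.random.default_rng(0)
reps = 100_000  # a large (not infinite) number of repeated samples

# Shortcut: the sample mean of n normal observations is itself normal
# with standard deviation sigma / sqrt(n), so we can draw the means directly.
y1_bars = rng.normal(mu, sigma1 / np.sqrt(n1), reps)
y2_bars = rng.normal(mu, sigma2 / np.sqrt(n2), reps)

# The z statistic for each pair of samples.
se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z_values = (y1_bars - y2_bars) / se

# The simulated sampling distribution is approximately standard normal.
print(z_values.mean(), z_values.std())
```

With enough repetitions, the printed mean and standard deviation come out close to 0 and 1, anticipating the null distribution described in the next subsection.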

##### Sampling distribution of $z$ if $H_0$ were true:

Suppose that the assumptions of the two sample $z$ test hold, and that the null hypothesis that $\mu_1 = \mu_2$ is true. Then the sampling distribution of $z$ is normal with mean 0 and standard deviation 1 (standard normal). That is, most of the time we would find $z$ values close to 0, and only occasionally would we find $z$ values further away from 0. If the $z$ value in our actual sample is far away from 0, this would be a rare event if the null hypothesis were true, and it is therefore considered evidence against the null hypothesis (the $z$ value falls in the rejection region; the $p$ value is small).
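This decision logic can be made concrete with a small computation. The sketch below uses hypothetical sample summaries (the means, standard deviations, and sample sizes are made up for illustration) and evaluates the two-sided $p$ value from the standard normal distribution.

```python
from math import erf, sqrt

def two_sided_p(z: float) -> float:
    """Two-sided p value for a z statistic under the standard normal."""
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # P(Z <= |z|), via the error function
    return 2 * (1 - phi)

# Hypothetical sample summaries, not taken from the text.
y1_bar, y2_bar = 11.1, 9.9
sigma1, sigma2 = 2.0, 3.0
n1, n2 = 40, 50

# z statistic from the formula in the previous subsection.
z = (y1_bar - y2_bar) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)
p = two_sided_p(z)

# At significance level alpha = 0.05, reject H0 when p < 0.05,
# equivalently when |z| > 1.96 (the rejection region).
print(z, p)
```

Here $|z|$ exceeds 1.96, so the $z$ value lands in the rejection region and the $p$ value falls below 0.05.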