Two sample $t$ test - sampling distribution of the difference between two sample means, and its standard error

Definition of the sampling distribution of the difference between two sample means $ \bar{y}_1 - \bar{y}_2$, and its standard error


Sampling distribution of the difference between two sample means $ \bar{y}_1 - \bar{y}_2$:

When we draw a sample of size $n_1$ from population 1 and a sample of size $n_2$ from population 2, we can compute the mean of a variable $y$ in each sample and then the difference between the two sample means: $\bar{y}_1 - \bar{y}_2$. Now suppose that we repeated these steps many times. Specifically, suppose that we drew an infinite number of group 1 and group 2 samples, each time of sizes $n_1$ and $n_2$. For each pair of samples we could compute the difference between the two sample means, $\bar{y}_1 - \bar{y}_2$. Different samples would give different sample means and therefore different differences. The distribution of all these differences $\bar{y}_1 - \bar{y}_2$ is the sampling distribution of $\bar{y}_1 - \bar{y}_2$. Note that this sampling distribution is purely hypothetical: we would never really draw an infinite number of group 1 and group 2 samples, but hypothetically, we could.
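To make this concrete, here is a minimal simulation sketch in Python: we repeatedly draw a group 1 and a group 2 sample, record $\bar{y}_1 - \bar{y}_2$ each time, and look at the distribution of the recorded differences. All population parameters and sample sizes below are made up for illustration; they are not from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

mu1, sigma1, n1 = 10.0, 2.0, 25   # population 1 parameters (hypothetical)
mu2, sigma2, n2 = 8.0, 3.0, 40    # population 2 parameters (hypothetical)

n_reps = 100_000  # "many" repeated samples standing in for infinitely many

# Draw n_reps independent pairs of samples and record the mean difference each time
diffs = (rng.normal(mu1, sigma1, size=(n_reps, n1)).mean(axis=1)
         - rng.normal(mu2, sigma2, size=(n_reps, n2)).mean(axis=1))

print(diffs.mean())                              # close to mu1 - mu2 = 2.0
print(diffs.std(ddof=1))                         # close to the theoretical value below
print(np.sqrt(sigma1**2 / n1 + sigma2**2 / n2))  # sqrt(sigma1^2/n1 + sigma2^2/n2)
```

The empirical mean and standard deviation of `diffs` should come out close to the theoretical mean $\mu_1 - \mu_2$ and standard deviation given in the next part.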

Standard error:

Suppose that the assumptions of the two sample $ t$ test (not assuming equal population variances) hold:

  • Within population 1, the variable $ y$ is normally distributed with mean $\mu_1$ and standard deviation $\sigma_1$; within population 2, the variable $ y$ is normally distributed with mean $\mu_2$ and standard deviation $\sigma_2$
  • The group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, observations are independent of one another both within and between groups
Then the sampling distribution of $\bar{y}_1 - \bar{y}_2$ is normal with mean $\mu_1 - \mu_2$ and standard deviation $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$. Since the two sample $t$ test, unlike the $z$ test, does not assume that the values of $\sigma_1$ and $\sigma_2$ are known, we need to:
  • estimate $\sigma_1$ with $ s_1$: the standard deviation of $ y$ in sample 1
  • estimate $\sigma_2$ with $ s_2$: the standard deviation of $ y$ in sample 2
  • estimate $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ with $ \sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$
That is, we estimate the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$, $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$, with $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$. This estimated standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$ is called the standard error of $\bar{y}_1 - \bar{y}_2$.
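As a small illustrative sketch, the standard error can be computed directly from two observed samples. The data arrays `y1` and `y2` below are hypothetical:

```python
import numpy as np

y1 = np.array([9.1, 11.4, 10.2, 8.7, 12.0, 9.8])   # sample 1 (made-up data)
y2 = np.array([7.5, 8.9, 6.8, 9.4, 7.1])           # sample 2 (made-up data)

s1, s2 = y1.std(ddof=1), y2.std(ddof=1)   # sample standard deviations
n1, n2 = len(y1), len(y2)

# Standard error of ybar1 - ybar2: sqrt(s1^2/n1 + s2^2/n2)
se = np.sqrt(s1**2 / n1 + s2**2 / n2)
print(se)
```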

Note that the $t$ statistic $t = \frac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}}$ thus indicates how many standard errors the observed difference $\bar{y}_1 - \bar{y}_2$ lies from 0, the value of $\mu_1 - \mu_2$ under the null hypothesis.
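For instance, with the same hypothetical data as above, we can compute the $t$ statistic by hand and check it against `scipy.stats.ttest_ind` with `equal_var=False`, which implements the two sample $t$ test without assuming equal population variances (Welch's $t$ test):

```python
import numpy as np
from scipy import stats

y1 = np.array([9.1, 11.4, 10.2, 8.7, 12.0, 9.8])   # sample 1 (made-up data)
y2 = np.array([7.5, 8.9, 6.8, 9.4, 7.1])           # sample 2 (made-up data)

# t = (observed difference - 0) / standard error
se = np.sqrt(y1.var(ddof=1) / len(y1) + y2.var(ddof=1) / len(y2))
t = (y1.mean() - y2.mean() - 0) / se

print(t)
print(stats.ttest_ind(y1, y2, equal_var=False).statistic)  # should match
```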