Two sample t test - equal variances assumed - overview

This page offers a structured overview of two methods, presented side by side for comparison.

The two methods compared:
  • Two sample $t$ test - equal variances assumed
  • $z$ test for a single proportion
Independent variable
  • Two sample $t$ test: One categorical with 2 independent groups
  • $z$ test for a single proportion: None
Dependent variable
  • Two sample $t$ test: One quantitative of interval or ratio level
  • $z$ test for a single proportion: One categorical with 2 independent groups
Null hypothesis
Two sample $t$ test: $\mu_1 = \mu_2$
  $\mu_1$ is the unknown mean in population 1; $\mu_2$ is the unknown mean in population 2
$z$ test for a single proportion: $\pi = \pi_0$
  $\pi$ is the population proportion of "successes"; $\pi_0$ is the population proportion of successes according to the null hypothesis
Alternative hypothesis
Two sample $t$ test:
  • Two sided: $\mu_1 \neq \mu_2$
  • Right sided: $\mu_1 > \mu_2$
  • Left sided: $\mu_1 < \mu_2$
$z$ test for a single proportion:
  • Two sided: $\pi \neq \pi_0$
  • Right sided: $\pi > \pi_0$
  • Left sided: $\pi < \pi_0$
Assumptions
Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
  • The group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
$z$ test for a single proportion:
  • The sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
    • Significance test: $N \times \pi_0$ and $N \times (1 - \pi_0)$ are each larger than 10
    • Regular (large sample) 90%, 95%, or 99% confidence interval: the number of successes and the number of failures in the sample are each 15 or more
    • Plus four 90%, 95%, or 99% confidence interval: total sample size is 10 or more
  • The sample is a simple random sample from the population. That is, observations are independent of one another
  If the sample size is too small for $z$ to be approximately normally distributed, the binomial test for a single proportion should be used.
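The sample size rules of thumb above are easy to check programmatically. A minimal sketch in Python (the function names are illustrative, not from any library):

```python
def z_test_ok(N, pi_0):
    """Rule of thumb for the significance test: N * pi_0 and
    N * (1 - pi_0) must each be larger than 10."""
    return N * pi_0 > 10 and N * (1 - pi_0) > 10

def large_sample_ci_ok(successes, failures):
    """Rule of thumb for the regular (large sample) CI: at least
    15 successes and 15 failures in the sample."""
    return successes >= 15 and failures >= 15

def plus_four_ci_ok(N):
    """Rule of thumb for the plus four CI: total sample size >= 10."""
    return N >= 10

print(z_test_ok(100, 0.2))   # 100*0.2 = 20 and 100*0.8 = 80, both > 10
print(z_test_ok(40, 0.2))    # 40*0.2 = 8, not larger than 10
```

If `z_test_ok` returns `False`, fall back to the binomial test for a single proportion, as noted above.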
Test statistic
Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.

$z$ test for a single proportion:
$z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$
Here $p$ is the sample proportion of successes, $\dfrac{X}{N}$, where $X$ is the number of successes and $N$ is the sample size.
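Both statistics can be computed directly from summary statistics. A sketch in Python with made-up numbers (all data values below are illustrative):

```python
import math

# --- Two sample t test (equal variances assumed), from summary stats ---
n1, ybar1, s1 = 10, 5.0, 2.0   # group 1: size, mean, sd (made-up)
n2, ybar2, s2 = 12, 3.0, 2.5   # group 2

# Pooled standard deviation (see the formula in the next row)
s_p = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Standard error of ybar1 - ybar2, then the t statistic
se = s_p * math.sqrt(1 / n1 + 1 / n2)
t = (ybar1 - ybar2) / se           # df = n1 + n2 - 2 = 20

# --- z test for a single proportion ---
X, N, pi_0 = 30, 100, 0.2          # successes, sample size, H0 proportion
p = X / N
z = (p - pi_0) / math.sqrt(pi_0 * (1 - pi_0) / N)

print(round(t, 3), round(z, 3))    # 2.041 2.5
```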
Pooled standard deviation
Two sample $t$ test:
$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
Here $s_1$ and $s_2$ are the sample standard deviations in groups 1 and 2.
$z$ test for a single proportion: n.a.
Sampling distribution of the test statistic if H0 were true
Two sample $t$ test: $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom
$z$ test for a single proportion: approximately the standard normal distribution
Significant?
Two sample $t$ test: reject H0 if the $p$ value, based on the $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom, is smaller than the significance level $\alpha$ (equivalently, if the observed $t$ is at least as extreme as the critical value $t^*$)
  • Two sided: $p$ value is the area beyond $|t|$ in both tails
  • Right sided: $p$ value is the area beyond $t$ in the right tail
  • Left sided: $p$ value is the area beyond $t$ in the left tail
$z$ test for a single proportion: same procedure, with $p$ values and critical values $z^*$ taken from the standard normal distribution
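For the $z$ test, the tail areas can be computed with just the Python standard library; for the $t$ test you would use the $t_{n_1 + n_2 - 2}$ distribution instead of the standard normal. A sketch with a made-up observed statistic:

```python
from statistics import NormalDist

z = 2.5                  # observed test statistic (made-up value)
alpha = 0.05
Phi = NormalDist().cdf   # standard normal CDF

p_two   = 2 * (1 - Phi(abs(z)))  # two sided: area in both tails
p_right = 1 - Phi(z)             # right sided: area beyond z, right tail
p_left  = Phi(z)                 # left sided: area below z, left tail

print(p_two < alpha)             # True: reject H0 two sided at the 5% level
```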
Confidence interval
Two sample $t$ test: $C\%$ confidence interval for $\mu_1 - \mu_2$
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

$z$ test for a single proportion: approximate $C\%$ confidence interval for $\pi$
Regular (large sample):
  • $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$
    where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval)
With plus four method:
  • $p_{plus} \pm z^* \times \sqrt{\dfrac{p_{plus}(1 - p_{plus})}{N + 4}}$
    where $p_{plus} = \dfrac{X + 2}{N + 4}$ and $z^*$ is as above
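The three interval formulas can be sketched in Python with made-up numbers; the critical value $t^* = 2.086$ for df = 20 is the 95% value from the example in the text:

```python
import math

# --- C% CI for mu_1 - mu_2 (t test, equal variances assumed) ---
n1, n2 = 10, 12
diff = 2.0                     # ybar1 - ybar2 (made-up)
s_p = math.sqrt(5.2375)        # pooled sd (made-up)
t_star = 2.086                 # 95% critical value, df = n1 + n2 - 2 = 20
m = t_star * s_p * math.sqrt(1 / n1 + 1 / n2)
ci_t = (diff - m, diff + m)

# --- Approximate C% CI for pi (z test) ---
X, N, z_star = 30, 100, 1.96
p = X / N
m = z_star * math.sqrt(p * (1 - p) / N)
ci_z = (p - m, p + m)          # regular (large sample) interval

p_plus = (X + 2) / (N + 4)     # plus four method
m = z_star * math.sqrt(p_plus * (1 - p_plus) / (N + 4))
ci_plus = (p_plus - m, p_plus + m)
```

With these numbers `ci_t` contains 0, so the two sided test at the 5% level would not reject $\mu_1 = \mu_2$; this is the "CI as significance test" reading mentioned above.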
Effect size
Two sample $t$ test: Cohen's $d$, the standardized difference between the mean in group 1 and the mean in group 2: $$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$ It indicates how many pooled standard deviations $s_p$ the two sample means are removed from each other.
$z$ test for a single proportion: n.a.
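Cohen's $d$ falls out directly from the quantities already computed for the test statistic; a sketch with made-up summary statistics:

```python
import math

n1, ybar1, s1 = 10, 5.0, 2.0   # made-up summary statistics
n2, ybar2, s2 = 12, 3.0, 2.5

s_p = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (ybar1 - ybar2) / s_p      # standardized mean difference

print(round(d, 3))             # 0.874
```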
Visual representation
Two sample $t$ test: [figure: two sample $t$ test - equal variances assumed]
$z$ test for a single proportion: n.a.
Equivalent to
Two sample $t$ test:
One way ANOVA with an independent variable with 2 levels ($I$ = 2):
  • two sided two sample $t$ test is equivalent to the ANOVA $F$ test when $I$ = 2
  • two sample $t$ test is equivalent to the $t$ test for a contrast when $I$ = 2
  • two sample $t$ test is equivalent to the $t$ test for multiple comparisons when $I$ = 2

OLS regression with one categorical independent variable with 2 levels:
  • two sided two sample $t$ test is equivalent to the $F$ test for the regression model
  • two sample $t$ test is equivalent to the $t$ test for the regression coefficient $\beta_1$

$z$ test for a single proportion:
  • When testing two sided: equivalent to the goodness of fit test with a categorical variable with 2 levels
  • When $N$ is large, the $p$ value from the $z$ test for a single proportion approaches the $p$ value from the binomial test for a single proportion. The $z$ test for a single proportion is simply a large sample approximation of the binomial test.
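The first equivalence is easy to verify numerically: for two groups, the one way ANOVA $F$ statistic equals the square of the two sample $t$ statistic, and the two sided $p$ values coincide. A sketch using scipy (assumed available; `ttest_ind` pools the variances by default, i.e. equal variances assumed):

```python
from scipy import stats

group1 = [4.1, 5.3, 6.0, 5.5, 4.8]   # made-up scores
group2 = [3.2, 4.0, 3.6, 4.4, 3.1]

t_res = stats.ttest_ind(group1, group2)   # equal_var=True by default
f_res = stats.f_oneway(group1, group2)    # one way ANOVA with I = 2

print(abs(t_res.statistic**2 - f_res.statistic) < 1e-10)  # True: t^2 = F
print(abs(t_res.pvalue - f_res.pvalue) < 1e-10)           # True: same p
```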
Example context
Two sample $t$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal for men and women.
$z$ test for a single proportion: Is the proportion of smokers amongst office workers different from $\pi_0 = .2$? Use the normal approximation for the sampling distribution of the test statistic.
SPSS
Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Click Continue and then OK
$z$ test for a single proportion:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
If computation time allows, SPSS will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution.
Jamovi
Two sample $t$ test:
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Student's (selected by default)
  • Under Hypothesis, select your alternative hypothesis
$z$ test for a single proportion:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
Jamovi will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution.