# Two sample t test - equal variances not assumed - overview

This page offers a structured overview of the two sample $t$ test with equal variances not assumed, compared side by side with two related methods.

The three methods are:

• Two sample $t$ test - equal variances not assumed (Welch's $t$ test)
• Two sample $t$ test - equal variances assumed (Student's $t$ test)
• $z$ test for the difference between two proportions
## Independent/grouping variable

All three methods: one categorical variable with 2 independent groups.
## Dependent variable

• Both $t$ tests: one quantitative variable of interval or ratio level
• $z$ test: one categorical variable with 2 levels
## Null hypothesis

• Both $t$ tests: H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
• $z$ test: H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
## Alternative hypothesis

Both $t$ tests:
• H1 two sided: $\mu_1 \neq \mu_2$
• H1 right sided: $\mu_1 > \mu_2$
• H1 left sided: $\mu_1 < \mu_2$

$z$ test:
• H1 two sided: $\pi_1 \neq \pi_2$
• H1 right sided: $\pi_1 > \pi_2$
• H1 left sided: $\pi_1 < \pi_2$
## Assumptions

Welch's $t$ test:
• Within each population, the scores on the dependent variable are normally distributed
• Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Student's $t$ test:
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
• Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

$z$ test:
• Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
  • Significance test: number of successes and number of failures are each 5 or more in both sample groups
  • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
  • Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
• Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
## Test statistic

Welch's $t$ test:

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$

Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
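As a check on the formula, here is a minimal sketch in Python; the summary statistics are made up for illustration:

```python
import math

# Hypothetical summary statistics for two independent groups
ybar1, s1, n1 = 5.2, 2.1, 20   # mean, SD, and size of group 1
ybar2, s2, n2 = 4.1, 3.0, 25   # mean, SD, and size of group 2

# Standard error of ybar1 - ybar2, without pooling the variances
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# t indicates how many standard errors the observed difference
# in means is removed from 0
t = (ybar1 - ybar2) / se
print(t)  # about 1.44
```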
Student's $t$ test:

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$

Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
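A matching sketch for the pooled statistic, reusing the same hypothetical numbers:

```python
import math

# Same hypothetical summary statistics as above
ybar1, s1, n1 = 5.2, 2.1, 20
ybar2, s2, n2 = 4.1, 3.0, 25

# Pooled standard deviation (formula given further below)
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# t statistic under the equal-variances assumption
t = (ybar1 - ybar2) / (sp * math.sqrt(1 / n1 + 1 / n2))
print(t)  # about 1.39
```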
$z$ test:

$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$

Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.

Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$.
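The same kind of sketch for the proportions test, again with made-up counts:

```python
import math

# Hypothetical counts: X successes out of n observations per group
X1, n1 = 30, 100
X2, n2 = 45, 120

p1 = X1 / n1               # sample proportion of successes, group 1
p2 = X2 / n2               # sample proportion of successes, group 2
p = (X1 + X2) / (n1 + n2)  # total proportion of successes

# z statistic, using the total proportion p in the standard error
z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
print(z)  # about -1.17
```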
## Pooled standard deviation

n.a. for Welch's $t$ test and the $z$ test. For Student's $t$ test:

$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
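For example, with the hypothetical values used in the sketches above ($s_1 = 2.1$, $n_1 = 20$, $s_2 = 3.0$, $n_2 = 25$): $s_p = \sqrt{\dfrac{19 \times 2.1^2 + 24 \times 3.0^2}{43}} = \sqrt{\dfrac{299.79}{43}} \approx 2.64$.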
## Sampling distribution of the test statistic if H0 were true

Welch's $t$ test: approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to

$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$

or

$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$.

The first definition of $k$ is used by computer programs; the second definition is often used for hand calculations.
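Both definitions are easy to compare directly; a sketch with the same hypothetical numbers as before:

```python
# Same hypothetical summary statistics as above
s1, n1 = 2.1, 20
s2, n2 = 3.0, 25

v1, v2 = s1**2 / n1, s2**2 / n2

# First definition: the approximation used by software
k_software = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Second definition: conservative rule for hand calculations
k_hand = min(n1 - 1, n2 - 1)

print(k_software, k_hand)  # about 42.3, and 19
```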
Student's $t$ test: the $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom.

$z$ test: approximately the standard normal distribution.
## Significant?

Both $t$ tests (using the applicable $t$ distribution):
• Two sided: reject H0 if the two sided $p$ value is smaller than the significance level $\alpha$; equivalently, if $|t|$ is at or beyond the critical value that cuts off $\alpha / 2$ in each tail
• Right sided: reject H0 if the right sided $p$ value is smaller than $\alpha$; equivalently, if $t$ is at or beyond the critical value that cuts off $\alpha$ in the right tail
• Left sided: reject H0 if the left sided $p$ value is smaller than $\alpha$; equivalently, if $t$ is at or below the critical value that cuts off $\alpha$ in the left tail

$z$ test (using the standard normal distribution):
• Two sided: reject H0 if the two sided $p$ value is smaller than $\alpha$; equivalently, if $|z|$ is at or beyond the critical value that cuts off $\alpha / 2$ in each tail
• Right sided: reject H0 if the right sided $p$ value is smaller than $\alpha$; equivalently, if $z$ is at or beyond the critical value that cuts off $\alpha$ in the right tail
• Left sided: reject H0 if the left sided $p$ value is smaller than $\alpha$; equivalently, if $z$ is at or below the critical value that cuts off $\alpha$ in the left tail
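To see how the one sided and two sided $p$ values relate, a small sketch (the $t$ value and Welch degrees of freedom are the hypothetical results from the sketches above; scipy is assumed to be available):

```python
from scipy import stats

# Hypothetical t value and Welch degrees of freedom from above
t, k = 1.44, 42.3

p_two = 2 * stats.t.sf(abs(t), df=k)  # two sided p value
p_right = stats.t.sf(t, df=k)         # right sided p value
p_left = stats.t.cdf(t, df=k)         # left sided p value
print(p_two, p_right, p_left)
```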
## $C\%$ confidence interval

Welch's $t$ test (approximate $C\%$ confidence interval for $\mu_1 - \mu_2$):

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$

where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

Student's $t$ test ($C\%$ confidence interval for $\mu_1 - \mu_2$):

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$

where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

$z$ test (approximate $C\%$ confidence interval for $\pi_1 - \pi_2$):

Regular (large sample):

$(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$

where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).

With plus four method:

$(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$

where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).
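As an illustration, a minimal sketch of the Welch-style interval in Python, reusing the hypothetical summary statistics from above (scipy is assumed to be available):

```python
import math
from scipy import stats

# Same hypothetical summary statistics as above
ybar1, s1, n1 = 5.2, 2.1, 20
ybar2, s2, n2 = 4.1, 3.0, 25

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
k = (s1**2 / n1 + s2**2 / n2)**2 / (
    (s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))

# Critical value t*: area C/100 between -t* and t* under t_k
C = 95
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, df=k)

diff = ybar1 - ybar2
print(diff - t_star * se, diff + t_star * se)
```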
## Effect size

n.a. for Welch's $t$ test and the $z$ test. For Student's $t$ test:

Cohen's $d$: standardized difference between the mean in group 1 and in group 2:

$d = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p}$

Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.

## Visual representation

Figures for the two $t$ tests (not reproduced here); n.a. for the $z$ test.

## Equivalent to

Welch's $t$ test: n.a.

Student's $t$ test: one way ANOVA with an independent variable with 2 levels ($I$ = 2):
• two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
• two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
• two sample $t$ test is equivalent to $t$ test for multiple comparisons when $I$ = 2

Also OLS regression with one categorical independent variable with 2 levels:
• two sided two sample $t$ test is equivalent to $F$ test for the regression model
• two sample $t$ test is equivalent to $t$ test for the regression coefficient $\beta_1$

$z$ test: when testing two sided, equivalent to the chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.

## Example context

Both $t$ tests: Is the average mental health score different between men and women? For the test with equal variances assumed, also assume that in the population, the standard deviation of mental health scores is equal amongst men and women.

$z$ test: Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.

## SPSS

Both $t$ tests (the same steps apply to each): Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK

$z$ test: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs...
• Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK

## Jamovi

Welch's $t$ test: T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Welch's
• Under Hypothesis, select your alternative hypothesis

Student's $t$ test: T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Student's (selected by default)
• Under Hypothesis, select your alternative hypothesis

$z$ test: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association
• Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
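Outside SPSS and Jamovi, the same three tests can be run in Python; a sketch using scipy and statsmodels (both assumed to be installed), with made-up data for the example contexts above:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Made-up mental health scores for two independent groups
rng = np.random.default_rng(0)
men = rng.normal(5.0, 2.0, size=40)
women = rng.normal(4.5, 2.5, size=45)

# Two sample t test, equal variances not assumed (Welch's)
print(stats.ttest_ind(men, women, equal_var=False))

# Two sample t test, equal variances assumed (Student's)
print(stats.ttest_ind(men, women, equal_var=True))

# z test for the difference between two proportions, e.g.
# 30 smokers out of 100 men vs 45 smokers out of 120 women
print(proportions_ztest(count=[30, 45], nobs=[100, 120]))
```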