Two sample t test - equal variances not assumed - overview
This page offers a structured overview of the two sample $t$ test - equal variances not assumed (the Welch test), side by side with three related methods: the $z$ test for the difference between two proportions, the Marginal Homogeneity test / Stuart-Maxwell test, and the sign test. For each method, the entries below list the variables involved, the hypotheses, the assumptions, the test statistic and its sampling distribution, and how to run the test in SPSS and Jamovi.
**Independent/grouping variable**

- Two sample $t$ test: One categorical with 2 independent groups
- $z$ test: One categorical with 2 independent groups
- Marginal Homogeneity test: 2 paired groups
- Sign test: 2 paired groups
**Dependent variable**

- Two sample $t$ test: One quantitative of interval or ratio level
- $z$ test: One categorical with 2 levels ('success' / 'failure')
- Marginal Homogeneity test: One categorical with $J$ levels ($J \geqslant 2$)
- Sign test: One of ordinal level
**Null hypothesis**

- Two sample $t$ test: H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
- $z$ test: H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
- Marginal Homogeneity test: H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group $=$ $\pi_j$ for the second paired group. Here $\pi_j$ is the population proportion in category $j$.
- Sign test: H0: P(difference score $> 0$) $=$ P(difference score $< 0$); within a pair, a positive difference is as likely as a negative one.
**Alternative hypothesis**

- Two sample $t$ test: H1 two sided: $\mu_1 \neq \mu_2$. H1 right sided: $\mu_1 > \mu_2$. H1 left sided: $\mu_1 < \mu_2$.
- $z$ test: H1 two sided: $\pi_1 \neq \pi_2$. H1 right sided: $\pi_1 > \pi_2$. H1 left sided: $\pi_1 < \pi_2$.
- Marginal Homogeneity test: H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group.
- Sign test: H1 two sided: P(difference score $> 0$) $\neq$ P(difference score $< 0$). H1 right sided: P(difference score $> 0$) $>$ P(difference score $< 0$). H1 left sided: P(difference score $> 0$) $<$ P(difference score $< 0$).
**Assumptions**

- Two sample $t$ test: Within each population, the scores on the dependent variable are normally distributed (this matters less as the sample sizes grow); the two samples are independent simple random samples from their respective populations.
- $z$ test: The two samples are independent simple random samples, and both are large enough for the normal approximation to hold (a common rule of thumb: at least 5 'successes' and 5 'failures' in each group).
- Marginal Homogeneity test: The sample of pairs is a simple random sample from the population of pairs, and it is large enough for the chi-squared approximation to hold.
- Sign test: The sample of pairs is a simple random sample from the population of pairs.
**Test statistic**

- Two sample $t$ test: $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$; the $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
- $z$ test: $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$. Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$.
- Marginal Homogeneity test: Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand.
- Sign test: $W =$ the number of difference scores larger than 0. (A worked computation of these statistics follows this list.)
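As a concrete illustration, below is a minimal sketch in Python (NumPy and SciPy assumed; all data and counts are made up, not taken from the text) that computes the Welch $t$, the two-proportion $z$, and the sign test $W$ by hand, using `scipy.stats.ttest_ind` with `equal_var=False` as a cross-check for the first.

```python
# Minimal sketch: computing the test statistics above on made-up data.
# Assumes NumPy and SciPy; all values are illustrative.
import numpy as np
from scipy import stats

# --- Two sample t test, equal variances not assumed (Welch) ---
y1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])   # hypothetical group 1 scores
y2 = np.array([3.9, 4.2, 5.1, 4.0, 4.4])        # hypothetical group 2 scores
n1, n2 = len(y1), len(y2)
s2_1, s2_2 = y1.var(ddof=1), y2.var(ddof=1)      # sample variances
se = np.sqrt(s2_1 / n1 + s2_2 / n2)              # standard error of ybar1 - ybar2
t = (y1.mean() - y2.mean()) / se                 # numerator minus 0 under H0
print(t, stats.ttest_ind(y1, y2, equal_var=False).statistic)  # should match

# --- z test for the difference between two proportions ---
x1, x2 = 30, 18                                  # hypothetical success counts
m1, m2 = 100, 90                                 # group sizes
p1, p2 = x1 / m1, x2 / m2
p = (x1 + x2) / (m1 + m2)                        # pooled proportion under H0
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / m1 + 1 / m2))
print(z)

# --- Sign test statistic W ---
before = np.array([5.0, 6.1, 4.8, 5.5, 6.0, 5.2])  # hypothetical paired scores
after = np.array([5.4, 6.0, 5.6, 5.9, 6.5, 5.8])
W = int(np.sum(after - before > 0))              # number of positive differences
print(W)
```

Note that the pooled proportion $p$ appears only in the test statistic; the large-sample confidence interval further down uses the unpooled standard error.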
**Sampling distribution of the test statistic if H0 were true**

- Two sample $t$ test: Approximately the $t$ distribution with $k$ degrees of freedom, with $k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$ (the Welch-Satterthwaite approximation), or $k =$ the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
- $z$ test: Approximately the standard normal distribution.
- Marginal Homogeneity test: Approximately the chi-squared distribution with $J - 1$ degrees of freedom.
- Sign test: The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $z = \dfrac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$ follows approximately the standard normal distribution if the null hypothesis were true. (Both null distributions are checked numerically in the sketch below.)
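Both definitions of $k$ and the sign test's exact-versus-approximate null distributions are easy to check numerically. A minimal sketch, under the same Python assumptions and illustrative data as above (`scipy.stats.binomtest` requires SciPy 1.7 or later):

```python
# Minimal sketch: Welch-Satterthwaite degrees of freedom, plus the exact
# binomial and normal-approximation p values for the sign test.
# Assumes NumPy and SciPy >= 1.7; all values are illustrative.
import numpy as np
from scipy import stats

y1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])
y2 = np.array([3.9, 4.2, 5.1, 4.0, 4.4])
n1, n2 = len(y1), len(y2)
s2_1, s2_2 = y1.var(ddof=1), y2.var(ddof=1)

# Welch-Satterthwaite approximation (the 'computer program' definition of k)
k = (s2_1 / n1 + s2_2 / n2) ** 2 / (
    (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1))
print(k, min(n1 - 1, n2 - 1))    # vs the conservative hand-calculation rule

# Sign test: exact Binomial(n, 0.5) null distribution of W ...
W, n = 8, 10                     # hypothetical: 8 positive out of 10 nonzero diffs
exact_p = stats.binomtest(W, n, p=0.5).pvalue
# ... versus the large-sample normal approximation
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
approx_p = 2 * stats.norm.sf(abs(z))
print(exact_p, approx_p)
```

With $n$ as small as 10 the two $p$ values differ noticeably, which is why the binomial table is preferred for small samples.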
**Significant?**

- Two sample $t$ test: Two sided: reject H0 if $|t| \geqslant$ the critical value $t^*$, or if the two sided $p$ value $\leqslant \alpha$. Right sided: reject H0 if $t \geqslant t^*$ (right sided $p \leqslant \alpha$). Left sided: reject H0 if $t \leqslant -t^*$ (left sided $p \leqslant \alpha$). The critical value is taken from the $t_k$ distribution. (The two sided rule is illustrated in the sketch below.)
- $z$ test: Two sided: reject H0 if $|z| \geqslant$ the critical value $z^*$, or if the two sided $p$ value $\leqslant \alpha$. Right sided: reject H0 if $z \geqslant z^*$. Left sided: reject H0 if $z \leqslant -z^*$. The critical value is taken from the standard normal distribution.
- Marginal Homogeneity test: If we denote the test statistic as $X^2$: reject H0 if $X^2 \geqslant$ the critical value under the chi-squared distribution with $J - 1$ degrees of freedom, or if the $p$ value $\leqslant \alpha$ (the chi-squared test is right sided by construction).
- Sign test: If $n$ is small, the table for the binomial distribution should be used. Two sided: reject H0 if $W$ falls in the two sided rejection region, or if the two sided $p$ value $\leqslant \alpha$; one sided tests use the corresponding one sided rejection region. If $n$ is large, the table for standard normal probabilities can be used with the standardized statistic $z$ above, following the same rules as for the $z$ test.
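For example, a minimal sketch of the two sided decision rule for the Welch test, under the same Python assumptions (the observed $t$ and the degrees of freedom here are made up):

```python
# Minimal sketch: two sided decision for the Welch t test, by critical
# value and by p value. Assumes SciPy; t and k are illustrative numbers.
from scipy import stats

alpha = 0.05
t, k = 2.31, 8.7                                # hypothetical observed t, Welch df
t_star = stats.t.ppf(1 - alpha / 2, df=k)       # two sided critical value
p_two_sided = 2 * stats.t.sf(abs(t), df=k)
print(abs(t) >= t_star, p_two_sided <= alpha)   # same decision either way
```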
**Approximate $C\%$ confidence interval**

- Two sample $t$ test (for $\mu_1 - \mu_2$): $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$, where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
- $z$ test (for $\pi_1 - \pi_2$), regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$, where the critical value $z^*$ is the value under the standard normal distribution with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). Note that this interval uses the unpooled standard error, unlike the test statistic above. (Both intervals are computed in the sketch below.)
- Marginal Homogeneity test: n.a.
- Sign test: n.a.
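A minimal sketch of both intervals, reusing the illustrative data and Python assumptions from the earlier sketches:

```python
# Minimal sketch: C% confidence intervals for mu1 - mu2 (Welch) and for
# pi1 - pi2 (large sample). Assumes NumPy and SciPy; data are illustrative.
import numpy as np
from scipy import stats

y1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])
y2 = np.array([3.9, 4.2, 5.1, 4.0, 4.4])
n1, n2 = len(y1), len(y2)
s2_1, s2_2 = y1.var(ddof=1), y2.var(ddof=1)
se = np.sqrt(s2_1 / n1 + s2_2 / n2)
k = (s2_1 / n1 + s2_2 / n2) ** 2 / (
    (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1))

C = 95
t_star = stats.t.ppf(0.5 + C / 200, df=k)        # area C/100 between -t* and t*
d = y1.mean() - y2.mean()
print(d - t_star * se, d + t_star * se)          # CI for mu1 - mu2

x1, x2, m1, m2 = 30, 18, 100, 90                 # hypothetical counts, as before
p1, p2 = x1 / m1, x2 / m2
z_star = stats.norm.ppf(0.5 + C / 200)
se_p = np.sqrt(p1 * (1 - p1) / m1 + p2 * (1 - p2) / m2)
print(p1 - p2 - z_star * se_p, p1 - p2 + z_star * se_p)  # CI for pi1 - pi2
```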
**Equivalent to**

- Two sample $t$ test: n.a.
- $z$ test: When testing two sided: the chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. (Verified numerically below.)
- Marginal Homogeneity test: n.a.
- Sign test: The two sided sign test is equivalent to the Friedman test with two paired groups.
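The two sided equivalence for the $z$ test can be checked concretely: $z^2$ equals the Pearson chi-squared statistic of the corresponding 2x2 table. A minimal sketch, with the same Python assumptions and illustrative counts as before:

```python
# Minimal sketch: z^2 equals the (uncorrected) Pearson chi-squared statistic
# for the 2x2 table, and the two sided p values coincide.
# Assumes NumPy and SciPy; the counts are illustrative.
import numpy as np
from scipy import stats

x1, x2, m1, m2 = 30, 18, 100, 90
p1, p2, p = x1 / m1, x2 / m2, (x1 + x2) / (m1 + m2)
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / m1 + 1 / m2))

table = np.array([[x1, m1 - x1], [x2, m2 - x2]])  # successes / failures per group
chi2, pval, dof, expected = stats.chi2_contingency(table, correction=False)
print(z ** 2, chi2)                               # equal up to rounding error
print(2 * stats.norm.sf(abs(z)), pval)            # identical two sided p values
```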
**Example context**

- Two sample $t$ test: Is the average mental health score different between men and women?
- $z$ test: Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
- Marginal Homogeneity test: Subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?
- Sign test: Do people tend to score higher on mental health after a mindfulness course?
**SPSS**

- Two sample $t$ test: Analyze > Compare Means > Independent-Samples T Test...
- $z$ test: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs...
- Marginal Homogeneity test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
**Jamovi**

- Two sample $t$ test: T-Tests > Independent Samples T-Test
- $z$ test: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association
- Marginal Homogeneity test: n.a.
- Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman