Two sample t test - equal variances not assumed - overview
This page offers a structured overview of the selected methods, presented side by side for comparison.
Two sample $t$ test - equal variances not assumed | $z$ test for the difference between two proportions | Friedman test | Marginal Homogeneity test / Stuart-Maxwell test | Cochran's Q test
Independent/grouping variable | Independent/grouping variable | Independent/grouping variable | Independent variable | Independent/grouping variable
One categorical with 2 independent groups | One categorical with 2 independent groups | One within subject factor ($\geq 2$ related groups) | 2 paired groups | One within subject factor ($\geq 2$ related groups)
Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable
One quantitative of interval or ratio level | One categorical with 2 independent groups | One of ordinal level | One categorical with $J$ independent groups ($J \geq 2$) | One categorical with 2 independent groups
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. | H0: $\pi_1 = \pi_2$
Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. | H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher. | H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group.
Here $\pi_j$ is the population proportion in category $j$. | H0: $\pi_1 = \pi_2 = \ldots = \pi_I$
Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I$.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$ | H1 two sided: $\pi_1 \neq \pi_2$; H1 right sided: $\pi_1 > \pi_2$; H1 left sided: $\pi_1 < \pi_2$ | H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups | H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group | H1: not all population proportions are equal
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions
Within each population, the scores on the dependent variable are normally distributed. Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2 | Sample sizes are large enough for $z$ to be approximately normally distributed. Group 1 sample is an SRS from population 1, and group 2 sample is an independent SRS from population 2 | Sample of 'blocks' (usually the subjects) is a simple random sample from the population | Sample of pairs is a simple random sample from the population of pairs | Sample of 'blocks' (usually the subjects) is a simple random sample from the population
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. | $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$ | $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated. | Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand. | If a failure is scored as 0 and a success is scored as 1:
$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$ Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores. Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
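In practice, none of these statistics need to be computed by hand. Below is a minimal Python sketch running each test in this table with scipy and statsmodels; all data values are made up purely for illustration.

```python
# Minimal sketch: running each test in this table with scipy/statsmodels.
# All data values below are made up purely for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.contingency_tables import SquareTable, cochrans_q

# Two sample t test, equal variances not assumed (Welch's t test)
y1 = np.array([4.1, 5.3, 6.2, 5.8, 4.9])
y2 = np.array([3.2, 4.4, 3.9, 4.8, 4.1, 3.7])
t, p_t = stats.ttest_ind(y1, y2, equal_var=False)

# z test for the difference between two proportions:
# X_1 = 45 successes out of n_1 = 100, X_2 = 30 out of n_2 = 90
z, p_z = proportions_ztest(count=[45, 30], nobs=[100, 90])

# Friedman test: k = 3 repeated measurements on N = 5 subjects
m1 = [2, 4, 3, 5, 4]
m2 = [3, 5, 4, 6, 5]
m3 = [1, 3, 2, 4, 2]
q_f, p_f = stats.friedmanchisquare(m1, m2, m3)

# Marginal Homogeneity / Stuart-Maxwell test: square table of paired
# categorical responses (rows = first measurement, columns = second)
pairs = np.array([[20, 5, 3],
                  [6, 18, 4],
                  [2, 7, 15]])
mh = SquareTable(pairs).homogeneity(method="stuart_maxwell")

# Cochran's Q test: 0/1 scores for k = 3 tasks, one row per subject
tasks = np.array([[1, 1, 0],
                  [1, 0, 0],
                  [1, 1, 1],
                  [0, 1, 0],
                  [1, 1, 0]])
cq = cochrans_q(tasks)  # result object with .statistic and .pvalue
```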
Sampling distribution of $t$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $Q$ if H0 were true | Sampling distribution of the test statistic if H0 were true | Sampling distribution of $Q$ if H0 were true
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to $k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$ or $k$ = the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs; the second is often used for hand calculations. | Approximately the standard normal distribution | If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.
For small samples, the exact distribution of $Q$ should be used. | Approximately the chi-squared distribution with $J - 1$ degrees of freedom | If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom
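A minimal sketch of both definitions of $k$, computed on illustrative data:

```python
# Minimal sketch of the two definitions of k (illustrative data).
import numpy as np

y1 = np.array([4.1, 5.3, 6.2, 5.8, 4.9])
y2 = np.array([3.2, 4.4, 3.9, 4.8, 4.1, 3.7])

v1 = np.var(y1, ddof=1) / len(y1)  # s_1^2 / n_1
v2 = np.var(y2, ddof=1) / len(y2)  # s_2^2 / n_2

# First definition (used by computer programs)
k_software = (v1 + v2) ** 2 / (v1**2 / (len(y1) - 1) + v2**2 / (len(y2) - 1))

# Second definition (often used for hand calculations)
k_hand = min(len(y1) - 1, len(y2) - 1)
```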
Significant? | Significant? | Significant? | Significant? | Significant?
Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^*$, or check if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^*$, or check if the left sided $p$ value is equal to or smaller than $\alpha$ | Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or check if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or check if the left sided $p$ value is equal to or smaller than $\alpha$ | If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$, or check if the $p$ value is equal to or smaller than $\alpha$ | If we denote the test statistic as $X^2$: check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$, or check if the $p$ value is equal to or smaller than $\alpha$ | If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$, or check if the $p$ value is equal to or smaller than $\alpha$
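A minimal sketch of these decision rules using scipy's distribution functions; the observed statistics and degrees of freedom below are illustrative numbers, not results from real data.

```python
# Minimal sketch of the decision rules, using scipy's distribution functions.
# Observed statistics and degrees of freedom are illustrative numbers.
from scipy import stats

alpha = 0.05

# t test, two sided: reject H0 if t is at least as extreme as t*,
# or equivalently if the two sided p value is at most alpha
t_obs, df_t = 2.31, 11.4
t_crit = stats.t.ppf(1 - alpha / 2, df_t)
p_two_sided = 2 * stats.t.sf(abs(t_obs), df_t)
reject_t = abs(t_obs) >= t_crit       # same decision as p_two_sided <= alpha

# Chi-squared based tests (Friedman, Stuart-Maxwell, Cochran's Q):
# reject H0 if X^2 is at least the critical value X^2*,
# or equivalently if the right tail p value is at most alpha
x2_obs, df_x2 = 7.40, 2
x2_crit = stats.chi2.ppf(1 - alpha, df_x2)
p_value = stats.chi2.sf(x2_obs, df_x2)
reject_x2 = x2_obs >= x2_crit         # same decision as p_value <= alpha
```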
Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$ | Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ | n.a. | n.a. | n.a.
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. | Regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval) | - | - | -
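A minimal sketch of both intervals for $C$ = 95, computed on illustrative data:

```python
# Minimal sketch of both 95% confidence intervals (illustrative data).
import numpy as np
from scipy import stats

# Welch interval for mu_1 - mu_2
y1 = np.array([4.1, 5.3, 6.2, 5.8, 4.9])
y2 = np.array([3.2, 4.4, 3.9, 4.8, 4.1, 3.7])
v1 = np.var(y1, ddof=1) / len(y1)
v2 = np.var(y2, ddof=1) / len(y2)
se = np.sqrt(v1 + v2)
k = (v1 + v2) ** 2 / (v1**2 / (len(y1) - 1) + v2**2 / (len(y2) - 1))
t_star = stats.t.ppf(0.975, k)        # area C/100 = 0.95 between -t* and t*
diff = y1.mean() - y2.mean()
ci_means = (diff - t_star * se, diff + t_star * se)

# Regular (large sample) interval for pi_1 - pi_2
x1, n1, x2, n2 = 45, 100, 30, 90
p1, p2 = x1 / n1, x2 / n2
se_p = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z_star = stats.norm.ppf(0.975)        # 1.96 for a 95% interval
ci_props = (p1 - p2 - z_star * se_p, p1 - p2 + z_star * se_p)
```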
Visual representation | n.a. | n.a. | n.a. | n.a.
(figure: visual representation of the two sample $t$ test) | - | - | - | -
n.a. | Equivalent to | n.a. | n.a. | Equivalent to
- | When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels (see the sketch below). | - | - | Friedman test, with a categorical dependent variable consisting of two independent groups.
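This two sided equivalence can be checked numerically: on a 2x2 table, the Pearson chi-squared statistic equals the square of the two-proportion $z$ statistic, so the $p$ values coincide. A minimal Python sketch with made-up counts:

```python
# Minimal sketch of the equivalence: on a 2x2 table, the Pearson chi-squared
# statistic equals the square of the two-proportion z statistic, and the
# two sided p values coincide (made-up counts).
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

# 45/100 'successes' in group 1, 30/90 in group 2
table = np.array([[45, 55],
                  [30, 60]])
chi2, p_chi2, df, expected = chi2_contingency(table, correction=False)
z, p_z = proportions_ztest(count=[45, 30], nobs=[100, 90])

print(np.isclose(chi2, z ** 2))   # True: X^2 = z^2
print(np.isclose(p_chi2, p_z))    # True: two sided p values agree
```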
Example context | Example context | Example context | Example context | Example context
Is the average mental health score different between men and women? | Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. | Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)? | Subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types of mayonnaise they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best? | Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?
SPSS | SPSS | SPSS | SPSS | SPSS
Analyze > Compare Means > Independent-Samples T Test... | SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Jamovi | Jamovi | Jamovi | n.a. | Jamovi
T-Tests > Independent Samples T-Test | Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association | ANOVA > Repeated Measures ANOVA - Friedman | - | Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to: ANOVA > Repeated Measures ANOVA - Friedman