Friedman test - overview
This page offers structured overviews of one or more selected methods.
Friedman test | Regression (OLS) | Two sample $t$ test - equal variances not assumed | Two sample $t$ test - equal variances not assumed |
---|---|---|---|
Independent/grouping variable | Independent variables | Independent/grouping variable | Independent/grouping variable |
One within subject factor ($\geq 2$ related groups) | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | One categorical with 2 independent groups | One categorical with 2 independent groups |
Dependent variable | Dependent variable | Dependent variable | Dependent variable |
One of ordinal level | One quantitative of interval or ratio level | One quantitative of interval or ratio level | One quantitative of interval or ratio level |
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups. Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher. | $F$ test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$, i.e. all population regression coefficients are zero. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. |
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups | $F$ test for the complete regression model: H1: not all population regression coefficients are zero. | H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$ | H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$ |
Assumptions | Assumptions | Assumptions | Assumptions |
The sample of 'blocks' (usually the subjects) is a random sample from the population. | In the population, the residuals are normally distributed at each combination of values of the independent variables, with the same standard deviation $\sigma$ for each combination (homoscedasticity); the relationship between the independent variables and the mean of the dependent variable is linear; the residuals are independent of one another. | Within each population, the scores on the dependent variable are normally distributed; the group 1 sample is a simple random sample from population 1, and the group 2 sample is an independent simple random sample from population 2 (observations are independent within and between groups). | Within each population, the scores on the dependent variable are normally distributed; the group 1 sample is a simple random sample from population 1, and the group 2 sample is an independent simple random sample from population 2 (observations are independent within and between groups). |
Test statistic | Test statistic | Test statistic | Test statistic |
$Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$ Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated. (A worked computation is sketched below the table.) | $F$ test for the complete regression model: $F = \dfrac{R^2 / K}{(1 - R^2) / (N - K - 1)}$, where $R^2$ is the proportion of variance in the dependent variable explained by the $K$ independent variables together and $N$ is the sample size. Note: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$. | $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$ Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. (A worked computation is sketched below the table.) | $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$ Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. (A worked computation is sketched below the table.) |
n.a. | Sample standard deviation of the residuals $s$ | n.a. | n.a. |
- | $s = \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}} = \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}} = \sqrt{\mbox{mean square error}}$ (A small computational sketch is given below the table.) | - | - |
Sampling distribution of $Q$ if H0 were true | Sampling distribution of $F$ and of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true |
If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom. For small samples, the exact distribution of $Q$ should be used. | Sampling distribution of $F$: the $F$ distribution with $K$ and $N - K - 1$ degrees of freedom. Sampling distribution of $t$: the $t$ distribution with $N - K - 1$ degrees of freedom. | Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to $k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$ or $k$ = the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs; the second definition is often used for hand calculations. | Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to $k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$ or $k$ = the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs; the second definition is often used for hand calculations. |
Significant? | Significant? | Significant? | Significant? |
If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: reject H0 if $X^2 \geq$ the critical value, or equivalently, if the $p$ value is smaller than the significance level $\alpha$. | $F$ test: reject H0 if $F \geq$ the critical value, or equivalently, if the $p$ value is smaller than $\alpha$. | Two sided: reject H0 if $\lvert t \rvert \geq$ the critical value $t^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $t \geq t^*$. Left sided: reject H0 if $t \leq -t^*$. | Two sided: reject H0 if $\lvert t \rvert \geq$ the critical value $t^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $t \geq t^*$. Left sided: reject H0 if $t \leq -t^*$. |
n.a. | $C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$ | Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$ | Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$ |
- | Confidence interval for $\beta_k$: $b_k \pm t^* \times SE_{b_k}$, where $b_k$ is the sample regression coefficient, $SE_{b_k}$ is its standard error, and the critical value $t^*$ is taken from the $t$ distribution with $N - K - 1$ degrees of freedom. | $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$ where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. (See the sketch below the table.) | $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$ where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. (See the sketch below the table.) |
n.a. | Effect size | n.a. | n.a. |
- | Complete model: proportion of variance in the dependent variable explained by the independent variables together, $R^2$. | - | - |
Example context | Example context | Example context | Example context |
Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)? | Can mental health be predicted from physical health, economic class, and gender? | Is the average mental health score different between men and women? | Is the average mental health score different between men and women? |
SPSS | SPSS | SPSS | SPSS |
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... | Analyze > Regression > Linear... | Analyze > Compare Means > Independent-Samples T Test... | Analyze > Compare Means > Independent-Samples T Test... |
Jamovi | Jamovi | Jamovi | Jamovi |
ANOVA > Repeated Measures ANOVA - Friedman | Regression > Linear Regression | T-Tests > Independent Samples T-Test | T-Tests > Independent Samples T-Test |
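Below is a minimal, hypothetical sketch (not part of the original overview) of how the Friedman test statistic $Q$ from the table can be computed in Python. The scores, the number of subjects and all variable names are invented for illustration; SciPy's built-in `friedmanchisquare` is used only as a cross-check, and it applies a tie correction, so it can deviate slightly from the simple formula when ties are present.

```python
import numpy as np
from scipy import stats

# Hypothetical scores: N = 5 subjects (blocks), k = 3 measurement points.
scores = np.array([
    [31, 27, 24],
    [28, 25, 20],
    [35, 30, 28],
    [22, 21, 19],
    [30, 24, 26],
])

N, k = scores.shape

# Rank the k scores within each block (row); rank 1 = lowest score.
ranks = stats.rankdata(scores, axis=1)
R = ranks.sum(axis=0)  # R_i: sum of ranks in group i

# Q = 12 / (N * k * (k + 1)) * sum(R_i^2) - 3 * N * (k + 1)
Q = 12 / (N * k * (k + 1)) * np.sum(R ** 2) - 3 * N * (k + 1)

df = k - 1
p_value = stats.chi2.sf(Q, df)  # large-N chi-squared approximation
print(f"Q = {Q:.3f}, df = {df}, p = {p_value:.4f}")

# Cross-check with SciPy (expects one array per related group).
print(stats.friedmanchisquare(*scores.T))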
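The two sample $t$ test formulas can be sketched in the same spirit: the test statistic, the Welch degrees of freedom $k$, and the approximate confidence interval for $\mu_1 - \mu_2$. This is again a hypothetical example; the two groups of mental health scores are invented, and SciPy's `ttest_ind` with `equal_var=False` serves only as a check on the hand computation.

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores for two independent groups.
group1 = np.array([72, 68, 75, 80, 66, 71, 77, 74])
group2 = np.array([70, 64, 69, 73, 61, 67, 72, 65, 68])

n1, n2 = len(group1), len(group2)
m1, m2 = group1.mean(), group2.mean()
s1_sq = group1.var(ddof=1)  # sample variance of group 1
s2_sq = group2.var(ddof=1)  # sample variance of group 2

se = np.sqrt(s1_sq / n1 + s2_sq / n2)  # standard error of ybar1 - ybar2
t = (m1 - m2) / se                     # test statistic

# Welch-Satterthwaite degrees of freedom (the 'computer program' definition of k).
k = se ** 4 / ((s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1))

p_two_sided = 2 * stats.t.sf(abs(t), k)
print(f"t = {t:.3f}, df = {k:.2f}, two sided p = {p_two_sided:.4f}")

# Approximate 95% confidence interval for mu1 - mu2 (area C/100 = 0.95 between -t* and t*).
t_star = stats.t.ppf(0.975, k)
print(f"95% CI: ({(m1 - m2) - t_star * se:.2f}, {(m1 - m2) + t_star * se:.2f})")

# Cross-check with SciPy's Welch test.
print(stats.ttest_ind(group1, group2, equal_var=False))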
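The sample standard deviation of the residuals $s$ from the regression column can likewise be computed from any fitted OLS model. The data below are randomly generated and the coefficient values are arbitrary; only the formula $s = \sqrt{\mbox{SSE} / (N - K - 1)}$ is taken from the table.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 2

# Invented data: K independent variables and a dependent variable y.
X = rng.normal(size=(N, K))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=2.0, size=N)

# Fit OLS via least squares on a design matrix with an intercept column.
X_design = np.column_stack([np.ones(N), X])
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
y_hat = X_design @ beta_hat

sse = np.sum((y - y_hat) ** 2)   # sum of squares error
s = np.sqrt(sse / (N - K - 1))   # residual standard deviation
print(f"s = {s:.3f}")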