One sample Wilcoxon signed-rank test - overview
This page offers a structured overview of the one sample Wilcoxon signed-rank test, side by side with three related methods: the Mann-Whitney-Wilcoxon test, the goodness of fit test, and the Friedman test.
One sample Wilcoxon signed-rank test | Mann-Whitney-Wilcoxon test | Goodness of fit test | Friedman test
---|---|---|---
Independent variable | Independent/grouping variable | Independent variable | Independent/grouping variable
None | One categorical with 2 independent groups | None | One within subject factor ($\geq 2$ related groups)
Dependent variable | Dependent variable | Dependent variable | Dependent variable
One of ordinal level | One of ordinal level | One categorical with $J$ independent groups ($J \geq 2$) | One of ordinal level
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | If the dependent variable is measured on a continuous scale and the shape of its distribution is the same in both populations, the null hypothesis can be formulated as:
H0: the population median for group 1 is equal to the population median for group 2
A more general formulation, which does not require these conditions, is:
H0: the population scores in group 1 are not systematically higher or lower than the population scores in group 2
| H0: the population proportions for the $J$ categories are $\pi_1, \ldots, \pi_J$, where $\pi_j$ denotes the hypothesized proportion for category $j$
| H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1 two sided: $m \neq m_0$
H1 right sided: $m > m_0$
H1 left sided: $m < m_0$
| If the dependent variable is measured on a continuous scale and the shape of its distribution is the same in both populations, the alternative hypothesis can be formulated as:
H1 two sided: the population median for group 1 is not equal to the population median for group 2
H1 right sided: the population median for group 1 is larger than the population median for group 2
H1 left sided: the population median for group 1 is smaller than the population median for group 2
A more general formulation is:
H1: the population scores in group 1 are systematically higher or lower than the population scores in group 2
| H1: the population proportions are not all equal to the hypothesized proportions $\pi_1, \ldots, \pi_J$
| H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups
Assumptions | Assumptions | Assumptions | Assumptions
The population distribution of the scores is symmetric around the median; the sample is a simple random sample from the population, so the observations are independent of one another.
| The group 1 sample is a simple random sample from population 1, and the group 2 sample is an independent simple random sample from population 2; within and between groups, observations are independent of one another.
| The sample is a simple random sample from the population, and the sample size is large enough for $X^2$ to follow approximately the chi-squared distribution (common rule of thumb: all expected cell counts are at least 5).
| The sample of 'blocks' (usually the subjects) is a simple random sample from the population; the blocks are independent of one another.
Test statistic | Test statistic | Test statistic | Test statistic
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
1. For every subject, compute the difference between the observed score and $m_0$.
2. Remove the differences that are equal to 0; the number of remaining differences is $N_r$.
3. Rank the absolute values of the remaining differences from smallest to largest; tied values get the average of the ranks involved.
4. $W_1$ is the sum of the ranks of the positive differences. $W_2$ is the sum of all signed ranks, i.e. each rank is given the sign of its difference and the signed ranks are added up.
| Two different types of test statistics can be used; both will result in the same test outcome. The first is the Wilcoxon rank sum statistic $W$: the sum of the ranks of the scores in group 1, where all $n_1 + n_2$ scores are ranked together ($n_1$ and $n_2$ are the sample sizes of group 1 and group 2). The second is the Mann-Whitney $U$ statistic:
$$U = W - \frac{n_1(n_1 + 1)}{2}$$
Note: we could just as well base W and U on group 2. This would only 'flip' the right and left sided alternative hypotheses. Also, tables with critical values for $U$ are often based on the smaller of $U$ for group 1 and for group 2. | $X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells. | $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated.
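To make the computation of these statistics concrete, the sketch below works through them in Python with NumPy and SciPy (an addition; the page itself only covers SPSS and jamovi). The mental health scores, the value $m_0 = 50$, and the small repeated-measures data set are all hypothetical.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon, friedmanchisquare

# --- One sample Wilcoxon signed-rank statistics W1 and W2 (hypothetical data) ---
m0 = 50                                      # median under H0
scores = np.array([53, 44, 61, 55, 49, 58, 64, 52, 46, 60])
diffs = scores - m0
diffs = diffs[diffs != 0]                    # drop zero differences; N_r = len(diffs)
ranks = rankdata(np.abs(diffs))              # rank the absolute differences (ties get mean ranks)
W1 = ranks[diffs > 0].sum()                  # sum of ranks of the positive differences
W2 = np.sum(np.sign(diffs) * ranks)          # sum of the signed ranks
print("W1 =", W1, " W2 =", W2)               # here W1 = 44, W2 = 33

# scipy's wilcoxon() runs the same test; with the default two sided alternative it
# reports the smaller of the positive- and negative-rank sums as its statistic
print(wilcoxon(scores - m0))

# --- Friedman Q (hypothetical data: N = 5 subjects, k = 3 measurement points) ---
X = np.array([[31, 27, 24],
              [29, 25, 26],
              [35, 30, 28],
              [28, 27, 25],
              [33, 29, 27]])
N, k = X.shape
R = rankdata(X, axis=1).sum(axis=0)          # rank within each block, then sum ranks per group
Q = 12 / (N * k * (k + 1)) * np.sum(R**2) - 3 * N * (k + 1)
print("Q =", Q)                              # here Q = 8.4
print(friedmanchisquare(*X.T))               # same statistic when there are no ties
```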
Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $W$ and of $U$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $Q$ if H0 were true
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here
$$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$
$$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$
follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here
$$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_2}{\sigma_{W_2}}$$
follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
| Sampling distribution of $W$:
If $n_1$ and $n_2$ are large, $W$ is approximately normally distributed with mean $\mu_W$ and standard deviation $\sigma_W$ if the null hypothesis were true. Here
$$\mu_W = \frac{n_1(n_1 + n_2 + 1)}{2}$$
$$\sigma_W = \sqrt{\frac{n_1 n_2(n_1 + n_2 + 1)}{12}}$$
Hence, for large samples, the standardized test statistic
$$z = \frac{W - \mu_W}{\sigma_W}$$
follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $U$:
If $n_1$ and $n_2$ are large, $U$ is approximately normally distributed with mean $\mu_U = \dfrac{n_1 n_2}{2}$ and standard deviation $\sigma_U = \sigma_W$ if the null hypothesis were true, so that $z = \dfrac{U - \mu_U}{\sigma_U}$ follows approximately the standard normal distribution.

For small samples, the exact distribution of $W$ or $U$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_W$ and $\sigma_U$ is more complicated.
| Approximately the chi-squared distribution with $J - 1$ degrees of freedom
| If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.
For small samples, the exact distribution of $Q$ should be used.
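A small numerical check of the normal approximation above, again in Python with SciPy; the values $N_r = 10$, $W_1 = 44$ and $W_2 = 33$ are taken from the hypothetical example in the earlier sketch. (With $N_r = 10$ the approximation is rough, so this only illustrates the formulas.)

```python
import numpy as np
from scipy.stats import norm

# Values from the hypothetical example above
N_r, W1, W2 = 10, 44.0, 33.0

mu_W1 = N_r * (N_r + 1) / 4                               # = 27.5
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
sigma_W2 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 6)

z1 = (W1 - mu_W1) / sigma_W1
z2 = W2 / sigma_W2
print(z1, z2)                  # both give the same z (here about 1.68)

# Two sided p value from the standard normal approximation
print(2 * norm.sf(abs(z1)))    # about 0.09
```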
Significant? | Significant? | Significant? | Significant?
For large samples, the table for standard normal probabilities can be used:
Two sided: check if the observed $z$ is at least as extreme as the critical value $z^*$ (i.e. $z \leq -z^*$ or $z \geq z^*$), or check if the two sided $p$ value is equal to or smaller than $\alpha$.
Right sided: check if $z \geq z^*$, or check if the right sided $p$ value is equal to or smaller than $\alpha$.
Left sided: check if $z \leq -z^*$, or check if the left sided $p$ value is equal to or smaller than $\alpha$.
| For large samples, the table for standard normal probabilities can be used:
Two sided: check if the observed $z$ is at least as extreme as the critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$.
Right sided: check if $z \geq z^*$, or check if the right sided $p$ value is equal to or smaller than $\alpha$.
Left sided: check if $z \leq -z^*$, or check if the left sided $p$ value is equal to or smaller than $\alpha$.
| If the sample size is large enough, the table with critical $X^2$ values can be used: check if the observed $X^2$ is equal to or larger than the critical value for $J - 1$ degrees of freedom, or check if the $p$ value is equal to or smaller than $\alpha$.
| If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if $X^2$ is equal to or larger than the critical value for $k - 1$ degrees of freedom, or check if the $p$ value is equal to or smaller than $\alpha$.
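In software, the critical-value tables mentioned above correspond to quantile functions. The sketch below (Python with SciPy) applies both versions of the decision rule, using the hypothetical $z \approx 1.68$ from the earlier sketch and the hypothetical Friedman $Q = 8.4$ with $k = 3$ groups; $\alpha = .05$ is assumed.

```python
from scipy.stats import norm, chi2

alpha = 0.05

# Two sided z test decision (e.g. for the large-sample Wilcoxon statistics)
z = 1.68                              # observed z (hypothetical)
z_crit = norm.ppf(1 - alpha / 2)      # critical value z*, about 1.96
p_two_sided = 2 * norm.sf(abs(z))
print(abs(z) >= z_crit, p_two_sided <= alpha)   # both checks give the same decision

# Chi-squared decision for the Friedman Q (k = 3 related groups -> df = k - 1)
Q, k = 8.4, 3
Q_crit = chi2.ppf(1 - alpha, k - 1)   # critical value, about 5.99
p_value = chi2.sf(Q, k - 1)
print(Q >= Q_crit, p_value <= alpha)  # both checks give the same decision
```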
n.a. | Equivalent to | n.a. | n.a.
- | If there are no ties in the data, the two sided Mann-Whitney-Wilcoxon test is equivalent to the Kruskal-Wallis test with an independent variable with 2 levels ($I = 2$). | - | -
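This equivalence can be illustrated numerically. The sketch below (Python with SciPy, hypothetical tie-free data) compares the asymptotic two sided Mann-Whitney-Wilcoxon $p$ value, computed without continuity correction, with the Kruskal-Wallis $p$ value for the same two groups; up to rounding they are identical.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical tie-free scores for two independent groups
g1 = np.array([12, 15, 19, 23, 27, 31, 34])
g2 = np.array([10, 14, 17, 20, 22, 25, 29])

mwu = mannwhitneyu(g1, g2, alternative="two-sided",
                   method="asymptotic", use_continuity=False)
kw = kruskal(g1, g2)

# With no ties and no continuity correction, the two p values coincide
print(mwu.pvalue, kw.pvalue)
```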
Example context | Example context | Example context | Example context
Is the median mental health score of office workers different from $m_0 = 50$? | Do men tend to score higher on social economic status than women? | Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, and $\pi_{high} = 0.2$? | Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
SPSS | SPSS | SPSS | SPSS
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:
Analyze > Nonparametric Tests > One Sample...
| Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples...
| Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
| Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Jamovi | Jamovi | Jamovi | Jamovi
T-Tests > One Sample T-Test (under Tests, select Wilcoxon rank)
| T-Tests > Independent Samples T-Test (under Tests, select Mann-Whitney U)
| Frequencies > N Outcomes - $\chi^2$ Goodness of fit
| ANOVA > Repeated Measures ANOVA - Friedman
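For readers who work outside SPSS and jamovi, the sketch below maps each of the four analyses to a SciPy call (Python; not part of the original page). All data and the hypothesized proportions are made up to mirror the example contexts above.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, chisquare, friedmanchisquare

rng = np.random.default_rng(1)

# One sample Wilcoxon signed-rank test: is the population median equal to m0 = 50?
mental_health = rng.normal(53, 8, size=30)
print(wilcoxon(mental_health - 50))

# Mann-Whitney-Wilcoxon test: do men tend to score higher than women?
ses_men = rng.normal(5.5, 2, size=25)
ses_women = rng.normal(5.0, 2, size=25)
print(mannwhitneyu(ses_men, ses_women, alternative="greater"))

# Goodness of fit test: observed counts vs hypothesized proportions (0.2, 0.6, 0.2)
observed = np.array([18, 55, 27])
expected = observed.sum() * np.array([0.2, 0.6, 0.2])
print(chisquare(observed, f_exp=expected))

# Friedman test: depression level at three measurement points (one row per subject)
depression = rng.normal(20, 4, size=(40, 3)) + np.array([0.0, -2.0, -3.0])
print(friedmanchisquare(*depression.T))
```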