One sample Wilcoxon signed-rank test - overview
This page offers a structured overview of the one sample Wilcoxon signed-rank test, shown side by side with Cochran's Q test and the one sample $t$ test for the mean. Code sketches illustrating the main formulas follow the table.
| One sample Wilcoxon signed-rank test | Cochran's Q test | One sample $t$ test for the mean |
|---|---|---|
| Independent variable | Independent/grouping variable | Independent variable |
| None | One within subject factor ($\geq 2$ related groups) | None |
| Dependent variable | Dependent variable | Dependent variable |
| One of ordinal level | One dichotomous (two categories, e.g. success/failure) | One quantitative of interval or ratio level |
| Null hypothesis | Null hypothesis | Null hypothesis |
| H0: $m = m_0$. Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | H0: $\pi_1 = \pi_2 = \ldots = \pi_k$. Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_k$ is the population proportion of 'successes' for group $k$, where $k$ is the number of related groups. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
| H1 two sided: $m \neq m_0$; H1 right sided: $m > m_0$; H1 left sided: $m < m_0$ | H1: not all population proportions are equal | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ |
| Assumptions | Assumptions | Assumptions |
| The population distribution of the scores is symmetric; the sample is a simple random sample from the population (scores are independent of one another) | The sample of blocks (usually subjects) is a simple random sample from the population (blocks are independent of one another) | The scores are normally distributed in the population; the sample is a simple random sample from the population (scores are independent of one another) |
| Test statistic | Test statistic | Test statistic |
| Two different types of test statistics can be used, but both will result in the same test outcome: the $W_1$ statistic (also known as the $T$ statistic) and the $W_2$ statistic. To compute them, first compute for each subject the difference score (the score minus $m_0$) and its sign, and exclude subjects with a difference score of zero; this leaves $N_r$ subjects. Rank the absolute values of the remaining difference scores from $1$ (smallest) to $N_r$ (largest), assigning average ranks to ties. $W_1$ is the sum of the ranks belonging to the positive difference scores; $W_2$ is the sum of the signed ranks (each rank multiplied by the sign of its difference score). | If a failure is scored as 0 and a success is scored as 1: $Q = k(k - 1) \dfrac{\sum_{groups} \Big(\mbox{group total} - \frac{\mbox{grand total}}{k}\Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$. Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores. Before computing $Q$, first exclude blocks with equal scores in all $k$ groups. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. |
| Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $Q$ if H0 were true | Sampling distribution of $t$ if H0 were true |
| Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed under the null hypothesis, with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ approximately follows the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed under the null hypothesis, with mean $0$ and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$, so the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ approximately follows the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated. | If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom |
| Significant? | Significant? | Significant? |
| For large samples, the table for standard normal probabilities can be used: the test is significant if the observed $z$ value is at least as extreme as the critical value $z^*$ (two sided, right sided, or left sided, depending on the alternative hypothesis), or equivalently if the $p$ value is smaller than the significance level $\alpha$. | If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: the test is significant if $X^2$ is equal to or larger than the critical value $X^{2*}$ for $k - 1$ degrees of freedom, or equivalently if the $p$ value is smaller than the significance level $\alpha$. | The test is significant if the observed $t$ value is at least as extreme as the critical value $t^*$ under the $t_{N - 1}$ distribution (two sided, right sided, or left sided, depending on the alternative hypothesis), or equivalently if the $p$ value is smaller than the significance level $\alpha$. |
| n.a. | n.a. | $C\%$ confidence interval for $\mu$ |
| - | - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. |
| n.a. | n.a. | Effect size |
| - | - | Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $d = \dfrac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$. |
| n.a. | n.a. | Visual representation |
| - | - | (figure not included) |
| n.a. | Equivalent to | n.a. |
| - | Friedman test, with a dichotomous dependent variable (only two possible outcomes) | - |
| Example context | Example context | Example context |
| Is the median mental health score of office workers different from $m_0 = 50$? | Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks? | Is the average mental health score of office workers different from $\mu_0 = 50$? |
| SPSS | SPSS | SPSS |
| Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to: Analyze > Nonparametric Tests > One Sample... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... | Analyze > Compare Means > One-Sample T Test... |
| Jamovi | Jamovi | Jamovi |
| T-Tests > One Sample T-Test | Jamovi does not have a specific option for the Cochran's Q test, but you can run the Friedman test instead: the $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | T-Tests > One Sample T-Test |
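
The following is a minimal sketch, in Python with NumPy and SciPy, of how $W_1$ and $W_2$ can be computed for the one sample Wilcoxon signed-rank test described in the table. The scores and $m_0$ are made-up illustrative values, not data from the example context.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

scores = np.array([53, 47, 61, 49, 55, 44, 58, 52, 60, 46])  # hypothetical scores
m0 = 50                                                       # hypothesized median

d = scores - m0                  # difference scores
d = d[d != 0]                    # exclude difference scores of zero
n_r = d.size                     # N_r: number of remaining subjects
ranks = rankdata(np.abs(d))      # rank absolute differences; ties get average ranks

W1 = ranks[d > 0].sum()          # sum of ranks of the positive difference scores
W2 = (np.sign(d) * ranks).sum()  # sum of the signed ranks
print(W1, W2, n_r)

# SciPy's built-in test; note that for a two sided test its reported statistic is
# the smaller of the positive- and negative-rank sums, so it need not equal W1.
print(wilcoxon(scores - m0))
```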
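
A sketch of the large-sample normal approximation for $W_1$ given in the table (mean $\mu_{W_1}$, standard deviation $\sigma_{W_1}$), again on the same made-up scores and without the tie correction mentioned in the table.

```python
import numpy as np
from scipy.stats import rankdata, norm

scores = np.array([53, 47, 61, 49, 55, 44, 58, 52, 60, 46])  # hypothetical scores
m0 = 50

d = scores - m0
d = d[d != 0]                                # exclude zero differences
n_r = d.size
W1 = rankdata(np.abs(d))[d > 0].sum()        # sum of ranks of positive differences

mu_W1 = n_r * (n_r + 1) / 4
sigma_W1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)
z = (W1 - mu_W1) / sigma_W1                  # approximately standard normal under H0
p_two_sided = 2 * norm.sf(abs(z))            # two sided p value from the normal table
print(z, p_two_sided)
```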
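
A sketch of Cochran's $Q$ computed directly from the formula in the table, for a made-up blocks (subjects) by groups (tasks) table of 0/1 scores; the large-sample $p$ value uses the chi-squared distribution with $k - 1$ degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

# rows = blocks (subjects), columns = k related groups; 1 = success, 0 = failure
x = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],   # equal scores in all k groups: excluded before computing Q
    [0, 0, 0],   # idem
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
])
x = x[x.min(axis=1) != x.max(axis=1)]  # drop blocks with equal scores in all groups

k = x.shape[1]
group_totals = x.sum(axis=0)   # sum of scores per group
block_totals = x.sum(axis=1)   # sum of scores per block
grand_total = x.sum()

Q = (k * (k - 1) * ((group_totals - grand_total / k) ** 2).sum()
     / (block_totals * (k - block_totals)).sum())
p = chi2.sf(Q, df=k - 1)       # large-sample p value
print(Q, p)
```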
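
A sketch of the one sample $t$ statistic from the table, on made-up scores, with SciPy's built-in `ttest_1samp` as a cross-check.

```python
import numpy as np
from scipy import stats

y = np.array([52.0, 48.5, 55.1, 49.2, 53.7, 50.8, 47.9, 56.3, 51.4, 54.0])  # hypothetical
mu0 = 50.0                                   # population mean according to H0

N = y.size
y_bar = y.mean()
s = y.std(ddof=1)                            # sample standard deviation
t = (y_bar - mu0) / (s / np.sqrt(N))         # how many standard errors y_bar is from mu0
p_two_sided = 2 * stats.t.sf(abs(t), df=N - 1)
print(t, p_two_sided)

print(stats.ttest_1samp(y, popmean=mu0))     # same t and two sided p value
```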
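
A sketch of the $C\%$ confidence interval for $\mu$ from the table, using the same made-up scores; $t^*$ is taken from the $t_{N-1}$ distribution so that the area $C/100$ lies between $-t^*$ and $t^*$.

```python
import numpy as np
from scipy import stats

y = np.array([52.0, 48.5, 55.1, 49.2, 53.7, 50.8, 47.9, 56.3, 51.4, 54.0])
C = 95                                         # confidence level in percent

N = y.size
se = y.std(ddof=1) / np.sqrt(N)                # standard error s / sqrt(N)
t_star = stats.t.ppf(0.5 + C / 200, df=N - 1)  # leaves area C/100 between -t* and t*
lower, upper = y.mean() - t_star * se, y.mean() + t_star * se
print(lower, upper)
```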
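
Finally, a sketch of Cohen's $d$ for the one sample $t$ test, again on the same made-up scores.

```python
import numpy as np

y = np.array([52.0, 48.5, 55.1, 49.2, 53.7, 50.8, 47.9, 56.3, 51.4, 54.0])
mu0 = 50.0

d = (y.mean() - mu0) / y.std(ddof=1)  # standardized difference between y_bar and mu0
print(d)
```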