One sample Wilcoxon signed-rank test - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison (maximum of 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
One sample Wilcoxon signed-rank test | $z$ test for the difference between two proportions | Paired sample $t$ test | Chi-squared test for the relationship between two categorical variables | Pearson correlation | McNemar's test
---|---|---|---|---|---
Independent variable | Independent/grouping variable | Independent variable | Independent/column variable | Variable 1 | Independent variable | |
None | One categorical with 2 independent groups | 2 paired groups | One categorical with $I$ independent groups ($I \geqslant 2$) | One quantitative of interval or ratio level | 2 paired groups | |
Dependent variable | Dependent variable | Dependent variable | Dependent/row variable | Variable 2 | Dependent variable | |
One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One categorical with $J$ independent groups ($J \geqslant 2$) | One quantitative of interval or ratio level | One categorical with 2 independent groups | |
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | |
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | H0: $\pi_1 = \pi_2$
Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. | H0: $\mu = \mu_0$
Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | H0: there is no association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
| H0: $\rho = \rho_0$
Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level. | Let's say that the scores on the dependent variable are scored 0 and 1. Then for each pair of scores, the data allow four options: both scores are 0, the first score is 0 while the second score is 1, the first score is 1 while the second score is 0, or both scores are 1. The null hypothesis H0 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) = P(first score of pair is 1 while second score of pair is 0).
Other formulations of the null hypothesis are:
| |
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | |
H1 two sided: $m \neq m_0$ H1 right sided: $m > m_0$ H1 left sided: $m < m_0$ | H1 two sided: $\pi_1 \neq \pi_2$ H1 right sided: $\pi_1 > \pi_2$ H1 left sided: $\pi_1 < \pi_2$ | H1 two sided: $\mu \neq \mu_0$ H1 right sided: $\mu > \mu_0$ H1 left sided: $\mu < \mu_0$ | H1: there is an association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
| H1 two sided: $\rho \neq \rho_0$ H1 right sided: $\rho > \rho_0$ H1 left sided: $\rho < \rho_0$ | The alternative hypothesis H1 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) $\neq$ P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0. Other formulations of the alternative hypothesis are:
| |
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions of test for correlation | Assumptions | |
|
|
|
|
|
| |
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | |
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
| $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$ | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells. | Test statistic for testing H0: $\rho = 0$:
| $X^2 = \dfrac{(b - c)^2}{b + c}$
Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0. | |
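Two of the formulas in this row can be evaluated in a few lines. Below is a minimal Python sketch (not part of the original table) of the paired sample $t$ statistic and McNemar's $X^2$; the difference scores and the discordant-pair counts $b$ and $c$ are hypothetical example values.

```python
# A minimal sketch (not from the source): paired sample t statistic and McNemar's X^2.
import numpy as np
from scipy import stats

# Paired sample t: t = (y_bar - mu_0) / (s / sqrt(N))
diffs = np.array([2.1, -0.4, 1.8, 0.9, 3.2, -1.1, 0.5, 2.4])   # hypothetical difference scores
mu_0 = 0
N = diffs.size
t = (diffs.mean() - mu_0) / (diffs.std(ddof=1) / np.sqrt(N))
print(t, stats.ttest_1samp(diffs, mu_0).statistic)              # same value via scipy

# McNemar: X^2 = (b - c)^2 / (b + c)
b, c = 15, 6                                                    # hypothetical discordant pair counts
x2 = (b - c) ** 2 / (b + c)
print(x2)
```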
Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $t$ and of $z$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | |
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated. | Approximately the standard normal distribution | $t$ distribution with $N - 1$ degrees of freedom | Approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom | Sampling distribution of $t$:
| If $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom. If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$. | |
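To make the $W_1$ route above concrete, here is a minimal Python sketch (not part of the original table): drop the zero difference scores, rank the absolute differences, sum the ranks of the positive differences, and standardize using the large-sample mean and standard deviation given above. The scores and $m_0$ are hypothetical, and the tie correction for $\sigma_{W_1}$ is ignored.

```python
# A minimal sketch (not from the source) of W1 and its large-sample normal approximation.
import numpy as np
from scipy import stats

y = np.array([53, 47, 58, 44, 61, 55, 49, 62, 40, 57, 51, 48])  # hypothetical ordinal scores
m0 = 50                                                          # population median under H0

d = y - m0
d = d[d != 0]                          # drop zero differences
n_r = d.size                           # N_r: number of non-zero differences
ranks = stats.rankdata(np.abs(d))      # ranks of absolute differences (ties -> mean ranks)
w1 = ranks[d > 0].sum()                # W1: sum of ranks of the positive differences

mu_w1 = n_r * (n_r + 1) / 4
sigma_w1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)   # ignores the tie correction
z = (w1 - mu_w1) / sigma_w1
p_two_sided = 2 * stats.norm.sf(abs(z))

print(w1, z, p_two_sided)
```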
Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | |
For large samples, the table for standard normal probabilities can be used: Two sided:
| Two sided:
| Two sided:
|
| $t$ Test two sided:
| For test statistic $X^2$:
| |
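The significance decisions in this row compare each test statistic with its (approximate) sampling distribution under H0. Below is a minimal sketch (not from the source) of how the corresponding two sided $p$ values could be computed; the statistic values and degrees of freedom are hypothetical example numbers.

```python
# A minimal sketch (not from the source): p values from the reference distributions under H0.
from scipy import stats

z = 2.13                                   # e.g. standardized W1 or two-proportion z
p_z = 2 * stats.norm.sf(abs(z))            # standard normal, two sided

t, df_t = 2.45, 19                         # e.g. paired sample t with N - 1 = 19 df
p_t = 2 * stats.t.sf(abs(t), df_t)         # t distribution, two sided

x2, df_x2 = 7.8, 2                         # e.g. chi-squared with (I - 1)(J - 1) = 2 df
p_x2 = stats.chi2.sf(x2, df_x2)            # chi-squared test is right tailed

print(p_z, p_t, p_x2)
```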
n.a. | Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ | $C\%$ confidence interval for $\mu$ | n.a. | Approximate $C\%$ confidence interval for $\rho$ | n.a. | |
- | Regular (large sample):
| $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | - | First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$:
Then transform back to get the approximate $C\%$ confidence interval for $\rho$:
| - | |
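As an illustration of this row, here is a minimal sketch (not from the source) computing the $t$-based interval for $\mu$ and an approximate interval for $\rho$. The data are hypothetical, and since the exact formulas for the $\rho$ interval are not shown above, the sketch uses the standard Fisher route ($\rho_{Fisher} = \tanh^{-1}(r)$, interval half-width $z^*/\sqrt{N-3}$, then back-transform with $\tanh$).

```python
# A minimal sketch (not from the source): CI for mu (paired t) and approximate CI for rho.
import numpy as np
from scipy import stats

# C% confidence interval for mu: y_bar +/- t* s / sqrt(N)
diffs = np.array([2.1, -0.4, 1.8, 0.9, 3.2, -1.1, 0.5, 2.4])   # hypothetical difference scores
N = diffs.size
y_bar, s = diffs.mean(), diffs.std(ddof=1)
t_star = stats.t.ppf(0.975, df=N - 1)                           # 95%: area 0.95 between -t* and t*
ci_mu = (y_bar - t_star * s / np.sqrt(N), y_bar + t_star * s / np.sqrt(N))

# Approximate 95% confidence interval for rho via Fisher's transformation (standard approach)
r, n = 0.42, 60                                                 # hypothetical sample correlation and size
r_fisher = np.arctanh(r)                                        # 0.5 * ln((1 + r) / (1 - r))
half_width = stats.norm.ppf(0.975) / np.sqrt(n - 3)
ci_rho = (np.tanh(r_fisher - half_width), np.tanh(r_fisher + half_width))

print(ci_mu, ci_rho)
```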
n.a. | n.a. | Effect size | n.a. | Properties of the Pearson correlation coefficient | n.a. | |
- | - | Cohen's $d$: Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$ | - |
| - | |
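A minimal sketch (not from the source) of Cohen's $d$ for the paired sample $t$ test, using the same hypothetical difference scores as in the earlier sketches and $\mu_0 = 0$:

```python
# A minimal sketch (not from the source): Cohen's d for difference scores.
import numpy as np

diffs = np.array([2.1, -0.4, 1.8, 0.9, 3.2, -1.1, 0.5, 2.4])   # hypothetical difference scores
mu_0 = 0
d = (diffs.mean() - mu_0) / diffs.std(ddof=1)   # standardized mean difference
print(d)
```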
n.a. | n.a. | Visual representation | n.a. | n.a. | n.a. | |
- | - | - | - | - | ||
n.a. | Equivalent to | Equivalent to | n.a. | Equivalent to | Equivalent to | |
- | When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. |
| - | OLS regression with one independent variable:
|
| |
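Both equivalences stated in this row can be checked numerically. Below is a minimal sketch (not from the source) with hypothetical counts and data: the squared two-proportion $z$ statistic equals the chi-squared statistic of the corresponding $2 \times 2$ table (without continuity correction), and the Pearson correlation test gives the same $r$ and two sided $p$ value as OLS regression with one predictor.

```python
# A minimal sketch (not from the source) checking the two equivalences in this row.
import numpy as np
from scipy import stats

# (1) z test for two proportions vs chi-squared test on the 2 x 2 table
x1, n1, x2, n2 = 40, 100, 25, 100                      # hypothetical success counts and sample sizes
p1, p2, p = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
chi2, _, _, _ = stats.chi2_contingency([[x1, n1 - x1], [x2, n2 - x2]], correction=False)
print(z ** 2, chi2)                                    # identical up to rounding

# (2) Pearson correlation vs OLS regression with one independent variable
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)                      # hypothetical linearly related data
r, p_corr = stats.pearsonr(x, y)
ols = stats.linregress(x, y)
print(r, ols.rvalue)                                   # same correlation coefficient
print(p_corr, ols.pvalue)                              # same two sided p value
```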
Example context | Example context | Example context | Example context | Example context | Example context | |
Is the median mental health score of office workers different from $m_0 = 50$? | Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Is there an association between economic class and gender? Is the distribution of economic class different between men and women? | Is there a linear relationship between physical health and mental health? | Does a tv documentary about spiders change whether people are afraid (yes/no) of spiders? | |
SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | |
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:
Analyze > Nonparametric Tests > One Sample...
| SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
| Analyze > Compare Means > Paired-Samples T Test...
| Analyze > Descriptive Statistics > Crosstabs...
| Analyze > Correlate > Bivariate...
| Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
| |
Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | |
T-Tests > One Sample T-Test
| Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
| T-Tests > Paired Samples T-Test
| Frequencies > Independent Samples - $\chi^2$ test of association
| Regression > Correlation Matrix
| Frequencies > Paired Samples - McNemar test
| |
Practice questions | Practice questions | Practice questions | Practice questions | Practice questions | Practice questions | |