Wilcoxon signed-rank test - overview
This page offers a structured overview of the Wilcoxon signed-rank test, side by side with two other selected methods: logistic regression and the two sample $t$ test with equal variances assumed.
| Wilcoxon signed-rank test | Logistic regression | Two sample $t$ test - equal variances assumed |
|---|---|---|
| Independent variable | Independent variables | Independent/grouping variable |
| 2 paired groups | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | One categorical with 2 independent groups |
| Dependent variable | Dependent variable | Dependent variable |
| One quantitative of interval or ratio level | One categorical with 2 independent groups | One quantitative of interval or ratio level |
| Null hypothesis | Null hypothesis | Null hypothesis |
| H0: $m = 0$. Here $m$ is the population median of the difference scores, where a difference score is the difference between the first and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them; make sure you (also) learn the one given in your textbook or by your teacher. | Model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald and likelihood ratio test for an individual $\beta_k$: H0: $\beta_k = 0$. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
| H1 two sided: $m \neq 0$. H1 right sided: $m > 0$. H1 left sided: $m < 0$. | Model chi-squared test for the complete regression model: H1: not all population regression coefficients $\beta_1, \ldots, \beta_K$ are 0. Wald and likelihood ratio test for an individual $\beta_k$: H1: $\beta_k \neq 0$ (two sided), or the corresponding one sided alternatives. | H1 two sided: $\mu_1 \neq \mu_2$. H1 right sided: $\mu_1 > \mu_2$. H1 left sided: $\mu_1 < \mu_2$. |
| Assumptions | Assumptions | Assumptions |
| | | |
| Test statistic | Test statistic | Test statistic |
| Two different types of test statistics can be used; both result in the same test outcome. We denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute them: first compute the difference score for each pair and remove the pairs with a difference score of 0, leaving $N_r$ difference scores; then rank the absolute values of the difference scores from 1 to $N_r$. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the positively signed ranks minus the sum of the negatively signed ranks, where each rank receives the sign of its difference score (see the first code sketch below the table). | Model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, the difference between the deviance of the intercept-only model and the deviance of the model with all $K$ predictors. The Wald statistic for an individual $\beta_k$ can be defined in two ways: as $z = \dfrac{b_k}{SE_{b_k}}$, or as its square $\left(\dfrac{b_k}{SE_{b_k}}\right)^2$. Likelihood ratio chi-squared test for an individual $\beta_k$: $X^2 = D_{K-1} - D_K$, the difference between the deviance of the model without $\beta_k$ and the deviance of the complete model (see the second code sketch below the table). | $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $s_p\sqrt{1 / n_1 + 1 / n_2}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$, so the $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$ and the right sided alternative becomes $\mu_2 > \mu_1$ (see the third code sketch below the table). |
| n.a. | n.a. | Pooled standard deviation |
| - | - | $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$ |
| Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $X^2$ and of the Wald statistic if H0 were true | Sampling distribution of $t$ if H0 were true |
| Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed under H0, with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ approximately follows the standard normal distribution under H0. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed under H0, with mean 0 and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$, so the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ approximately follows the standard normal distribution under H0. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated. | Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: approximately the chi-squared distribution with $K$ degrees of freedom. Sampling distribution of the Wald statistic: approximately the standard normal distribution if defined as $z$, and approximately the chi-squared distribution with 1 degree of freedom if defined as $z^2$. Sampling distribution of the likelihood ratio $X^2$ for an individual $\beta_k$: approximately the chi-squared distribution with 1 degree of freedom. | $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom |
| Significant? | Significant? | Significant? |
| For large samples, the table with standard normal probabilities can be used. Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, or equivalently if the two sided $p$ value $< \alpha$. Right sided: reject H0 if $z \geq z^*$. Left sided: reject H0 if $z \leq -z^*$. For small samples, use a table with the exact distribution of $W_1$ or $W_2$. | For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for an individual $\beta_k$: reject H0 if the $X^2$ value is at least as extreme as the critical value $\chi^{2*}$, or equivalently if the $p$ value $< \alpha$. For the Wald statistic: use the standard normal table if defined as $z$, and the chi-squared distribution with 1 degree of freedom if defined as $z^2$. | Two sided: reject H0 if $t \leq -t^*$ or $t \geq t^*$, or equivalently if the two sided $p$ value $< \alpha$. Right sided: reject H0 if $t \geq t^*$. Left sided: reject H0 if $t \leq -t^*$. Here $t^*$ is the critical value from the $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom. |
| n.a. | Wald-type approximate $C\%$ confidence interval for $\beta_k$ | $C\%$ confidence interval for $\mu_1 - \mu_2$ |
| - | $b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). | $(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$, where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. |
| n.a. | Goodness of fit measure $R^2_L$ | Effect size |
| - | $R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$. There are several other goodness of fit measures for logistic regression; no single measure is generally agreed upon. | Cohen's $d$: standardized difference between the mean in group 1 and in group 2: $d = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p}$. Cohen's $d$ indicates how many pooled standard deviations $s_p$ the two sample means are removed from each other. |
| n.a. | n.a. | Equivalent to |
| - | - | One way ANOVA with an independent variable with 2 levels ($I = 2$): the two sided two sample $t$ test is equivalent to the one way ANOVA $F$ test, with $F = t^2$. |
| Example context | Example context | Example context |
| Is the median of the differences between the mental health scores before and after an intervention different from 0? | Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women. |
| SPSS | SPSS | SPSS |
| Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Regression > Binary Logistic... | Analyze > Compare Means > Independent-Samples T Test... |
| Jamovi | Jamovi | Jamovi |
| T-Tests > Paired Samples T-Test | Regression > 2 Outcomes - Binomial | T-Tests > Independent Samples T-Test |
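
The three code sketches below illustrate the computations in the table. All are minimal Python sketches on made-up data; the data values and variable names are invented for illustration and are not part of the original overview.

First, the $W_1$/$W_2$ computation and the normal approximation for the Wilcoxon signed-rank test. SciPy's `stats.wilcoxon` is included for comparison; note that it reports $\min(W^+, W^-)$ as its statistic rather than $W_1$, so only the $p$ values are directly comparable.

```python
import numpy as np
from scipy import stats

# Hypothetical before/after mental health scores for 8 subjects
before = np.array([4.2, 5.1, 6.0, 3.3, 5.5, 4.8, 6.1, 5.0])
after  = np.array([4.8, 5.4, 5.9, 4.3, 6.3, 5.3, 6.3, 5.0])

d = after - before                  # difference scores
d = d[d != 0]                       # remove difference scores of 0
N_r = d.size                        # number of remaining pairs
ranks = stats.rankdata(np.abs(d))   # rank the absolute difference scores

W1 = ranks[d > 0].sum()             # sum of ranks of the positive differences
W2 = (np.sign(d) * ranks).sum()     # signed-rank sum

# Normal approximation (the no-ties formulas from the table)
mu_W1    = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z = (W1 - mu_W1) / sigma_W1         # identical to W2 / sigma_W2
p_two_sided = 2 * stats.norm.sf(abs(z))

print(W1, W2, z, p_two_sided)
print(stats.wilcoxon(after, before))  # SciPy's version, for comparison
```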
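
Second, the logistic regression statistics from the middle column, computed with statsmodels on simulated data (the predictor names `bmi`, `stress`, and `female` are made up to match the example context). The deviance-based quantities are computed by hand and cross-checked against the attributes statsmodels already provides.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 200
bmi    = rng.normal(27, 4, n)
stress = rng.normal(5, 2, n)
female = rng.integers(0, 2, n)
eta = -8 + 0.25 * bmi + 0.2 * stress            # true logit (female has no effect)
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

X = sm.add_constant(np.column_stack([bmi, stress, female]))
fit = sm.Logit(y, X).fit(disp=False)

# Model chi-squared test: X^2 = D_null - D_K = 2 * (LL_K - LL_null)
X2 = 2 * (fit.llf - fit.llnull)                 # same as fit.llr
p_model = stats.chi2.sf(X2, df=fit.df_model)    # chi-squared, K df; same as fit.llr_pvalue

# Wald statistic per coefficient: z = b_k / SE_{b_k} (z^2 gives the chi-squared form)
z_wald = fit.params / fit.bse                   # same as fit.tvalues
wald_ci = fit.conf_int(alpha=0.05)              # Wald-type 95% CI: b_k +/- z* SE_{b_k}

# Goodness of fit: R^2_L = (D_null - D_K) / D_null (McFadden's pseudo R-squared)
R2_L = X2 / (-2 * fit.llnull)                   # same as fit.prsquared

print(X2, p_model, R2_L)
print(z_wald)
print(wald_ci)
```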
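
Third, the two sample $t$ test with pooled variance, the confidence interval for $\mu_1 - \mu_2$, and Cohen's $d$ from the right-hand column, checked against `scipy.stats.ttest_ind` with `equal_var=True`. The scores are again made up.

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores for two independent groups
men   = np.array([6.2, 5.1, 4.8, 5.9, 6.4, 5.3, 4.9, 5.7])
women = np.array([5.0, 4.4, 5.2, 4.1, 4.9, 5.5, 4.3])
n1, n2 = men.size, women.size

# Pooled standard deviation
sp = np.sqrt(((n1 - 1) * men.var(ddof=1) + (n2 - 1) * women.var(ddof=1))
             / (n1 + n2 - 2))

se = sp * np.sqrt(1 / n1 + 1 / n2)         # standard error of y1_bar - y2_bar
t = (men.mean() - women.mean()) / se       # t statistic
df = n1 + n2 - 2
p_two_sided = 2 * stats.t.sf(abs(t), df)

# 95% confidence interval for mu_1 - mu_2
t_star = stats.t.ppf(0.975, df)
ci = (men.mean() - women.mean() - t_star * se,
      men.mean() - women.mean() + t_star * se)

# Cohen's d: standardized mean difference
cohens_d = (men.mean() - women.mean()) / sp

print(t, p_two_sided, ci, cohens_d)
print(stats.ttest_ind(men, women, equal_var=True))  # should match t and p
```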