One sample Wilcoxon signed-rank test - overview
This page offers a structured overview of the one sample Wilcoxon signed-rank test, presented side by side with the Wilcoxon signed-rank test and the $z$ test for the difference between two proportions.
One sample Wilcoxon signed-rank test | Wilcoxon signed-rank test | $z$ test for the difference between two proportions |
---|---|---|
Independent variable | Independent variable | Independent/grouping variable |
None | 2 paired groups | One categorical with 2 independent groups |
Dependent variable | Dependent variable | Dependent variable |
One of ordinal level | One quantitative of interval or ratio level | One categorical with 2 independent groups |
Null hypothesis | Null hypothesis | Null hypothesis |
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | H0: $m = 0$
Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher. | H0: $\pi_1 = \pi_2$
Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. |
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
H1 two sided: $m \neq m_0$ H1 right sided: $m > m_0$ H1 left sided: $m < m_0$ | H1 two sided: $m \neq 0$ H1 right sided: $m > 0$ H1 left sided: $m < 0$ | H1 two sided: $\pi_1 \neq \pi_2$ H1 right sided: $\pi_1 > \pi_2$ H1 left sided: $\pi_1 < \pi_2$ |
Assumptions | Assumptions | Assumptions |
The population distribution of the scores is symmetric
Sample is a simple random sample from the population. That is, observations are independent of one another
| The population distribution of the difference scores is symmetric
Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
| Sample of group 1 and sample of group 2 are independent simple random samples from their respective populations. That is, observations are independent of one another, both within and between groups
Sample sizes are large enough for $z$ to be approximately normally distributed under the null hypothesis (a common rule of thumb: at least 5 successes and 5 failures in each group)
|
Test statistic | Test statistic | Test statistic |
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
1. For each subject, compute the score minus $m_0$.
2. Remove the subjects whose score equals $m_0$. The number of remaining subjects is $N_r$.
3. Rank the absolute values of the remaining differences, assigning rank 1 to the smallest absolute difference; tied absolute differences receive the average of the ranks involved.
Then $W_1$ is the sum of the ranks of the positive differences, and $W_2$ is the sum of the signed ranks, that is, the sum of the ranks of the positive differences minus the sum of the ranks of the negative differences.
| Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
1. For each pair, compute the difference score: the first score of the pair minus the second score of the pair.
2. Remove the pairs with a difference score of zero. The number of remaining pairs is $N_r$.
3. Rank the absolute values of the remaining difference scores, assigning rank 1 to the smallest absolute difference; tied absolute differences receive the average of the ranks involved.
Then $W_1$ is the sum of the ranks of the positive difference scores, and $W_2$ is the sum of the signed ranks, that is, the sum of the ranks of the positive difference scores minus the sum of the ranks of the negative difference scores.
| $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$. |
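As a minimal illustration of the steps above, the Python sketch below computes $N_r$, $W_1$, and $W_2$ for the one sample case; the scores and $m_0 = 50$ are made-up example values, and `scipy` is assumed to be available. For the paired test, the same computation applies to the difference scores instead of the scores minus $m_0$.

```python
import numpy as np
from scipy.stats import rankdata

# Made-up example: mental health scores of office workers, tested against m0 = 50
scores = np.array([52, 47, 55, 60, 44, 50, 58, 49, 53, 61])
m0 = 50

d = scores - m0                  # score minus m0 (use difference scores for the paired test)
d = d[d != 0]                    # remove subjects whose score equals m0
N_r = len(d)                     # number of remaining subjects

ranks = rankdata(np.abs(d))      # rank the absolute differences; ties get average ranks
W1 = ranks[d > 0].sum()          # W1: sum of the ranks of the positive differences
W2 = np.sum(np.sign(d) * ranks)  # W2: sum of the signed ranks

print(N_r, W1, W2)
```

Since $W_2 = 2 W_1 - \frac{N_r(N_r + 1)}{2}$, the two statistics carry the same information, which is why both lead to the same test outcome.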
Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $z$ if H0 were true |
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.
Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated.
| Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.
Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated.
| Approximately the standard normal distribution |
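The normal approximation above can be sketched in a few lines of Python; the values of $W_1$, $W_2$, and $N_r$ below are made up (chosen so that $W_2 = 2W_1 - N_r(N_r + 1)/2$), and no ties are assumed.

```python
import numpy as np
from scipy.stats import norm

def wilcoxon_z(W1, W2, N_r):
    """Standardize W1 and W2 using their large-sample null distributions (no ties)."""
    mu_W1 = N_r * (N_r + 1) / 4
    sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
    sigma_W2 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 6)
    return (W1 - mu_W1) / sigma_W1, W2 / sigma_W2

z1, z2 = wilcoxon_z(W1=35.0, W2=25.0, N_r=9)  # made-up values; z1 and z2 are identical
p_two_sided = 2 * norm.sf(abs(z1))            # two sided p value from the standard normal
print(z1, z2, p_two_sided)
```

Ready-made routines such as `scipy.stats.wilcoxon` work directly on the raw (difference) scores; the statistic they report may follow a different convention, but the large-sample $p$ value rests on the same standardization.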
Significant? | Significant? | Significant? |
For large samples, the table for standard normal probabilities can be used:
Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, where $z^*$ is the positive critical value, or if the two sided $p$ value is equal to or smaller than $\alpha$
Right sided: reject H0 if $z \geq z^*$, where $z^*$ is the positive critical value, or if the right sided $p$ value is equal to or smaller than $\alpha$
Left sided: reject H0 if $z \leq -z^*$, where $-z^*$ is the negative critical value, or if the left sided $p$ value is equal to or smaller than $\alpha$
| For large samples, the table for standard normal probabilities can be used:
Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, where $z^*$ is the positive critical value, or if the two sided $p$ value is equal to or smaller than $\alpha$
Right sided: reject H0 if $z \geq z^*$, where $z^*$ is the positive critical value, or if the right sided $p$ value is equal to or smaller than $\alpha$
Left sided: reject H0 if $z \leq -z^*$, where $-z^*$ is the negative critical value, or if the left sided $p$ value is equal to or smaller than $\alpha$
| Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, where $z^*$ is the positive critical value, or if the two sided $p$ value is equal to or smaller than $\alpha$
Right sided: reject H0 if $z \geq z^*$, where $z^*$ is the positive critical value, or if the right sided $p$ value is equal to or smaller than $\alpha$
Left sided: reject H0 if $z \leq -z^*$, where $-z^*$ is the negative critical value, or if the left sided $p$ value is equal to or smaller than $\alpha$
|
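For reference, a short sketch of how the critical values and $p$ values in these decision rules follow from the standard normal distribution; $\alpha = 0.05$ and the observed $z$ are made-up example values.

```python
from scipy.stats import norm

alpha = 0.05
z_observed = 1.76                           # made-up observed standardized test statistic

z_star_two_sided = norm.ppf(1 - alpha / 2)  # approx. 1.96: reject H0 if |z_observed| >= z*
z_star_one_sided = norm.ppf(1 - alpha)      # approx. 1.645: reject H0 if z_observed >= z* (right sided)
                                            #                or z_observed <= -z* (left sided)

p_two_sided = 2 * norm.sf(abs(z_observed))  # two sided p value
p_right = norm.sf(z_observed)               # right sided p value
p_left = norm.cdf(z_observed)               # left sided p value

print(z_star_two_sided, z_star_one_sided, p_two_sided, p_right, p_left)
```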
n.a. | n.a. | Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ |
- | - | Regular (large sample):
$(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$, where the critical value $z^*$ is the value under the standard normal curve with area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval) |
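A Python sketch of the $z$ statistic and the regular (large sample) confidence interval from this column, using made-up counts of smokers among men and women:

```python
import numpy as np
from scipy.stats import norm

X1, n1 = 48, 200   # made-up: number of smokers and sample size, group 1 (men)
X2, n2 = 30, 180   # made-up: number of smokers and sample size, group 2 (women)

p1, p2 = X1 / n1, X2 / n2
p = (X1 + X2) / (n1 + n2)                      # pooled proportion of successes

# Test statistic with pooled standard error (see the Test statistic row)
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_two_sided = 2 * norm.sf(abs(z))

# Regular (large sample) 95% confidence interval for pi_1 - pi_2 (unpooled standard error)
z_star = norm.ppf(0.975)
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = ((p1 - p2) - z_star * se, (p1 - p2) + z_star * se)

print(z, p_two_sided, ci)
```

A ready-made version of this test is available in, for example, `statsmodels.stats.proportion.proportions_ztest`.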
n.a. | n.a. | Equivalent to |
- | - | When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. |
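This equivalence is easy to verify numerically: for a 2×2 table, the Pearson chi-squared statistic without continuity correction equals the square of the pooled $z$ statistic, and the two sided $p$ values coincide. A sketch with the same made-up counts as above:

```python
import numpy as np
from scipy.stats import norm, chi2_contingency

X1, n1 = 48, 200   # made-up counts, as in the sketch above
X2, n2 = 30, 180

p1, p2, p = X1 / n1, X2 / n2, (X1 + X2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

table = np.array([[X1, n1 - X1], [X2, n2 - X2]])   # rows: groups, columns: success / failure
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)

print(z**2, chi2)                   # equal up to floating point error
print(2 * norm.sf(abs(z)), p_chi2)  # same two sided p value
```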
Example context | Example context | Example context |
Is the median mental health score of office workers different from $m_0 = 50$? | Is the median of the differences between the mental health scores before and after an intervention different from 0? | Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. |
SPSS | SPSS | SPSS |
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:
Analyze > Nonparametric Tests > One Sample...
| Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
| SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
|
Jamovi | Jamovi | Jamovi |
T-Tests > One Sample T-Test
| T-Tests > Paired Samples T-Test
| Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
|