One sample Wilcoxon signed-rank test - overview

This page offers structured overviews of the one sample Wilcoxon signed-rank test and two methods selected for comparison: the sign test and logistic regression. Each row below lists the corresponding property for each of the three methods.

  • One sample Wilcoxon signed-rank test
  • Sign test
  • Logistic regression
Independent variable(s)
One sample Wilcoxon signed-rank test: None
Sign test: 2 paired groups
Logistic regression: One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Dependent variable
One sample Wilcoxon signed-rank test: One of ordinal level
Sign test: One of ordinal level
Logistic regression: One categorical with 2 independent groups
Null hypothesis
One sample Wilcoxon signed-rank test:
H0: $m = m_0$

Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis.
Sign test:
  • H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H0: the population median of the difference scores is equal to zero
A difference score is the difference between the first score of a pair and the second score of a pair.
Logistic regression:
Model chi-squared test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $ y = 1$ (or equivalently, the proportion of $ y = 1$ in the population) given the scores on the independent variables.
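As a numeric illustration of this regression equation, the sketch below (in Python, with made-up coefficients and scores; all values are hypothetical) maps a log odds to the probability $\pi_{y = 1}$ via the inverse logit:

```python
# Hypothetical illustration of the logistic regression equation above:
# the linear predictor gives the log odds, the inverse logit gives pi_{y=1}.
import math

b0, b1, b2 = -1.0, 0.8, -0.3   # made-up regression weights beta_0, beta_1, beta_2
x1, x2 = 2.0, 1.0              # made-up scores on the independent variables
log_odds = b0 + b1 * x1 + b2 * x2
pi_y1 = 1 / (1 + math.exp(-log_odds))
print(log_odds, pi_y1)         # 0.3 and ~0.574
```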
Alternative hypothesis
One sample Wilcoxon signed-rank test:
H1 two sided: $m \neq m_0$
H1 right sided: $m > m_0$
H1 left sided: $m < m_0$
Sign test:
  • H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
  • H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
  • H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H1 two sided: the population median of the difference scores is different from zero
  • H1 right sided: the population median of the difference scores is larger than zero
  • H1 left sided: the population median of the difference scores is smaller than zero
Logistic regression:
Model chi-squared test for the complete regression model:
  • H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
    If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
Assumptions
One sample Wilcoxon signed-rank test:
  • The population distribution of the scores is symmetric
  • Sample is a simple random sample from the population. That is, observations are independent of one another
Sign test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Test statistic
One sample Wilcoxon signed-rank test:
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute each of the test statistics, follow the steps below (a worked sketch in Python follows the list):
  1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score} - m_0)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
  2. For each subject, compute the absolute value of the difference score $|\mbox{score} - m_0|$.
  3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
  4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

  • $W_1 = \sum\, R_d^{+}$
    or
    $W_1 = \sum\, R_d^{-}$
    That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
    • Tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
    • If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
  • $W_2 = \sum\, \mbox{sign}_d \times R_d$
    That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
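The following sketch walks through these steps on a small set of hypothetical scores (all data made up) and checks the result against scipy.stats.wilcoxon:

```python
# Minimal sketch of the W1/W2 computation above, on hypothetical data.
import numpy as np
from scipy.stats import rankdata, wilcoxon

scores = np.array([52, 47, 55, 49, 61, 50, 58, 45, 53, 60])  # made-up scores
m0 = 50                                                       # hypothesized median

d = scores - m0
sign_d = np.sign(d)                  # step 1: sign of each difference score
abs_d = np.abs(d)                    # step 2: absolute difference scores
keep = d != 0                        # step 3: exclude zero differences (N_r = 9 here)
sign_d, abs_d = sign_d[keep], abs_d[keep]
R_d = rankdata(abs_d)                # step 4: rank absolute differences, ties averaged

W1_pos = R_d[sign_d > 0].sum()       # sum of ranks of positive differences (35.0)
W1_neg = R_d[sign_d < 0].sum()       # sum of ranks of negative differences (10.0)
W2 = (sign_d * R_d).sum()            # signed-rank sum (25.0)

print(W1_pos, W1_neg, W2)
# scipy's default two sided statistic is the smaller of the two rank sums (10.0 here)
print(wilcoxon(scores - m0))
```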
Sign test:
$W = $ the number of difference scores that are larger than 0

Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
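As an illustration, the sketch below fits a logistic regression with statsmodels on simulated data (all values hypothetical) and recovers the model chi-squared and Wald statistics from the fitted deviances and standard errors:

```python
# Hedged sketch: model chi-squared and Wald statistics on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=200), rng.normal(size=200)   # made-up predictors
log_odds = 0.5 + 1.0 * x1 - 0.5 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))      # simulated outcome

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.Logit(y, X).fit(disp=0)

X2_model = -2 * fit.llnull - (-2 * fit.llf)   # D_null - D_K, df = K = 2
wald_z = fit.params / fit.bse                 # Wald = b_k / SE_{b_k}
print(X2_model, fit.llr)                      # fit.llr is the same quantity
print(wald_z**2)                              # squared form, chi-squared with df 1
```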
Sampling distribution if H0 were true
One sample Wilcoxon signed-rank test:
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
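Continuing the earlier hypothetical example ($N_r = 9$, $W_1 = 35$, $W_2 = 25$), the normal approximation works out as follows (no tie correction is applied in this sketch):

```python
# Sketch of the large-sample normal approximation (tie correction omitted).
import math

N_r = 9                       # nonzero differences in the hypothetical example
W1, W2 = 35.0, 25.0           # statistics from the earlier sketch

mu_W1 = N_r * (N_r + 1) / 4                                  # 22.5
sigma_W1 = math.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)   # ~8.44
sigma_W2 = math.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 6)    # ~16.88

print((W1 - mu_W1) / sigma_W1)   # ~1.48
print(W2 / sigma_W2)             # same z value, as expected
```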
Sign test:
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.

If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
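A minimal sketch of the exact binomial computation, assuming hypothetical counts of positive and negative differences:

```python
# Exact sign test via the Binomial(n, 0.5) distribution (hypothetical counts).
from scipy.stats import binomtest

pos, neg = 8, 2                     # made-up counts of positive/negative differences
result = binomtest(pos, n=pos + neg, p=0.5, alternative='two-sided')
print(result.pvalue)                # exact two sided p value
```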
Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
Significant?
One sample Wilcoxon signed-rank test:
For large samples, the table for standard normal probabilities can be used:
Two sided:
  • Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
  • Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
  • Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
  • Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Sign test:
If $n$ is small, the table for the binomial distribution should be used:
Two sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$

If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
  • Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
  • Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
  • Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
  • Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Logistic regression:
For the model chi-squared test for the complete regression model and likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Wald-type approximate $C\%$ confidence interval for $\beta_k$
One sample Wilcoxon signed-rank test: n.a.
Sign test: n.a.
Logistic regression:
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
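For instance, with a hypothetical estimate $b_k = 0.9$ and standard error $SE_{b_k} = 0.25$, the 95% interval can be computed as:

```python
# Wald-type 95% CI for beta_k, using made-up estimate and standard error.
from scipy.stats import norm

b_k, SE_bk = 0.9, 0.25
z_star = norm.ppf(0.975)                              # ~1.96 for C = 95
print(b_k - z_star * SE_bk, b_k + z_star * SE_bk)     # ~(0.41, 1.39)
```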
Goodness of fit measure $R^2_L$
One sample Wilcoxon signed-rank test: n.a.
Sign test: n.a.
Logistic regression:
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; there is no single agreed upon measure.
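Assuming the `fit` object from the earlier statsmodels sketch, $R^2_L$ can be computed from the two deviances:

```python
# R^2_L from the deviances (assumes `fit` from the earlier sketch).
D_null = -2 * fit.llnull            # null deviance
D_K = -2 * fit.llf                  # model deviance
print((D_null - D_K) / D_null)      # R^2_L
```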
Equivalent to
One sample Wilcoxon signed-rank test: n.a.
Sign test: the two sided sign test is equivalent to the Friedman test applied to the two paired variables (see the Jamovi section below)
Logistic regression: n.a.
Example context
One sample Wilcoxon signed-rank test: Is the median mental health score of office workers different from $m_0 = 50$?
Sign test: Do people tend to score higher on mental health after a mindfulness course?
Logistic regression: Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
SPSS
One sample Wilcoxon signed-rank test:
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:

Analyze > Nonparametric Tests > One Sample...
  • On the Objective tab, choose Customize Analysis
  • On the Fields tab, specify the variable for which you want to compute the Wilcoxon signed-rank test
  • On the Settings tab, choose Customize tests and check the box for 'Compare median to hypothesized (Wilcoxon signed-rank test)'. Fill in your $m_0$ in the box next to Hypothesized median
  • Click Run
  • Double click on the output table to see the full results
Sign test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Sign test
Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Jamovi
One sample Wilcoxon signed-rank test:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Tests, select Wilcoxon rank
  • Under Hypothesis, fill in the value for $m_0$ in the box next to Test Value, and select your alternative hypothesis
Sign test:
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the two paired variables in the box below Measures
Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Practice questions