Sign test: overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Sign test  Pearson correlation  $z$ test for the difference between two proportions  Goodness of fit test  Two sample $z$ test  Friedman test 


Independent variable  Variable 1  Independent/grouping variable  Independent variable  Independent/grouping variable  Independent/grouping variable  
2 paired groups  One quantitative of interval or ratio level  One categorical with 2 independent groups  None  One categorical with 2 independent groups  One within subject factor ($\geq 2$ related groups)  
Dependent variable  Variable 2  Dependent variable  Dependent variable  Dependent variable  Dependent variable  
One of ordinal level  One quantitative of interval or ratio level  One categorical with 2 independent groups  One categorical with $J$ independent groups ($J \geqslant 2$)  One quantitative of interval or ratio level  One of ordinal level  
Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  
$H_0$: $\rho = \rho_0$
Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level.  $H_0$: $\pi_1 = \pi_2$
Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. 
$H_0$: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.  $H_0$: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one given in your textbook or by your teacher.  
Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  
$H_1$ two sided: $\rho \neq \rho_0$; $H_1$ right sided: $\rho > \rho_0$; $H_1$ left sided: $\rho < \rho_0$  $H_1$ two sided: $\pi_1 \neq \pi_2$; $H_1$ right sided: $\pi_1 > \pi_2$; $H_1$ left sided: $\pi_1 < \pi_2$ 
$H_1$ two sided: $\mu_1 \neq \mu_2$; $H_1$ right sided: $\mu_1 > \mu_2$; $H_1$ left sided: $\mu_1 < \mu_2$  $H_1$: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups  
Assumptions  Assumptions of test for correlation  Assumptions  Assumptions  Assumptions  Assumptions  





 
Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  
$W = $ number of difference scores that are larger than 0  Test statistic for testing $H_0$: $\rho = 0$:
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$  $X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.  $z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.  $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects; so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated.  
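As a quick check of the arithmetic, the pooled two-proportion $z$ statistic defined above can be computed in a few lines of Python. This is a minimal sketch; the function name and the example counts are illustrative, not from this page.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z statistic for H0: pi_1 = pi_2, following the formula above."""
    p1 = x1 / n1                  # sample proportion of successes, group 1
    p2 = x2 / n2                  # sample proportion of successes, group 2
    p = (x1 + x2) / (n1 + n2)     # total (pooled) proportion of successes
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical data: 40 of 100 successes in group 1, 30 of 120 in group 2
z = two_proportion_z(40, 100, 30, 120)
```

Swapping the groups flips the sign of $z$, which is why the note above says the one sided alternatives flip along with the numerator.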
Sampling distribution of $W$ if $H_0$ were true  Sampling distribution of $t$ and of $z$ if $H_0$ were true  Sampling distribution of $z$ if $H_0$ were true  Sampling distribution of $X^2$ if $H_0$ were true  Sampling distribution of $z$ if $H_0$ were true  Sampling distribution of $Q$ if $H_0$ were true  
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1 - P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.  Sampling distribution of $t$:
 Approximately the standard normal distribution  Approximately the chi-squared distribution with $J - 1$ degrees of freedom  Standard normal distribution  If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.
For small samples, the exact distribution of $Q$ should be used.  
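Computing the Friedman $Q$ from the formula in the test statistic row takes only a few lines, after which $Q$ is compared with the chi-squared distribution described above. A minimal sketch, assuming no ties; the rank sums and sample sizes are made up for illustration.

```python
def friedman_q(rank_sums, n_blocks):
    """Friedman Q from the rank sums R_i (no ties), per the formula above.

    rank_sums: sum of within-block ranks for each of the k related groups
    n_blocks:  number of blocks N (usually the subjects)
    """
    k = len(rank_sums)
    correction = 3 * n_blocks * (k + 1)
    return 12 / (n_blocks * k * (k + 1)) * sum(r ** 2 for r in rank_sums) - correction

# Hypothetical: k = 3 repeated measurements on N = 10 subjects;
# the rank sums must total N * k * (k + 1) / 2 = 60
q = friedman_q([25, 18, 17], 10)  # compare with chi-squared, df = k - 1 = 2
```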
Significant?  Significant?  Significant?  Significant?  Significant?  Significant?  
If $n$ is small, the table for the binomial distribution should be used: Two sided:
If $n$ is large, the table for standard normal probabilities can be used: Two sided:
 $t$ Test two sided:
 Two sided:

 Two sided:
 If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
 
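Rather than looking up the binomial table mentioned above, the exact two sided $p$ value of the sign test can be computed directly from the Binomial($n$, 0.5) pmf. A minimal sketch; the function name and the data are illustrative.

```python
from math import comb

def sign_test_p_two_sided(w, n):
    """Exact two-sided p-value for the sign test.

    w: number of positive difference scores (the statistic W above)
    n: number of positive plus number of negative difference scores
    Under H0, W ~ Binomial(n, 0.5); double the smaller tail, capped at 1.
    """
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    lower = sum(pmf[: w + 1])   # P(W <= w)
    upper = sum(pmf[w:])        # P(W >= w)
    return min(1.0, 2 * min(lower, upper))

# Hypothetical data: 8 positive and 2 negative difference scores
p = sign_test_p_two_sided(8, 10)
```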
n.a.  Approximate $C\%$ confidence interval for $\rho$  Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$  n.a.  $C\%$ confidence interval for $\mu_1 - \mu_2$  n.a.  
 First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$:
Then transform back to get the approximate $C\%$ confidence interval for $\rho$:
 Regular (large sample):
$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.    
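The confidence interval for $\mu_1 - \mu_2$ above translates directly into code. A sketch with illustrative numbers; the means, standard deviations, and sample sizes are made up.

```python
import math

def two_sample_z_ci(ybar1, ybar2, sigma1, sigma2, n1, n2, z_star=1.96):
    """C% CI for mu_1 - mu_2 with known population standard deviations.

    z_star = 1.96 corresponds to a 95% confidence interval.
    """
    se = math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)
    diff = ybar1 - ybar2
    return diff - z_star * se, diff + z_star * se

# Hypothetical: means 5.0 vs 4.2, known sigmas 2.0 and 2.5, n = 50 and 60
lo, hi = two_sample_z_ci(5.0, 4.2, 2.0, 2.5, 50, 60)
```

If the interval excludes 0, the two sided test at the corresponding significance level rejects $H_0$: $\mu_1 = \mu_2$, which is the sense in which the interval "can also be used as a significance test".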
n.a.  Properties of the Pearson correlation coefficient  n.a.  n.a.  n.a.  n.a.  
 
         
n.a.  n.a.  n.a.  n.a.  Visual representation  n.a.  
          
Equivalent to  Equivalent to  Equivalent to  n.a.  n.a.  n.a.  
Two sided sign test is equivalent to
 OLS regression with one independent variable:
 When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.        
Example context  Example context  Example context  Example context  Example context  Example context  
Do people tend to score higher on mental health after a mindfulness course?  Is there a linear relationship between physical health and mental health?  Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.  Is the proportion of people with a low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?  Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.  Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?  
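For the goodness of fit example above ($\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, $\pi_{high} = 0.2$), the $X^2$ statistic follows directly from the formula in the test statistic row. A minimal sketch; the observed counts are made up for illustration.

```python
def chi_squared_gof(observed, pi):
    """Goodness-of-fit X^2 from observed counts and H0 proportions pi_j."""
    n = sum(observed)
    # expected cell count = N * pi_j, summed over all J cells
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, pi))

# Hypothetical sample of 100 people, so expected counts are 20 / 60 / 20
x2 = chi_squared_gof([15, 65, 20], [0.2, 0.6, 0.2])  # compare with chi-squared, df = J - 1 = 2
```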
SPSS  SPSS  SPSS  SPSS  n.a.  SPSS  
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
 Analyze > Correlate > Bivariate...
 SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
 Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
   Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
 
Jamovi  Jamovi  Jamovi  Jamovi  n.a.  Jamovi  
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
 Regression > Correlation Matrix
 Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
 Frequencies > N Outcomes - $\chi^2$ Goodness of fit
   ANOVA > Repeated Measures ANOVA - Friedman
 
Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  