Sign test - overview
This page offers a structured overview of the sign test, side by side with several related methods for comparison.
Methods compared: sign test | Pearson correlation | $z$ test for the difference between two proportions | Friedman test | logistic regression
Independent variable
- Sign test: 2 paired groups.
- Pearson correlation (variable 1): one quantitative variable of interval or ratio level.
- $z$ test for the difference between two proportions (independent/grouping variable): one categorical variable with 2 independent groups.
- Friedman test (independent/grouping variable): one within subject factor ($\geq 2$ related groups).
- Logistic regression (independent variables): one or more quantitative variables of interval or ratio level and/or one or more categorical variables with independent groups, transformed into code variables.

Dependent variable
- Sign test: one variable of ordinal level.
- Pearson correlation (variable 2): one quantitative variable of interval or ratio level.
- $z$ test for the difference between two proportions: one categorical variable with 2 independent groups.
- Friedman test: one variable of ordinal level.
- Logistic regression: one categorical variable with 2 independent groups.
Null hypothesis
- Sign test: H0: $P(\text{a difference score is positive}) = P(\text{a difference score is negative}) = 0.5$. Equivalently: the population median of the difference scores is equal to zero.
- Pearson correlation: H0: $\rho = \rho_0$. Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level.
- $z$ test for the difference between two proportions: H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
- Friedman test: H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups. Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
- Logistic regression: model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald test and likelihood ratio chi-squared test for an individual coefficient: H0: $\beta_k = 0$.
Alternative hypothesis
- Sign test: H1 two sided: $P(\text{a difference score is positive}) \neq 0.5$. H1 right sided: $P(\text{a difference score is positive}) > 0.5$. H1 left sided: $P(\text{a difference score is positive}) < 0.5$.
- Pearson correlation: H1 two sided: $\rho \neq \rho_0$. H1 right sided: $\rho > \rho_0$. H1 left sided: $\rho < \rho_0$.
- $z$ test for the difference between two proportions: H1 two sided: $\pi_1 \neq \pi_2$. H1 right sided: $\pi_1 > \pi_2$. H1 left sided: $\pi_1 < \pi_2$.
- Friedman test: H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups.
- Logistic regression: model chi-squared test for the complete regression model: H1: not all population regression coefficients are 0. Wald test and likelihood ratio chi-squared test for an individual coefficient: H1: $\beta_k \neq 0$.
Assumptions
- Sign test: the sample of pairs is a simple random sample from the population of pairs; that is, pairs are independent of one another.
- Pearson correlation (assumptions of the test for correlation): in the population, the two variables are jointly normally distributed (bivariate normality); the sample of pairs is a simple random sample from the population of pairs.
- $z$ test for the difference between two proportions: the sample of group 1 and the sample of group 2 are independent simple random samples; both samples are large enough for $z$ to be approximately normally distributed under the null hypothesis.
- Friedman test: the sample of 'blocks' (usually the subjects) is a simple random sample from the population; that is, blocks are independent of one another.
- Logistic regression: in the population, the relationship between the independent variables and the log odds $\ln \big( \frac{\pi}{1 - \pi} \big)$ is linear; the observations are independent of one another.
Test statistic
- Sign test: $W =$ the number of difference scores that are larger than 0.
- Pearson correlation: test statistic for testing H0: $\rho = 0$: $t = \dfrac{r \sqrt{N - 2}}{\sqrt{1 - r^2}}$, where $r$ is the sample correlation and $N$ is the sample size. Test statistic for testing H0: $\rho = \rho_0$ with $\rho_0$ other than 0: $z = \dfrac{r_{Fisher} - \rho_{0, Fisher}}{\frac{1}{\sqrt{N - 3}}}$, where $r_{Fisher} = \frac{1}{2} \ln \big( \frac{1 + r}{1 - r} \big)$ and $\rho_{0, Fisher}$ is the same transformation applied to $\rho_0$.
- $z$ test for the difference between two proportions: $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Big(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Big)}}$. Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$. (A worked example in code follows this list.)
- Friedman test: $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$. Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated.
- Logistic regression: model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, where $D_{null}$ is the deviance of the model without predictors and $D_K$ is the deviance of the model with all $K$ predictors. The Wald statistic can be defined in two ways: as $z = \dfrac{b_k}{SE_{b_k}}$, or as its square $z^2$. Likelihood ratio chi-squared test for an individual $\beta_k$: $X^2 = D_{K - 1} - D_K$, the drop in deviance when the predictor of interest is added to the model.
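To make the formula for $z$ concrete, here is a minimal Python sketch with made-up counts (the numbers are purely illustrative). It computes $z$ by hand and checks the result against `statsmodels`' `proportions_ztest`, which uses the same pooled standard error by default:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: number of successes and sample size per group
x1, n1 = 45, 100   # group 1
x2, n2 = 30, 100   # group 2

p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)   # total (pooled) proportion of successes

# z statistic with the pooled standard error, as in the formula above
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)

# statsmodels applies the same pooled formula by default
z_sm, p_sm = proportions_ztest(count=[x1, x2], nobs=[n1, n2])
print(z_sm, p_sm)   # matches the hand computation
```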
Sampling distribution if H0 were true
- Sign test (sampling distribution of $W$): the exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
- Pearson correlation: sampling distribution of $t$: the $t$ distribution with $N - 2$ degrees of freedom. Sampling distribution of $z$: approximately the standard normal distribution.
- $z$ test for the difference between two proportions (sampling distribution of $z$): approximately the standard normal distribution.
- Friedman test (sampling distribution of $Q$): if the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom. For small samples, the exact distribution of $Q$ should be used. (A numerical check on $Q$ and its chi-squared approximation follows this list.)
- Logistic regression: sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: approximately the chi-squared distribution with $K$ degrees of freedom. Sampling distribution of the Wald statistic: approximately the standard normal distribution if defined as $z$, and approximately the chi-squared distribution with 1 degree of freedom if defined as $z^2$. Sampling distribution of the likelihood ratio $X^2$ for an individual $\beta_k$: approximately the chi-squared distribution with 1 degree of freedom.
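As a numerical check on $Q$ and its chi-squared approximation, the following sketch uses made-up scores for $N = 5$ blocks and $k = 3$ related groups, computes $Q$ from the rank sums, and compares the result with `scipy.stats.friedmanchisquare`:

```python
import numpy as np
from scipy import stats

# Hypothetical data: rows are blocks (subjects), columns are the k related groups
scores = np.array([
    [3.0, 5.0, 4.0],
    [2.0, 4.0, 5.0],
    [1.0, 3.0, 2.0],
    [4.0, 5.0, 3.0],
    [2.0, 5.0, 4.0],
])
N, k = scores.shape

# Rank the scores within each block, then sum the ranks per group
ranks = stats.rankdata(scores, axis=1)
R = ranks.sum(axis=0)

# First the multiplication, then subtract 3 * N * (k + 1), as described above
Q = 12 / (N * k * (k + 1)) * np.sum(R**2) - 3 * N * (k + 1)
p_value = stats.chi2.sf(Q, df=k - 1)
print(Q, p_value)

# scipy runs the same test (with a tie correction when ties occur)
print(stats.friedmanchisquare(*scores.T))
```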
Significant?
- Sign test: if $n$ is small, the table for the binomial distribution should be used. Two sided: reject H0 if the two sided $p$ value, computed from the Binomial($n$, 0.5) distribution, is smaller than the significance level $\alpha$; one sided tests use the corresponding one sided $p$ value. If $n$ is large, the table with standard normal probabilities can be used. Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, where $z^*$ is the critical value for the chosen $\alpha$; equivalently, reject H0 if the $p$ value is smaller than $\alpha$. (A sketch contrasting the exact and approximate approach follows this list.)
- Pearson correlation: $t$ test two sided: reject H0 if $t \leq -t^*$ or $t \geq t^*$, where $t^*$ is the critical value of the $t$ distribution with $N - 2$ degrees of freedom for the chosen $\alpha$; one sided tests use the corresponding one tailed critical value. Equivalently, reject H0 if the $p$ value is smaller than $\alpha$. The $z$ test based on the Fisher transformation uses the standard normal table in the same way.
- $z$ test for the difference between two proportions: two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$; one sided tests use the corresponding one tailed critical value. Equivalently, reject H0 if the $p$ value is smaller than $\alpha$.
- Friedman test: if the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: reject H0 if $X^2 \geq X^{2*}$, where $X^{2*}$ is the critical value for $\alpha$ with $k - 1$ degrees of freedom (the test is always right sided).
- Logistic regression: for the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for an individual $\beta_k$: reject H0 if $X^2 \geq X^{2*}$, the critical chi-squared value for $\alpha$ (right sided). For the Wald test: use the standard normal table if the statistic is defined as $z$, and the chi-squared table with 1 degree of freedom if it is defined as $z^2$.
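A minimal sketch of the sign test decision, with a made-up count of positive differences: the exact binomial $p$ value for small $n$, and the normal approximation described above for large $n$:

```python
from scipy import stats

# Hypothetical sign test data: 16 positive and 4 negative difference scores
# (difference scores of exactly zero are dropped before the test)
W, n = 16, 20

# Exact approach: W ~ Binomial(n, 0.5) under H0
exact = stats.binomtest(W, n, p=0.5, alternative='two-sided')
print(exact.pvalue)

# Large-sample approach: standardize W and use the standard normal distribution
z = (W - n * 0.5) / (n * 0.5 * (1 - 0.5)) ** 0.5
p_approx = 2 * stats.norm.sf(abs(z))
print(z, p_approx)
```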
Approximate $C\%$ confidence interval
- Sign test: n.a.
- Pearson correlation (approximate $C\%$ confidence interval for $\rho$): first compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$: $r_{Fisher} \pm z^* \times \dfrac{1}{\sqrt{N - 3}}$, where $r_{Fisher} = \frac{1}{2} \ln \big( \frac{1 + r}{1 - r} \big)$ and $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$. Then transform both endpoints back to get the approximate $C\%$ confidence interval for $\rho$: $\rho = \dfrac{e^{2 \rho_{Fisher}} - 1}{e^{2 \rho_{Fisher}} + 1}$. (A sketch of this two-step procedure follows this list.)
- $z$ test for the difference between two proportions (approximate $C\%$ confidence interval for $\pi_1 - \pi_2$): regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$, where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$.
- Friedman test: n.a.
- Logistic regression (Wald-type approximate $C\%$ confidence interval for $\beta_k$): $b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).
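The two-step interval for $\rho$ is easy to script. A sketch with a made-up sample correlation, using the fact that $r_{Fisher} = \operatorname{arctanh}(r)$ and the back-transformation is $\tanh$:

```python
import numpy as np
from scipy import stats

r, N, C = 0.45, 50, 95                   # hypothetical sample correlation, sample size
z_star = stats.norm.ppf(0.5 + C / 200)   # e.g. 1.96 for C = 95

# Step 1: interval for rho_Fisher = arctanh(r), standard error 1 / sqrt(N - 3)
r_fisher = np.arctanh(r)
half_width = z_star / np.sqrt(N - 3)
lo_f, hi_f = r_fisher - half_width, r_fisher + half_width

# Step 2: transform both endpoints back to the correlation scale with tanh
lo, hi = np.tanh(lo_f), np.tanh(hi_f)
print(lo, hi)
```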
Properties / goodness of fit
- Sign test: n.a.
- Pearson correlation (properties of the Pearson correlation coefficient): the coefficient is always between $-1$ and $1$; it measures the strength and direction of the linear relationship only; it is not affected by linear transformations of the variables (except possibly for a change of sign); it does not say anything about causality; and it is sensitive to outliers.
- $z$ test for the difference between two proportions: n.a.
- Friedman test: n.a.
- Logistic regression (goodness of fit measure $R^2_L$): $R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$. There are several other goodness of fit measures in logistic regression; there is no single agreed upon measure. (A sketch computing $R^2_L$ from the deviances follows this list.)
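A sketch with simulated data showing how $R^2_L$ follows from the two deviances. In `statsmodels`, the deviance is $-2$ times the log-likelihood, so $R^2_L$ coincides with what the library reports as McFadden's pseudo $R^2$ (`prsquared`):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: one predictor, binary outcome
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(int)

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)

D_null = -2 * fit.llnull   # deviance of the intercept-only model
D_K = -2 * fit.llf         # deviance of the fitted model
R2_L = (D_null - D_K) / D_null
print(R2_L, fit.prsquared)  # identical by construction
```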
Equivalent to
- Sign test: the two sided sign test is equivalent to the Friedman test with $k = 2$ related groups; the two sided $p$ values are the same.
- Pearson correlation: OLS regression with one independent variable: $b_1 = r \times \dfrac{s_y}{s_x}$, and the significance test for $b_1$ is equivalent to the significance test for $r$. (A numerical check follows this list.)
- $z$ test for the difference between two proportions: when testing two sided: the chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.
- Friedman test: n.a.
- Logistic regression: n.a.
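The equivalence between $r$ and the OLS slope is easy to verify numerically; a sketch with simulated data, where $s_x$ and $s_y$ are the sample standard deviations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.8 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]
b1 = r * y.std(ddof=1) / x.std(ddof=1)   # b1 = r * s_y / s_x

res = stats.linregress(x, y)
print(b1, res.slope)    # identical slopes
print(r, res.rvalue)    # identical correlations
```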
Example context
- Sign test: do people tend to score higher on mental health after a mindfulness course?
- Pearson correlation: is there a linear relationship between physical health and mental health?
- $z$ test for the difference between two proportions: is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
- Friedman test: is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
- Logistic regression: can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
SPSS
- Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Pearson correlation: Analyze > Correlate > Bivariate...
- $z$ test for the difference between two proportions: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs...
- Friedman test: Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
- Logistic regression: Analyze > Regression > Binary Logistic...
Jamovi
- Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman
- Pearson correlation: Regression > Correlation Matrix
- $z$ test for the difference between two proportions: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association
- Friedman test: ANOVA > Repeated Measures ANOVA - Friedman
- Logistic regression: Regression > 2 Outcomes - Binomial