Cochran's Q test - overview

This page offers a structured, side-by-side overview of the four methods listed below.

Cochran's Q test
Friedman test
$z$ test for a single proportion
Wilcoxon signed-rank test
Independent/grouping variable

Cochran's Q test: one within subject factor ($\geq 2$ related groups)
Friedman test: one within subject factor ($\geq 2$ related groups)
$z$ test for a single proportion: none
Wilcoxon signed-rank test: 2 paired groups

Dependent variable

Cochran's Q test: one categorical with 2 independent groups
Friedman test: one of ordinal level
$z$ test for a single proportion: one categorical with 2 independent groups
Wilcoxon signed-rank test: one quantitative of interval or ratio level
Null hypothesis

Cochran's Q test
H0: $\pi_1 = \pi_2 = \ldots = \pi_I$

Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I$.

Friedman test
H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups

Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.

$z$ test for a single proportion
H0: $\pi = \pi_0$

Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of 'successes' according to the null hypothesis.

Wilcoxon signed-rank test
H0: $m = 0$

Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair.

Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
Alternative hypothesis

Cochran's Q test
H1: not all population proportions are equal

Friedman test
H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups

$z$ test for a single proportion
H1 two sided: $\pi \neq \pi_0$
H1 right sided: $\pi > \pi_0$
H1 left sided: $\pi < \pi_0$

Wilcoxon signed-rank test
H1 two sided: $m \neq 0$
H1 right sided: $m > 0$
H1 left sided: $m < 0$
Assumptions

Cochran's Q test:
  • Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another

Friedman test:
  • Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another

$z$ test for a single proportion:
  • Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
    • Significance test: $N \times \pi_0$ and $N \times (1 - \pi_0)$ are each larger than 10
    • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures in sample are each 15 or more
    • Plus four 90%, 95%, or 99% confidence interval: total sample size is 10 or more
  • Sample is a simple random sample from the population. That is, observations are independent of one another
If the sample size is too small for $z$ to be approximately normally distributed, the binomial test for a single proportion should be used.

Wilcoxon signed-rank test:
  • The population distribution of the difference scores is symmetric
  • Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
Note: sometimes it is considered sufficient for the data to be measured on an ordinal scale, rather than an interval or ratio scale. However, since the test statistic is based on ranked difference scores, we need to know whether a change in scores from, say, 6 to 7 is larger than, smaller than, or equal to a change from 5 to 6. This is impossible to know for ordinal scales, since for these scales the size of the difference between values is meaningless.
Test statistic

Cochran's Q test:
If a failure is scored as 0 and a success is scored as 1:

$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$

Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.

Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
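To make the bookkeeping concrete, here is a minimal Python sketch of this computation; the function name and the example data are made up for illustration, and numpy is assumed to be available.

```python
import numpy as np

def cochrans_q(data):
    """Cochran's Q for a blocks-by-groups matrix of 0/1 scores.

    data: one row per block (subject), one column per related group;
    0 = failure, 1 = success.
    """
    data = np.asarray(data)
    # Exclude blocks with equal scores in all k groups (all 0s or all 1s)
    keep = ~np.all(data == data[:, [0]], axis=1)
    data = data[keep]

    k = data.shape[1]                  # number of related groups
    group_totals = data.sum(axis=0)    # sum of scores per group
    block_totals = data.sum(axis=1)    # sum of scores per block
    grand_total = data.sum()

    numerator = np.sum((group_totals - grand_total / k) ** 2)
    denominator = np.sum(block_totals * (k - block_totals))
    return k * (k - 1) * numerator / denominator

# Example: 6 subjects, 3 tasks scored correct (1) / incorrect (0)
scores = [[1, 1, 0],
          [1, 0, 0],
          [1, 1, 1],   # equal scores in all groups: excluded
          [0, 1, 0],
          [1, 1, 0],
          [1, 0, 1]]
print(cochrans_q(scores))   # Q = 2.8 for these made-up data
```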
Friedman test:

$Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$

Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.

Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.

Note: if ties are present in the data, the formula for $Q$ is more complicated.
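A minimal Python sketch of this computation, assuming no ties and made-up example data; for reference, scipy also ships scipy.stats.friedmanchisquare, which computes the same test and handles ties.

```python
import numpy as np
from scipy import stats

def friedman_q(data):
    """Friedman Q for a blocks-by-groups matrix of scores (no ties)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape                    # N blocks, k related groups
    # Rank the k scores within each block (1 = smallest)
    ranks = stats.rankdata(data, axis=1)
    rank_sums = ranks.sum(axis=0)        # R_i per group
    return 12 / (n * k * (k + 1)) * np.sum(rank_sums ** 2) - 3 * n * (k + 1)

# Example: 5 subjects measured at 3 time points
scores = np.array([[3, 5, 4],
                   [2, 4, 5],
                   [1, 3, 2],
                   [4, 5, 2],
                   [2, 5, 3]])
print(friedman_q(scores))                # Q = 6.4 for these made-up data
# scipy's built-in version (one array per group):
print(stats.friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2]))
```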
$z$ test for a single proportion:

$z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$
Here $p$ is the sample proportion of successes: $\dfrac{X}{N}$ (with $X$ the observed number of successes), $N$ is the sample size, and $\pi_0$ is the population proportion of successes according to the null hypothesis.
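As an illustration, a short Python sketch with made-up counts (scipy assumed available):

```python
import numpy as np
from scipy import stats

# Hypothetical example: X = 28 successes out of N = 100, H0: pi_0 = 0.2
X, N, pi_0 = 28, 100, 0.2

p = X / N                                   # sample proportion
z = (p - pi_0) / np.sqrt(pi_0 * (1 - pi_0) / N)

# p values under the standard normal distribution
p_two_sided = 2 * stats.norm.sf(abs(z))     # two sided
p_right = stats.norm.sf(z)                  # right sided
p_left = stats.norm.cdf(z)                  # left sided
print(z, p_two_sided)                       # z = 2.0, p ~ 0.0455
```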
Wilcoxon signed-rank test:
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below (a small code sketch follows after the list):
  1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score}_2 - \mbox{score}_1)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
  2. For each subject, compute the absolute value of the difference score $|\mbox{score}_2 - \mbox{score}_1|$.
  3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
  4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

  • $W_1 = \sum\, R_d^{+}$
    or
    $W_1 = \sum\, R_d^{-}$
    That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
    • tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
    • If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
  • $W_2 = \sum\, \mbox{sign}_d \times R_d$
    That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
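The code sketch referenced above: a minimal Python version of steps 1-4 and of both test statistics, with made-up paired scores. The built-in scipy.stats.wilcoxon is shown for comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores (e.g. before/after an intervention)
score_1 = np.array([5, 7, 3, 4, 6, 8, 5])
score_2 = np.array([6, 7, 5, 2, 8, 9, 4])

d = score_2 - score_1
d = d[d != 0]                       # step 3: drop zero differences
n_r = len(d)

signs = np.sign(d)                  # step 1
ranks = stats.rankdata(np.abs(d))   # steps 2 and 4: rank |d|, ties averaged

w1_plus = ranks[signs > 0].sum()    # W1 = sum of ranks of positive differences
w2 = np.sum(signs * ranks)          # W2 = sum of signed ranks
print(n_r, w1_plus, w2)             # 6, 14.0, 7.0 for these made-up data

# scipy's built-in test; for the default two sided alternative its
# statistic is the smaller of the two rank sums
print(stats.wilcoxon(score_2, score_1))
```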
Sampling distribution of the test statistic if H0 were true

Cochran's Q test (sampling distribution of $Q$):
If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

Friedman test (sampling distribution of $Q$):
If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

For small samples, the exact distribution of $Q$ should be used.

$z$ test for a single proportion (sampling distribution of $z$):
Approximately the standard normal distribution.

Wilcoxon signed-rank test (sampling distributions of $W_1$ and $W_2$):

Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
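Continuing the hypothetical Wilcoxon sketch above, the normal approximation without tie correction looks as follows; note that $N_r = 6$ is far too small for this approximation in practice, so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

n_r, w1_plus = 6, 14                       # from the sketch above

mu_w1 = n_r * (n_r + 1) / 4
sigma_w1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)
z = (w1_plus - mu_w1) / sigma_w1           # no tie correction applied
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```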
Significant?

Cochran's Q test:
If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Friedman test:
If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
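For both tests, the critical value and the $p$ value can be read from the chi-squared distribution, e.g. in Python (the values of $Q$, $k$, and $\alpha$ below are made up):

```python
from scipy import stats

# Hypothetical: Q = 7.6 with k = 3 related groups, alpha = .05
Q, k, alpha = 7.6, 3, 0.05
df = k - 1

crit = stats.chi2.ppf(1 - alpha, df)    # critical X^2 value (5.99)
p_value = stats.chi2.sf(Q, df)          # right tail p value
print(crit, p_value, Q >= crit, p_value <= alpha)
```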
$z$ test for a single proportion:
For large samples, the table for standard normal probabilities can be used:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

Wilcoxon signed-rank test:
For large samples, the standardized test statistic $z$ (defined above) follows approximately the standard normal distribution, so the same two sided, right sided, and left sided decision rules apply. For small samples, tables with critical values for $W_1$ should be used (usually based on the smaller of $\sum R_d^{+}$ and $\sum R_d^{-}$).
Approximate $C\%$ confidence interval for $\pi$

This row applies to the $z$ test for a single proportion only (n.a. for the other three methods).

Regular (large sample):
  • $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$
    where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
  • $p_{plus} \pm z^* \times \sqrt{\dfrac{p_{plus}(1 - p_{plus})}{N + 4}}$
    where $p_{plus} = \dfrac{X + 2}{N + 4}$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
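A minimal Python sketch of both intervals, with made-up counts:

```python
import numpy as np
from scipy import stats

# Hypothetical counts: X = 28 successes out of N = 100, C = 95% confidence
X, N, C = 28, 100, 95
z_star = stats.norm.ppf(0.5 + C / 200)   # area C/100 between -z* and z*

# Regular (large sample) interval
p = X / N
me = z_star * np.sqrt(p * (1 - p) / N)
print(p - me, p + me)

# Plus four interval
p_plus = (X + 2) / (N + 4)
me_plus = z_star * np.sqrt(p_plus * (1 - p_plus) / (N + 4))
print(p_plus - me_plus, p_plus + me_plus)
```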
Equivalent to

Cochran's Q test: the Friedman test, with a categorical dependent variable consisting of two independent groups. (n.a. for the Friedman test and the Wilcoxon signed-rank test.)

$z$ test for a single proportion:
  • When testing two sided: goodness of fit test, with a categorical variable with 2 levels.
  • When $N$ is large, the $p$ value from the $z$ test for a single proportion approaches the $p$ value from the binomial test for a single proportion. The $z$ test for a single proportion is just a large sample approximation of the binomial test for a single proportion.
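A small sketch illustrating this approximation numerically, with the same made-up counts as before (scipy.stats.binomtest requires scipy >= 1.7):

```python
import numpy as np
from scipy import stats

# Hypothetical counts: X = 28 out of N = 100, H0: pi_0 = 0.2
X, N, pi_0 = 28, 100, 0.2

z = (X / N - pi_0) / np.sqrt(pi_0 * (1 - pi_0) / N)
p_z = 2 * stats.norm.sf(abs(z))                  # normal approximation
p_binom = stats.binomtest(X, N, pi_0).pvalue     # exact binomial test
print(p_z, p_binom)   # the two p values grow closer as N increases
```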
Example context

Cochran's Q test: Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?

Friedman test: Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?

$z$ test for a single proportion: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? Use the normal approximation for the sampling distribution of the test statistic.

Wilcoxon signed-rank test: Is the median of the differences between the mental health scores before and after an intervention different from 0?
SPSS

Cochran's Q test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
  • Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
  • Under Test Type, select Cochran's Q test
Friedman test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
  • Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
  • Under Test Type, select the Friedman test
$z$ test for a single proportion:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
If computation time allows, SPSS will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution
Wilcoxon signed-rank test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Wilcoxon test
Jamovi

Cochran's Q test:
Jamovi does not have a specific option for Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from Cochran's Q test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Friedman test:
ANOVA > Repeated Measures ANOVA - Friedman
  • Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
$z$ test for a single proportion:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
Jamovi will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution
Wilcoxon signed-rank test:
T-Tests > Paired Samples T-Test
  • Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
  • Under Tests, select Wilcoxon rank
  • Under Hypothesis, select your alternative hypothesis