Cochran's Q test - overview

This page offers a structured overview of the selected methods, presented side by side for comparison.

Methods compared:
  • Cochran's Q test
  • Paired sample $t$ test
  • $z$ test for the difference between two proportions
Independent/grouping variable
  • Cochran's Q test: one within subject factor ($\geq 2$ related groups)
  • Paired sample $t$ test: 2 paired groups
  • $z$ test: one categorical with 2 independent groups

Dependent variable
  • Cochran's Q test: one categorical with 2 independent groups
  • Paired sample $t$ test: one quantitative of interval or ratio level
  • $z$ test: one categorical with 2 independent groups
Null hypothesis

Cochran's Q test: H0: $\pi_1 = \pi_2 = \ldots = \pi_I$

Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I.$
Paired sample $t$ test: H0: $\mu = \mu_0$

Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.
$z$ test: H0: $\pi_1 = \pi_2$

Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
Alternative hypothesis

Cochran's Q test: H1: not all population proportions are equal

Paired sample $t$ test:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
$z$ test:
H1 two sided: $\pi_1 \neq \pi_2$
H1 right sided: $\pi_1 > \pi_2$
H1 left sided: $\pi_1 < \pi_2$
Assumptions

Cochran's Q test:
  • Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Paired sample $t$ test:
  • Difference scores are normally distributed in the population
  • Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
$z$ test:
  • Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
    • Significance test: number of successes and number of failures are each 5 or more in both sample groups
    • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
    • Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic

Cochran's Q test: if a failure is scored as 0 and a success is scored as 1:

$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$

Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.

Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
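The computation above can be sketched in plain Python. This is a minimal illustration with made-up 0/1 data; the helper name `cochrans_q` is our own (statistical libraries such as statsmodels provide a comparable routine).

```python
def cochrans_q(scores):
    """Cochran's Q for a list of blocks (subjects), each a list of k 0/1 scores.

    Blocks with the same score in all k groups are excluded first, as the
    text above prescribes: they carry no information about group differences.
    """
    k = len(scores[0])
    blocks = [row for row in scores if len(set(row)) > 1]  # exclude constant blocks
    group_totals = [sum(col) for col in zip(*blocks)]      # column (group) sums
    grand_total = sum(group_totals)
    numerator = sum((g - grand_total / k) ** 2 for g in group_totals)
    denominator = sum(sum(row) * (k - sum(row)) for row in blocks)
    return k * (k - 1) * numerator / denominator

# Made-up data: 6 subjects (rows) scored on k = 3 tasks (columns)
data = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],  # excluded: same score in all groups
    [0, 0, 0],  # excluded: same score in all groups
    [1, 0, 1],
    [1, 1, 0],
]
Q = cochrans_q(data)  # 3.5 for this data, to be compared with chi-squared, df = 2
```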
Paired sample $t$ test:

$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores).

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
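As a sketch, the $t$ statistic can be computed from the raw pairs with the standard library. The before/after scores are made up and the helper name `paired_t` is our own.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after, mu0=0.0):
    """t statistic for a paired sample t test, via the difference scores."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)  # standard error of the mean difference
    return (mean(diffs) - mu0) / se

# Made-up mental health scores for 5 subjects, before and after an intervention
t = paired_t([5, 6, 7, 8, 9], [7, 6, 9, 8, 12])  # mean difference 1.4, se 0.6
```

Here $t \approx 2.33$ with $N - 1 = 4$ degrees of freedom; a library such as scipy (`scipy.stats.ttest_rel`) would return the same statistic together with a $p$ value.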
$z$ test:

$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
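A stdlib sketch of the $z$ statistic with the pooled proportion, using made-up counts (the helper name `two_prop_z` is our own):

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions (pooled p)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion of successes
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Made-up data: 40 smokers out of 100 men, 25 smokers out of 100 women
z = two_prop_z(40, 100, 25, 100)  # ≈ 2.26, beyond 1.96, so two sided p < .05
```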
Sampling distribution if H0 were true

Cochran's Q test: if the number of blocks (usually the number of subjects) is large, $Q$ approximately follows the chi-squared distribution with $k - 1$ degrees of freedom.

Paired sample $t$ test: $t$ follows the $t$ distribution with $N - 1$ degrees of freedom.

$z$ test: $z$ approximately follows the standard normal distribution.
Significant?

Cochran's Q test: if the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
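For $k - 1 = 2$ degrees of freedom (three related groups), the chi-squared right-tail probability has the closed form $e^{-Q/2}$, so the $p$ value check can be done without tables; for other degrees of freedom one would use e.g. `scipy.stats.chi2.sf`. The values of $Q$ and $\alpha$ below are illustrative.

```python
from math import exp

# p value for the chi-squared distribution with df = 2: P(X^2 >= Q) = exp(-Q/2)
Q, alpha = 3.5, 0.05           # illustrative values
p_value = exp(-Q / 2)          # ≈ 0.174
significant = p_value <= alpha  # False here: H0 is not rejected
```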
Paired sample $t$ test:
  • Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^*$, or find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^*$, or find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^*$, or find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

$z$ test:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Confidence interval

Cochran's Q test: n.a.

Paired sample $t$ test: $C\%$ confidence interval for $\mu$:

$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu$ can also be used as significance test.
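A sketch of the interval, reusing the worked critical value from the text ($t^* = 2.086$ for df $= 20$, i.e. $N = 21$); the sample mean and standard deviation are illustrative, and in practice $t^*$ would come from e.g. `scipy.stats.t.ppf(0.975, 20)`.

```python
from math import sqrt

# 95% confidence interval for mu, illustrative summary statistics:
# mean difference 1.4, s = 3.0, N = 21, so df = 20 and t* = 2.086
ybar, s, N, t_star = 1.4, 3.0, 21, 2.086
margin = t_star * s / sqrt(N)
ci = (ybar - margin, ybar + margin)  # ≈ (0.034, 2.766)
```

Since this interval excludes 0, the corresponding two sided test of H0: $\mu = 0$ would be significant at $\alpha = .05$, illustrating the duality mentioned above.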
$z$ test: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$.

Regular (large sample):
  • $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
    where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
  • $(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
    where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
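Both intervals can be sketched in one stdlib helper (the name `prop_diff_ci` is our own; counts are made up):

```python
from math import sqrt

def prop_diff_ci(x1, n1, x2, n2, z_star=1.96, plus_four=False):
    """Regular or plus-four confidence interval for pi_1 - pi_2."""
    if plus_four:
        # plus four method: add one success and one failure to each group
        x1, n1, x2, n2 = x1 + 1, n1 + 2, x2 + 1, n2 + 2
    p1, p2 = x1 / n1, x2 / n2
    margin = z_star * sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 - margin, p1 - p2 + margin)

# Made-up data: 40/100 successes in group 1, 25/100 in group 2
lo, hi = prop_diff_ci(40, 100, 25, 100)                   # regular 95% interval
lo4, hi4 = prop_diff_ci(40, 100, 25, 100, plus_four=True)  # plus four version
```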
Effect size

Cochran's Q test: n.a.

Paired sample $t$ test: Cohen's $d$, the standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$

$z$ test: n.a.
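The Cohen's $d$ formula above is a one-liner; the summary statistics here are illustrative.

```python
# Cohen's d for the paired sample t test: standardized mean difference.
# Illustrative values: mean difference 1.4, s = 3.0, mu0 = 0.
ybar, s, mu0 = 1.4, 3.0, 0.0
d = (ybar - mu0) / s  # ≈ 0.47: ybar is about half a standard deviation above mu0
```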
Visual representation

Cochran's Q test: n.a.

Paired sample $t$ test: [figure of the paired sample $t$ test omitted]

$z$ test: n.a.
Equivalent to

Cochran's Q test: Friedman test, with a categorical dependent variable consisting of two independent groups.
Paired sample $t$ test:
  • One sample $t$ test on the difference scores.
  • Repeated measures ANOVA with one dichotomous within subjects factor.
$z$ test: when testing two sided, the chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.
Example context

Cochran's Q test: subjects perform three different tasks, which they can perform either correctly or incorrectly. Is there a difference in task performance between the three tasks?

Paired sample $t$ test: is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?

$z$ test: is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
SPSS

Cochran's Q test: Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
  • Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
  • Under Test Type, select Cochran's Q test
Paired sample $t$ test: Analyze > Compare Means > Paired-Samples T Test...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
$z$ test: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Analyze > Descriptive Statistics > Crosstabs...
  • Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
  • Click the Statistics... button, and click on the square in front of Chi-square
  • Continue and click OK
Jamovi

Cochran's Q test: Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Paired sample $t$ test: T-Tests > Paired Samples T-Test
  • Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
  • Under Hypothesis, select your alternative hypothesis
$z$ test: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Frequencies > Independent Samples - $\chi^2$ test of association
  • Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns