Cochran's Q test - overview

This page offers a structured overview of Cochran's Q test, compared side by side with four related methods.

Methods compared:
  • Cochran's Q test
  • One sample $z$ test for the mean
  • $z$ test for the difference between two proportions
  • One sample $t$ test for the mean
  • Two way ANOVA
Independent/grouping variable(s)
  • Cochran's Q test: One within subject factor ($\geq 2$ related groups)
  • One sample $z$ test for the mean: None
  • $z$ test for the difference between two proportions: One categorical with 2 independent groups
  • One sample $t$ test for the mean: None
  • Two way ANOVA: Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)

Dependent variable
  • Cochran's Q test: One categorical with 2 independent groups
  • One sample $z$ test for the mean: One quantitative of interval or ratio level
  • $z$ test for the difference between two proportions: One categorical with 2 independent groups
  • One sample $t$ test for the mean: One quantitative of interval or ratio level
  • Two way ANOVA: One quantitative of interval or ratio level
Null hypothesis

Cochran's Q test:
H0: $\pi_1 = \pi_2 = \ldots = \pi_I$

Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I.$
One sample $z$ test for the mean:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
$z$ test for the difference between two proportions:
H0: $\pi_1 = \pi_2$

Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
One sample $t$ test for the mean:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
Two way ANOVA:
ANOVA $F$ tests:
  • H0 for main and interaction effects together (model): no main effects and interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
Alternative hypothesis

Cochran's Q test:
H1: not all population proportions are equal

One sample $z$ test for the mean:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
$z$ test for the difference between two proportions:
H1 two sided: $\pi_1 \neq \pi_2$
H1 right sided: $\pi_1 > \pi_2$
H1 left sided: $\pi_1 < \pi_2$
One sample $t$ test for the mean:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
Two way ANOVA:
ANOVA $F$ tests:
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B
Assumptions

Cochran's Q test:
  • Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another

One sample $z$ test for the mean:
  • Scores are normally distributed in the population
  • Population standard deviation $\sigma$ is known
  • Sample is a simple random sample from the population. That is, observations are independent of one another

$z$ test for the difference between two proportions:
  • Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
    • Significance test: number of successes and number of failures are each 5 or more in both sample groups
    • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
    • Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

One sample $t$ test for the mean:
  • Scores are normally distributed in the population
  • Sample is a simple random sample from the population. That is, observations are independent of one another

Two way ANOVA:
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Test statistic

Cochran's Q test:
If a failure is scored as 0 and a success is scored as 1:

$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$

Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.

Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
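
To make the formula concrete, here is a minimal Python sketch (numpy and scipy assumed) that implements it directly; the function name and the example scores are hypothetical. statsmodels also ships this test (cochrans_q in statsmodels.stats.contingency_tables), if you prefer a library routine.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(scores):
    """Cochran's Q for an (n blocks x k groups) array of 0/1 scores."""
    x = np.asarray(scores)
    k = x.shape[1]
    # Exclude blocks with equal scores in all k groups (all 0 or all 1),
    # as described above
    block_totals = x.sum(axis=1)
    keep = (block_totals > 0) & (block_totals < k)
    x, block_totals = x[keep], block_totals[keep]
    group_totals = x.sum(axis=0)
    grand_total = x.sum()
    q = k * (k - 1) * np.sum((group_totals - grand_total / k) ** 2) \
        / np.sum(block_totals * (k - block_totals))
    # p value from the chi-squared approximation with k - 1 df (see below)
    return q, chi2.sf(q, df=k - 1)

# Hypothetical data: 5 subjects x 3 tasks, 1 = correct, 0 = incorrect
scores = [[1, 1, 0],
          [1, 0, 0],
          [1, 1, 1],   # constant block, excluded automatically
          [1, 0, 1],
          [0, 0, 0]]   # constant block, excluded automatically
print(cochrans_q(scores))
```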
One sample $z$ test for the mean:
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
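
A minimal sketch of this computation in Python (numpy and scipy assumed; the helper name and the scores are hypothetical, while $\mu_0 = 50$ and $\sigma = 3$ echo the example context further down):

```python
import numpy as np
from scipy.stats import norm

def one_sample_z(y, mu0, sigma):
    """z statistic plus two sided, right sided, and left sided p values."""
    y = np.asarray(y, dtype=float)
    z = (y.mean() - mu0) / (sigma / np.sqrt(len(y)))
    return z, 2 * norm.sf(abs(z)), norm.sf(z), norm.cdf(z)

# Hypothetical mental health scores, tested against mu0 = 50 with sigma = 3
y = [52, 49, 55, 51, 48, 53, 50, 54]
print(one_sample_z(y, mu0=50, sigma=3))
```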
$z$ test for the difference between two proportions:
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
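
A sketch of the same computation in Python (numpy and scipy assumed; the function name and the counts are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: pi_1 = pi_2, using the pooled proportion p."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # total proportion of successes
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))      # two sided p value

# Hypothetical counts: 40/120 male smokers vs 30/130 female smokers
print(two_proportion_z(40, 120, 30, 130))
```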
One sample $t$ test for the mean:
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
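
A sketch in Python (numpy and scipy assumed; the scores are hypothetical), computing $t$ by hand and via scipy's built-in ttest_1samp:

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores, tested against mu0 = 50
y = np.array([52, 49, 55, 51, 48, 53, 50, 54])

# By hand, following the formula above; std(ddof=1) is the sample sd s
t = (y.mean() - 50) / (y.std(ddof=1) / np.sqrt(len(y)))

# Same result via scipy; the two sided p value is the default
res = stats.ttest_1samp(y, popmean=50)
print(t, res.statistic, res.pvalue)
```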
Two way ANOVA:
For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
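
In practice these $F$ tests come from software. Below is a minimal sketch with Python's statsmodels (the data frame, the column names score, econ, and gender, and all values are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: score quantitative, econ and gender categorical
df = pd.DataFrame({
    'score': [12, 14, 11, 15, 9, 10, 13, 16, 8, 11, 14, 12],
    'econ': ['low', 'low', 'moderate', 'moderate', 'high', 'high'] * 2,
    'gender': ['m', 'f'] * 6,
})

# C(econ) * C(gender) expands to both main effects plus their interaction
fit = smf.ols('score ~ C(econ) * C(gender)', data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # F and p value per effect
```

The mean square error is sum_sq divided by df in the Residual row of this table; its square root is the pooled standard deviation $s_p$ given next.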
Pooled standard deviation

Two way ANOVA (n.a. for the other methods):
$ \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $
Sampling distribution of the test statistic if H0 were true

Cochran's Q test (sampling distribution of $Q$):
If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom

One sample $z$ test for the mean (sampling distribution of $z$):
Standard normal distribution

$z$ test for the difference between two proportions (sampling distribution of $z$):
Approximately the standard normal distribution

One sample $t$ test for the mean (sampling distribution of $t$):
$t$ distribution with $N - 1$ degrees of freedom

Two way ANOVA (sampling distribution of $F$):
For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
Significant?

Cochran's Q test:
If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
One sample $z$ test for the mean and $z$ test for the difference between two proportions:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

One sample $t$ test for the mean:
  • Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^*$, or find the two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^*$, or find the right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^*$, or find the left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

Two way ANOVA:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
Confidence interval

Cochran's Q test: n.a.

One sample $z$ test for the mean ($C\%$ confidence interval for $\mu$):
$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
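
A minimal sketch of this interval in Python (numpy and scipy assumed; the sample mean and $N$ are hypothetical, $\sigma = 3$ echoes the example context below):

```python
import numpy as np
from scipy.stats import norm

ybar, sigma, N = 51.2, 3.0, 40      # hypothetical sample mean, known sigma, n
zstar = norm.ppf(0.975)             # critical value for a 95% CI, ~1.96
half_width = zstar * sigma / np.sqrt(N)
print(ybar - half_width, ybar + half_width)
```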
$z$ test for the difference between two proportions (approximate $C\%$ confidence interval for $\pi_1 - \pi_2$):
Regular (large sample):
  • $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
    where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
  • $(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
    where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
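
A sketch of the plus four interval in Python (numpy and scipy assumed; the function name is hypothetical and the counts reuse the hypothetical smoking example above):

```python
import numpy as np
from scipy.stats import norm

def plus_four_ci(x1, n1, x2, n2, conf=0.95):
    """Plus four confidence interval for pi_1 - pi_2."""
    p1 = (x1 + 1) / (n1 + 2)        # p_1.plus
    p2 = (x2 + 1) / (n2 + 2)        # p_2.plus
    se = np.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    zstar = norm.ppf((1 + conf) / 2)
    d = p1 - p2
    return d - zstar * se, d + zstar * se

print(plus_four_ci(40, 120, 30, 130))
```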
One sample $t$ test for the mean ($C\%$ confidence interval for $\mu$):
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu$ can also be used as a significance test.
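
The same interval in Python (numpy and scipy assumed; the scores are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical scores; 95% CI for mu based on the t distribution
y = np.array([52, 49, 55, 51, 48, 53, 50, 54])
tstar = stats.t.ppf(0.975, df=len(y) - 1)       # critical value t*
half_width = tstar * y.std(ddof=1) / np.sqrt(len(y))
print(y.mean() - half_width, y.mean() + half_width)
```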
Two way ANOVA: n.a.

Effect size

Cochran's Q test: n.a.

One sample $z$ test for the mean:
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
$z$ test for the difference between two proportions: n.a.

One sample $t$ test for the mean:
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$
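
Both variants of Cohen's $d$ are one-liners in Python (the scores are hypothetical; $\sigma = 3$ and $\mu_0 = 50$ echo the example context below):

```python
import numpy as np

y = np.array([52, 49, 55, 51, 48, 53, 50, 54])  # hypothetical scores
mu0 = 50

d_z = (y.mean() - mu0) / 3.0            # z test version, with known sigma = 3
d_t = (y.mean() - mu0) / y.std(ddof=1)  # t test version, with sample sd s
print(d_z, d_t)
```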
Two way ANOVA:
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\eta^2$:
    Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to:
    $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

  • Proportion variance explained $\eta^2_{partial}$: $$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
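
Given a fitted two way ANOVA, these proportions can be computed directly from the ANOVA table. A sketch continuing the hypothetical statsmodels example above (a balanced design is assumed, so that the sum_sq column sums to the total sum of squares):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is the hypothetical data frame from the two way ANOVA sketch above
aov = sm.stats.anova_lm(smf.ols('score ~ C(econ) * C(gender)', data=df).fit(),
                        typ=2)

ss, dof = aov['sum_sq'], aov['df']
ss_total = ss.sum()                     # valid for balanced designs
ms_error = ss['Residual'] / dof['Residual']

r_squared = (ss_total - ss['Residual']) / ss_total      # model R^2
eta2 = ss / ss_total                                    # one value per effect
omega2 = (ss - dof * ms_error) / (ss_total + ms_error)  # bias-corrected
partial_eta2 = ss / (ss + ss['Residual'])
# The Residual row in the per-effect outputs is meaningless; ignore it
print(r_squared, eta2, omega2, partial_eta2, sep='\n')
```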
Visual representation

One sample $z$ test for the mean:
[figure: one sample $z$ test]

One sample $t$ test for the mean:
[figure: one sample $t$ test]

n.a. for the other methods.
ANOVA table

Two way ANOVA (n.a. for the other methods):
[figure: two way ANOVA table]
Equivalent to

Cochran's Q test:
Friedman test, with a categorical dependent variable consisting of two independent groups.

One sample $z$ test for the mean: n.a.

$z$ test for the difference between two proportions:
When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.

One sample $t$ test for the mean: n.a.

Two way ANOVA:
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ code variables.
Example context

Cochran's Q test:
Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?

One sample $z$ test for the mean:
Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3.$

$z$ test for the difference between two proportions:
Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.

One sample $t$ test for the mean:
Is the average mental health score of office workers different from $\mu_0 = 50$?

Two way ANOVA:
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
SPSS

Cochran's Q test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
  • Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
  • Under Test Type, select Cochran's Q test
One sample $z$ test for the mean: n.a.

$z$ test for the difference between two proportions:
SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Analyze > Descriptive Statistics > Crosstabs...
  • Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
  • Click the Statistics... button, and click on the square in front of Chi-square
  • Continue and click OK
One sample $t$ test for the mean:
Analyze > Compare Means > One-Sample T Test...
  • Put your variable in the box below Test Variable(s)
  • Fill in the value for $\mu_0$ in the box next to Test Value
Two way ANOVA:
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Jamovi

Cochran's Q test:
Jamovi does not have a specific option for Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
  • Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
One sample $z$ test for the mean: n.a.

$z$ test for the difference between two proportions:
Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Frequencies > Independent Samples - $\chi^2$ test of association
  • Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
One sample $t$ test for the mean:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Two way ANOVA:
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors