ANCOVA - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison (maximum of 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

ANCOVA
Kruskal-Wallis test
Two sample $t$ test - equal variances not assumed
Independent/grouping variable(s)
  • ANCOVA: one or more categorical with independent groups, and one or more quantitative control variables of interval or ratio level (covariates)
  • Kruskal-Wallis test: one categorical with $I$ independent groups ($I \geqslant 2$)
  • Two sample $t$ test: one categorical with 2 independent groups
Dependent variable
  • ANCOVA: one quantitative of interval or ratio level
  • Kruskal-Wallis test: one of ordinal level
  • Two sample $t$ test: one quantitative of interval or ratio level
Null hypothesis

ANCOVA: this part of the table is yet to be completed.

Kruskal-Wallis test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
  • H0: the population medians for the $I$ groups are equal
Else:
Formulation 1:
  • H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups
Formulation 2:
  • H0: P(an observation from population $g$ exceeds an observation from population $h$) = P(an observation from population $h$ exceeds an observation from population $g$), for each pair of groups
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.

Two sample $t$ test:
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis

ANCOVA: n.a.

Kruskal-Wallis test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
  • H1: not all of the population medians for the $I$ groups are equal
Else:
Formulation 1:
  • H1: the population scores in some groups are systematically higher or lower than the population scores in other groups
Formulation 2:
  • H1: for at least one pair of groups:
    P(an observation from population $g$ exceeds an observation from population $h$) $\neq$ P(an observation from population $h$ exceeds an observation from population $g$)

Two sample $t$ test:
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Assumptions

ANCOVA: n.a.

Kruskal-Wallis test:
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another

Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic

ANCOVA: n.a.

Kruskal-Wallis test:

$H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$

Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$. Remember that multiplication precedes addition and subtraction, so first compute $\frac{12}{N (N + 1)} \times \sum \frac{R^2_i}{n_i}$ and then subtract $3(N + 1)$.

Note: if ties are present in the data, the formula for $H$ is more complicated.
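For illustration, here is a minimal Python sketch of this computation (the three groups are made-up example values, not data from this page). Note that scipy.stats.kruskal applies a correction for ties, so its result can differ slightly from the plain formula when ties are present:

import numpy as np
from scipy import stats

# Hypothetical example data: three independent groups
g1 = [2.1, 3.4, 4.0, 5.2]
g2 = [1.8, 2.2, 2.9]
g3 = [4.5, 5.1, 6.3, 6.8, 7.0]

groups = [np.asarray(g) for g in (g1, g2, g3)]
pooled = np.concatenate(groups)
N = pooled.size

# Rank all observations together (ties get average ranks)
ranks = stats.rankdata(pooled)

# Accumulate R_i^2 / n_i per group, then apply the formula for H
H = 0.0
start = 0
for g in groups:
    n_i = g.size
    R_i = ranks[start:start + n_i].sum()   # sum of ranks in group i
    H += R_i**2 / n_i
    start += n_i
H = 12 / (N * (N + 1)) * H - 3 * (N + 1)

print("H (no tie correction):", H)
print("scipy.stats.kruskal:", stats.kruskal(g1, g2, g3).statistic)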
Two sample $t$ test:

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$

Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
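For illustration, a minimal Python sketch of this computation, using made-up example values; scipy.stats.ttest_ind with equal_var=False carries out the Welch version of the test and should reproduce the hand-computed $t$:

import numpy as np
from scipy import stats

# Hypothetical example data for two independent groups
y1 = np.array([5.1, 6.3, 4.8, 7.0, 5.9])
y2 = np.array([4.2, 3.9, 5.0, 4.6])

m1, m2 = y1.mean(), y2.mean()
v1, v2 = y1.var(ddof=1), y2.var(ddof=1)   # sample variances s_1^2 and s_2^2
n1, n2 = y1.size, y2.size

# Welch t statistic: difference in sample means divided by its standard error
se = np.sqrt(v1 / n1 + v2 / n2)
t = (m1 - m2) / se
print("t by hand:", t)

# Same result via scipy (equal_var=False gives the Welch test)
print("scipy:", stats.ttest_ind(y1, y2, equal_var=False).statistic)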
Sampling distribution of the test statistic if H0 were true

ANCOVA: n.a.

Kruskal-Wallis test (sampling distribution of $H$):
For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom.
For small samples, the exact distribution of $H$ should be used.

Two sample $t$ test (sampling distribution of $t$):
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to

$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$

or

$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
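For illustration, a short Python sketch of both definitions of $k$, using hypothetical sample variances and sample sizes:

# Hypothetical sample variances and sample sizes
s2_1, n1 = 2.5, 12   # sample variance and size of group 1
s2_2, n2 = 6.0, 15   # sample variance and size of group 2

# First definition: Welch-Satterthwaite approximation (typically non-integer)
num = (s2_1 / n1 + s2_2 / n2) ** 2
den = (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
k_welch = num / den

# Second definition: conservative value for hand calculations
k_hand = min(n1 - 1, n2 - 1)

print("Welch-Satterthwaite df:", k_welch)
print("Conservative df:", k_hand)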
Significant?

ANCOVA: n.a.

Kruskal-Wallis test:
For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$:
  • Check if the $X^2$ observed in the sample is equal to or larger than the critical value $X^{2*}$, or
  • Find the $p$ value corresponding to the observed $X^2$ and check if it is equal to or smaller than $\alpha$

Two sample $t$ test:
  • Two sided: check if the $t$ observed in the sample is at least as extreme as the critical value $t^*$, or find the two sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if the $t$ observed in the sample is equal to or larger than the critical value $t^*$, or find the right sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if the $t$ observed in the sample is equal to or smaller than the critical value $t^*$, or find the left sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$
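For illustration, a small Python sketch of both decision rules (critical value and $p$ value), using made-up values for the observed statistics, the degrees of freedom, and $\alpha$:

from scipy import stats

alpha = 0.05

# Kruskal-Wallis: compare H with the chi-squared distribution, df = I - 1
H_obs, I = 7.3, 4                                  # hypothetical observed H and number of groups
crit_chi2 = stats.chi2.ppf(1 - alpha, df=I - 1)    # critical value X^2*
p_chi2 = stats.chi2.sf(H_obs, df=I - 1)            # right-tail p value
print(H_obs >= crit_chi2, p_chi2 <= alpha)         # both checks lead to the same decision

# Welch t test: compare t with the t distribution, df = k
t_obs, k = 2.4, 17.8                               # hypothetical observed t and Welch df
p_two = 2 * stats.t.sf(abs(t_obs), df=k)           # two sided p value
p_right = stats.t.sf(t_obs, df=k)                  # right sided p value
p_left = stats.t.cdf(t_obs, df=k)                  # left sided p value
crit_two = stats.t.ppf(1 - alpha / 2, df=k)        # two sided critical value t*
print(abs(t_obs) >= crit_two, p_two <= alpha)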
Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$

ANCOVA: n.a.
Kruskal-Wallis test: n.a.

Two sample $t$ test:
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
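For illustration, a minimal Python sketch of this interval with made-up example data; the critical value $t^*$ is taken from the $t_k$ distribution via scipy.stats.t.ppf, with $k$ computed from the Welch-Satterthwaite formula above:

import numpy as np
from scipy import stats

# Hypothetical example data for two independent groups
y1 = np.array([5.1, 6.3, 4.8, 7.0, 5.9])
y2 = np.array([4.2, 3.9, 5.0, 4.6])

m1, m2 = y1.mean(), y2.mean()
v1, v2 = y1.var(ddof=1), y2.var(ddof=1)
n1, n2 = y1.size, y2.size

se = np.sqrt(v1 / n1 + v2 / n2)                    # standard error of y1_bar - y2_bar

# Welch-Satterthwaite degrees of freedom
k = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

C = 95                                             # confidence level in percent
t_star = stats.t.ppf(0.5 + C / 200, df=k)          # area C/100 between -t* and t*
lower = (m1 - m2) - t_star * se
upper = (m1 - m2) + t_star * se
print(f"{C}% CI for mu1 - mu2: ({lower:.3f}, {upper:.3f})")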
Visual representation

ANCOVA: n.a.
Kruskal-Wallis test: n.a.

Two sample $t$ test:
[Figure: visual representation of the two sample $t$ test, equal variances not assumed]
Example context

ANCOVA: n.a.
Kruskal-Wallis test: Do people from different religions tend to score differently on socioeconomic status?
Two sample $t$ test: Is the average mental health score different between men and women?
SPSS

ANCOVA: n.a.

Kruskal-Wallis test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
  • Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Range... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the smallest value you have used to indicate your groups in the box next to Minimum, and the largest value you have used to indicate your groups in the box next to Maximum
  • Continue and click OK
Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Jamovi

ANCOVA: n.a.

Kruskal-Wallis test:
ANOVA > One Way ANOVA - Kruskal-Wallis
  • Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Two sample $t$ test:
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
Practice questions