# Goodness of fit test - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

This overview covers the following methods:

• Goodness of fit test
• Kruskal-Wallis test
• Cochran's Q test
• One way ANOVA
• One sample $z$ test for the mean
• $z$ test for the difference between two proportions
## Independent/grouping variable

• Goodness of fit test: None
• Kruskal-Wallis test: One categorical with $I$ independent groups ($I \geqslant 2$)
• Cochran's Q test: One within subject factor ($\geq 2$ related groups)
• One way ANOVA: One categorical with $I$ independent groups ($I \geqslant 2$)
• One sample $z$ test for the mean: None
• $z$ test for the difference between two proportions: One categorical with 2 independent groups
## Dependent variable

• Goodness of fit test: One categorical with $J$ independent groups ($J \geqslant 2$)
• Kruskal-Wallis test: One of ordinal level
• Cochran's Q test: One categorical with 2 independent groups
• One way ANOVA: One quantitative of interval or ratio level
• One sample $z$ test for the mean: One quantitative of interval or ratio level
• $z$ test for the difference between two proportions: One categorical with 2 independent groups
## Null hypothesis

Goodness of fit test:
• H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
• H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$, the probability of drawing an observation from condition $J$ is $\pi_J$

Kruskal-Wallis test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
• H0: the population medians for the $I$ groups are equal
Otherwise:
Formulation 1:
• H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups
Formulation 2:
• H0: P(an observation from population $g$ exceeds an observation from population $h$) = P(an observation from population $h$ exceeds an observation from population $g$), for each pair of groups
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.

Cochran's Q test:
• H0: $\pi_1 = \pi_2 = \ldots = \pi_k$

Here $\pi_i$ is the population proportion of 'successes' for related group $i$, and $k$ is the number of related groups.

One way ANOVA:
ANOVA $F$ test:
• H0: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
• H0: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
• H0: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$

One sample $z$ test for the mean:
• H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.

$z$ test for the difference between two proportions:
• H0: $\pi_1 = \pi_2$

Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
## Alternative hypothesis

Goodness of fit test:
• H1: the population proportions are not all as specified under the null hypothesis
or equivalently
• H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis

Kruskal-Wallis test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
• H1: not all of the population medians for the $I$ groups are equal
Otherwise:
Formulation 1:
• H1: the population scores in some groups are systematically higher or lower than the population scores in other groups
Formulation 2:
• H1: for at least one pair of groups:
P(an observation from population $g$ exceeds an observation from population $h$) $\neq$ P(an observation from population $h$ exceeds an observation from population $g$)

Cochran's Q test:
• H1: not all population proportions are equal

One way ANOVA:
ANOVA $F$ test:
• H1: not all population means are equal
$t$ Test for contrast:
• H1 two sided: $\Psi \neq 0$
• H1 right sided: $\Psi > 0$
• H1 left sided: $\Psi < 0$
$t$ Test multiple comparisons:
• H1 - usually two sided: $\mu_g \neq \mu_h$

One sample $z$ test for the mean:
• H1 two sided: $\mu \neq \mu_0$
• H1 right sided: $\mu > \mu_0$
• H1 left sided: $\mu < \mu_0$

$z$ test for the difference between two proportions:
• H1 two sided: $\pi_1 \neq \pi_2$
• H1 right sided: $\pi_1 > \pi_2$
• H1 left sided: $\pi_1 < \pi_2$
## Assumptions

Goodness of fit test:
• Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
• Sample is a simple random sample from the population. That is, observations are independent of one another

Kruskal-Wallis test:
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another

Cochran's Q test:
• Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another

One way ANOVA:
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another

One sample $z$ test for the mean:
• Scores are normally distributed in the population
• Population standard deviation $\sigma$ is known
• Sample is a simple random sample from the population. That is, observations are independent of one another

$z$ test for the difference between two proportions:
• Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
• Significance test: number of successes and number of failures are each 5 or more in both sample groups
• Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
• Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
## Test statistic

Goodness of fit test:
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for cell $j$ equals $N \times \pi_j$ (with $N$ the total sample size), the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
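
As an illustration, here is a minimal Python sketch of this computation. The counts are hypothetical; the proportions under H0 are taken from the example context further down this page. `scipy.stats.chisquare` computes the same statistic.

```python
import numpy as np
from scipy import stats

observed = np.array([30, 55, 15])   # hypothetical counts: low/moderate/high
pi_0 = np.array([0.2, 0.6, 0.2])    # proportions under H0
expected = observed.sum() * pi_0    # expected cell counts: N * pi_j

# X^2 = sum over cells of (observed - expected)^2 / expected
X2 = ((observed - expected) ** 2 / expected).sum()

# scipy computes the same statistic plus the chi-squared(J - 1) p value
X2_scipy, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(X2, X2_scipy, p)
```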

Kruskal-Wallis test:

$H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$

Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N (N + 1)} \times \sum \frac{R^2_i}{n_i}$ and then subtract $3(N + 1)$.

Note: if ties are present in the data, the formula for $H$ is more complicated.
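
A minimal Python sketch with hypothetical scores for $I = 3$ groups and no ties; with ties, `scipy.stats.kruskal` applies the correction mentioned in the note above.

```python
import numpy as np
from scipy import stats

# Hypothetical ordinal scores for I = 3 independent groups (no ties)
g1 = [12, 14, 15, 11]
g2 = [18, 19, 16, 21, 17]
g3 = [10, 13, 20, 9]

scores = np.concatenate([g1, g2, g3])
ranks = stats.rankdata(scores)                  # ranks over the pooled sample
N = len(scores)
sizes = [len(g1), len(g2), len(g3)]
per_group = np.split(ranks, np.cumsum(sizes)[:-1])

# H = 12 / (N(N + 1)) * sum(R_i^2 / n_i) - 3(N + 1)
H = 12 / (N * (N + 1)) * sum(r.sum() ** 2 / n for r, n in zip(per_group, sizes)) - 3 * (N + 1)

# scipy returns H (tie-corrected if needed) and the chi-squared(I - 1) p value
H_scipy, p = stats.kruskal(g1, g2, g3)
print(H, H_scipy, p)
```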

Cochran's Q test:
If a failure is scored as 0 and a success is scored as 1:

$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$

Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.

Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
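
A minimal Python sketch of this computation with a hypothetical 0/1 data matrix, including the exclusion of blocks with equal scores in all $k$ groups:

```python
import numpy as np

# Hypothetical 0/1 scores: rows are blocks (subjects), columns the k = 3 related groups
x = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],   # equal scores in all k groups: excluded below
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 0],   # equal scores in all k groups: excluded below
              [1, 0, 1]])

x = x[x.min(axis=1) != x.max(axis=1)]     # drop blocks with equal scores in all groups

k = x.shape[1]
group_totals = x.sum(axis=0)              # sum of scores per related group
block_totals = x.sum(axis=1)              # sum of scores per block
grand_total = x.sum()

Q = (k * (k - 1) * ((group_totals - grand_total / k) ** 2).sum()
     / (block_totals * (k - block_totals)).sum())
print(Q)                                  # refer Q to chi-squared with k - 1 df
```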

One way ANOVA:
ANOVA $F$ test:
• $$\begin{aligned} F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned}$$
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model, and mean square error is also known as mean square residual or mean square within.
$t$ Test for contrast:
• $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$t$ Test multiple comparisons:
• $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
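
A minimal Python sketch with hypothetical data for $I = 3$ groups, computing $F$, the pooled standard deviation $s_p$ (defined in the next section), a contrast $t$, and a multiple comparison $t$. The contrast coefficients are an arbitrary choice for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical quantitative scores for I = 3 independent groups
groups = [np.array([4.0, 5.0, 6.0, 5.5]),
          np.array([6.5, 7.0, 8.0, 7.5, 6.0]),
          np.array([5.0, 4.5, 6.0, 5.5])]
I = len(groups)
N = sum(len(g) for g in groups)
overall_mean = np.concatenate(groups).mean()

# ANOVA F: mean square between / mean square error
ss_between = sum(len(g) * (g.mean() - overall_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (I - 1)) / (ss_error / (N - I))    # compare with F(I - 1, N - I)
print(F, stats.f_oneway(*groups).statistic)          # scipy agrees

# Pooled standard deviation: s_p = sqrt(mean square error)
s_p = np.sqrt(ss_error / (N - I))

# t for a contrast, e.g. group 1 versus the average of groups 2 and 3
a = np.array([1.0, -0.5, -0.5])                      # coefficients sum to 0
n = np.array([len(g) for g in groups])
c = sum(a_i * g.mean() for a_i, g in zip(a, groups)) # sample contrast
t_contrast = c / (s_p * np.sqrt((a ** 2 / n).sum())) # compare with t(N - I)

# t for a multiple comparison, group 1 versus group 2
t_12 = (groups[0].mean() - groups[1].mean()) / (s_p * np.sqrt(1 / n[0] + 1 / n[1]))
print(s_p, t_contrast, t_12)
```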

One sample $z$ test for the mean:

$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
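
A minimal Python sketch with hypothetical scores, using $\mu_0 = 50$ and $\sigma = 3$ as in the example context below:

```python
import numpy as np
from scipy import stats

y = np.array([52.1, 49.3, 51.8, 50.6, 53.0, 48.9])  # hypothetical sample
mu_0, sigma = 50.0, 3.0                              # H0 mean and known population sd

z = (y.mean() - mu_0) / (sigma / np.sqrt(len(y)))
p_two_sided = 2 * stats.norm.sf(abs(z))              # area in both tails
print(z, p_two_sided)
```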

$z$ test for the difference between two proportions:

$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
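
A minimal Python sketch with hypothetical counts of successes:

```python
import numpy as np
from scipy import stats

X1, n1 = 45, 200   # hypothetical successes and sample size, group 1
X2, n2 = 30, 180   # hypothetical successes and sample size, group 2

p1, p2 = X1 / n1, X2 / n2
p = (X1 + X2) / (n1 + n2)                            # pooled proportion under H0

z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```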
## Pooled standard deviation

One way ANOVA only:

$$\begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$$

Here $s^2_i$ is the variance in group $i$.
## Sampling distribution of the test statistic if H0 were true

Goodness of fit test ($X^2$):
Approximately the chi-squared distribution with $J - 1$ degrees of freedom.

Kruskal-Wallis test ($H$):
For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used.

Cochran's Q test ($Q$):
If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

One way ANOVA ($F$ and $t$):
Sampling distribution of $F$:
• $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - I$ degrees of freedom

One sample $z$ test for the mean ($z$):
Standard normal distribution.

$z$ test for the difference between two proportions ($z$):
Approximately the standard normal distribution.
## Significant?

Goodness of fit test:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

Kruskal-Wallis test:
For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

Cochran's Q test:
If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

One way ANOVA:
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)

$t$ Test for contrast two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

$t$ Test multiple comparisons two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
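
The table lookups above can also be done in Python. A sketch with assumed significance level, degrees of freedom, and observed statistics; the $F$ example reproduces the $.01 < p < .025$ illustration above.

```python
from scipy import stats

alpha = .05

# Chi-squared tests (goodness of fit, large sample Kruskal-Wallis, Cochran's Q)
df = 2
X2_crit = stats.chi2.ppf(1 - alpha, df)   # critical value X2*
p_X2 = stats.chi2.sf(6.67, df)            # p value for an assumed observed X2 of 6.67

# ANOVA F test: observed F = 3.91, df between = 4, df error = 20
p_F = stats.f.sf(3.91, 4, 20)             # falls between .01 and .025, as noted above

# Two sided t test for a contrast, df = N - I (assumed 20), assumed observed t = 2.3
p_t = 2 * stats.t.sf(abs(2.3), 20)

# Bonferroni adaptation for m multiple comparisons: compare p with alpha / m,
# or equivalently use the adapted critical value t**
m = 3
t_star_star = stats.t.ppf(1 - (alpha / m) / 2, 20)
print(X2_crit, p_X2, p_F, p_t, t_star_star)
```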

One sample $z$ test for the mean:
Two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

$z$ test for the difference between two proportions:
Two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
## $C\%$ confidence interval

One way ANOVA ($C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$):
Confidence interval for $\Psi$ (contrast):
• $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
• $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* =$ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
• $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean in group $i$, $n_i$ is the sample size of group $i$, $N$ is the total sample size (based on all the $I$ groups), and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
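
A minimal Python sketch of the three intervals, using assumed summary statistics (group means, $s_p$, group sizes) roughly matching the ANOVA sketch earlier on this page, and no multiple comparison adaptation ($t^{**} = t^*$):

```python
import numpy as np
from scipy import stats

# Assumed summary statistics for I = 3 groups (roughly the earlier sketch)
means = np.array([5.125, 7.0, 5.25])     # group sample means
n = np.array([4, 5, 4])                  # group sample sizes
s_p, N, I, C = 0.77, 13, 3, 95           # pooled sd, total size, groups, C%

t_star = stats.t.ppf(1 - (1 - C / 100) / 2, N - I)   # area C/100 between -t* and t*

# CI for the contrast Psi, with illustrative coefficients a
a = np.array([1.0, -0.5, -0.5])
c = (a * means).sum()
ci_contrast = c + np.array([-1, 1]) * t_star * s_p * np.sqrt((a ** 2 / n).sum())

# CI for mu_g - mu_h; without a multiple comparison adaptation, t** = t*
ci_diff = (means[0] - means[1]) + np.array([-1, 1]) * t_star * s_p * np.sqrt(1 / n[0] + 1 / n[1])

# CI for a single population mean mu_i
ci_mean = means[0] + np.array([-1, 1]) * t_star * s_p / np.sqrt(n[0])
print(ci_contrast, ci_diff, ci_mean)
```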

One sample $z$ test for the mean ($C\%$ confidence interval for $\mu$):
• $\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
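
A minimal Python sketch with assumed summary values:

```python
import numpy as np
from scipy import stats

y_bar, sigma, N, C = 51.0, 3.0, 36, 95          # assumed summary values
z_star = stats.norm.ppf(1 - (1 - C / 100) / 2)  # 1.96 for C = 95
ci = y_bar + np.array([-1, 1]) * z_star * sigma / np.sqrt(N)
print(ci)   # if mu_0 lies outside this interval, reject H0 at alpha = 1 - C/100
```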

$z$ test for the difference between two proportions (approximate $C\%$ confidence interval for $\pi_1 - \pi_2$):
Regular (large sample):
• $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
• $(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
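
A minimal Python sketch of both intervals, reusing the hypothetical counts from the test statistic sketch above:

```python
import numpy as np
from scipy import stats

X1, n1, X2, n2, C = 45, 200, 30, 180, 95        # hypothetical counts, as before
z_star = stats.norm.ppf(1 - (1 - C / 100) / 2)

# Regular (large sample) interval
p1, p2 = X1 / n1, X2 / n2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = (p1 - p2) + np.array([-1, 1]) * z_star * se

# Plus four interval: add one success and one failure to each group
p1p, p2p = (X1 + 1) / (n1 + 2), (X2 + 1) / (n2 + 2)
se_p4 = np.sqrt(p1p * (1 - p1p) / (n1 + 2) + p2p * (1 - p2p) / (n2 + 2))
ci_p4 = (p1p - p2p) + np.array([-1, 1]) * z_star * se_p4
print(ci, ci_p4)
```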
## Effect size

One way ANOVA:
• Proportion variance explained $\eta^2$ and $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variable: \begin{align} \eta^2 = R^2 &= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}} \end{align} Only in one way ANOVA $\eta^2 = R^2.$ $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

• Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to: $$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2.$

• Cohen's $d$:
Standardized difference between the mean in group $g$ and in group $h$: $$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$ Cohen's $d$ indicates how many standard deviations $s_p$ two sample means are removed from each other.
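
A minimal Python sketch computing $\eta^2$, $\omega^2$, and Cohen's $d$ from assumed sums of squares and group summaries, roughly matching the earlier ANOVA sketch (in one way ANOVA, sum of squares total = sum of squares between + sum of squares error):

```python
# Assumed sums of squares from a one way ANOVA with I = 3 groups, N = 13
ss_between, ss_error = 10.1, 5.9
ss_total = ss_between + ss_error       # SS total = SS between + SS error here
df_between = 3 - 1                     # I - 1
ms_error = ss_error / (13 - 3)         # SS error / (N - I)

eta2 = ss_between / ss_total           # eta^2 = R^2; positively biased in the sample
omega2 = (ss_between - df_between * ms_error) / (ss_total + ms_error)

# Cohen's d for groups g and h, from assumed means and pooled sd
d = (7.0 - 5.25) / 0.77                # (ybar_g - ybar_h) / s_p
print(eta2, omega2, d)
```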

One sample $z$ test for the mean:
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
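
A one-line Python sketch, with assumed values roughly as in the one sample $z$ test sketch above:

```python
y_bar, mu_0, sigma = 50.95, 50.0, 3.0   # assumed sample mean, H0 mean, population sd
d = (y_bar - mu_0) / sigma              # sds by which y_bar is removed from mu_0
print(d)
```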
## Visual representation

One sample $z$ test for the mean: figure (not reproduced here).
## Equivalent to

Cochran's Q test:
Friedman test, with a categorical dependent variable consisting of two independent groups.

One way ANOVA:
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
• $F$ test ANOVA is equivalent to $F$ test regression model
• $t$ test for contrast $i$ is equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)

$z$ test for the difference between two proportions:
When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.
## Example context

Goodness of fit test: Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, and $\pi_{high} = 0.2$?

Kruskal-Wallis test: Do people from different religions tend to score differently on social economic status?

Cochran's Q test: Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?

One way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class?

One sample $z$ test for the mean: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.

$z$ test for the difference between two proportions: Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
## SPSS

Goodness of fit test:
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
• Put your categorical variable in the box below Test Variable List
• Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)

Kruskal-Wallis test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
• Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Range... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the smallest value you have used to indicate your groups in the box next to Minimum, and the largest value you have used to indicate your groups in the box next to Maximum
• Continue and click OK

Cochran's Q test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
• Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
• Under Test Type, select Cochran's Q test

One way ANOVA:
Analyze > Compare Means > One-Way ANOVA...
• Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)

One sample $z$ test for the mean: n.a.

$z$ test for the difference between two proportions:
SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Analyze > Descriptive Statistics > Crosstabs...
• Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK
## Jamovi

Goodness of fit test:
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
• Put your categorical variable in the box below Variable
• Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)

Kruskal-Wallis test:
ANOVA > One Way ANOVA - Kruskal-Wallis
• Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable

Cochran's Q test:
Jamovi does not have a specific option for Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from Cochran's Q test. Go to:

ANOVA > Repeated Measures ANOVA - Friedman
• Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures

One way ANOVA:
ANOVA > ANOVA
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors

One sample $z$ test for the mean: n.a.

$z$ test for the difference between two proportions:
Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Frequencies > Independent Samples - $\chi^2$ test of association
• Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns