Goodness of fit test - overview

This page offers a structured overview of the goodness of fit test, presented side by side with three other methods for comparison.

Methods compared in this overview:

  • Goodness of fit test
  • One sample $z$ test for the mean
  • McNemar's test
  • Two sample $z$ test

Independent variable

  • Goodness of fit test: None
  • One sample $z$ test for the mean: None
  • McNemar's test: 2 paired groups
  • Two sample $z$ test (independent/grouping variable): One categorical with 2 independent groups

Dependent variable

  • Goodness of fit test: One categorical with $J$ independent groups ($J \geqslant 2$)
  • One sample $z$ test for the mean: One quantitative of interval or ratio level
  • McNemar's test: One categorical with 2 independent groups
  • Two sample $z$ test: One quantitative of interval or ratio level

Null hypothesis

Goodness of fit test:
  • H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
  • H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$, the probability of drawing an observation from condition $J$ is $\pi_J$

One sample $z$ test for the mean:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.

McNemar's test:
Let's say that the dependent variable is scored 0 or 1. Then each pair of scores falls into one of four categories:

  1. First score of pair is 0, second score of pair is 0
  2. First score of pair is 0, second score of pair is 1 (switched)
  3. First score of pair is 1, second score of pair is 0 (switched)
  4. First score of pair is 1, second score of pair is 1
The null hypothesis H0 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) = P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is the same as the probability that a pair of scores switches from 1 to 0.

Other formulations of the null hypothesis are:

  • H0: $\pi_1 = \pi_2$, where $\pi_1$ is the population proportion of ones for the first paired group and $\pi_2$ is the population proportion of ones for the second paired group
  • H0: for each pair of scores, P(first score of pair is 1) = P(second score of pair is 1)

Two sample $z$ test:
H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.

Alternative hypothesis

Goodness of fit test:
  • H1: the population proportions are not all as specified under the null hypothesis
or equivalently
  • H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis

One sample $z$ test for the mean:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$

McNemar's test:
The alternative hypothesis H1 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) $\neq$ P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0.

Other formulations of the alternative hypothesis are:

  • H1: $\pi_1 \neq \pi_2$
  • H1: for each pair of scores, P(first score of pair is 1) $\neq$ P(second score of pair is 1)

Two sample $z$ test:
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$

Assumptions

Goodness of fit test:
  • Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
  • Sample is a simple random sample from the population. That is, observations are independent of one another

One sample $z$ test for the mean:
  • Scores are normally distributed in the population
  • Population standard deviation $\sigma$ is known
  • Sample is a simple random sample from the population. That is, observations are independent of one another

McNemar's test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Two sample $z$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Population standard deviations $\sigma_1$ and $\sigma_2$ are known
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Test statistic

Goodness of fit test:
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
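
As a minimal sketch, this statistic can be computed in Python; the observed counts below are made up for the socioeconomic status example, and scipy.stats.chisquare is one way to obtain the same result in a single call.

```python
import numpy as np
from scipy import stats

# Made-up observed counts for J = 3 conditions (low, moderate, high SES)
observed = np.array([30, 70, 40])        # N = 140 observations in total
pi_h0 = np.array([0.2, 0.6, 0.2])        # population proportions under H0
expected = observed.sum() * pi_h0        # expected cell counts: N * pi_j

# Test statistic: sum over the J cells of (observed - expected)^2 / expected
x2 = np.sum((observed - expected) ** 2 / expected)

# p value from the chi-squared distribution with J - 1 degrees of freedom
p = stats.chi2.sf(x2, df=len(observed) - 1)

# The same test in one call
x2_scipy, p_scipy = stats.chisquare(f_obs=observed, f_exp=expected)
print(x2, p)
```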

One sample $z$ test for the mean:
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
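
As a small illustration, the following Python sketch computes $z$ and the corresponding $p$ values for made-up scores in the office worker example ($\mu_0 = 50$, known $\sigma = 3$), using the standard normal distribution from scipy.stats.

```python
import numpy as np
from scipy import stats

# Made-up sample of mental health scores; mu_0 and sigma come from the example
y = np.array([48.1, 51.2, 49.5, 50.8, 47.9, 52.3, 49.0, 50.1])
mu_0, sigma = 50.0, 3.0
N = len(y)

# z indicates how many standard errors the sample mean is removed from mu_0
z = (y.mean() - mu_0) / (sigma / np.sqrt(N))

# p values under the standard normal distribution
p_two_sided = 2 * stats.norm.sf(abs(z))
p_right_sided = stats.norm.sf(z)
p_left_sided = stats.norm.cdf(z)
print(z, p_two_sided)
```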

McNemar's test:
$X^2 = \dfrac{(b - c)^2}{b + c}$
Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0.
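
The statistic is simple enough to compute directly, as in the sketch below with made-up counts $b$ and $c$ for the spider documentary example; for exact or continuity-corrected variants, the mcnemar function in statsmodels.stats.contingency_tables can be used instead.

```python
from scipy import stats

# Made-up pair counts: b = pairs switching from 0 to 1, c = pairs switching from 1 to 0
b, c = 18, 7

x2 = (b - c) ** 2 / (b + c)      # McNemar test statistic
p = stats.chi2.sf(x2, df=1)      # chi-squared approximation with 1 degree of freedom
print(x2, p)
```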

Two sample $z$ test:
$z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
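
A sketch of the same computation from summary statistics, with made-up sample means and sample sizes and the known population standard deviations from the example ($\sigma_1 = 2$, $\sigma_2 = 2.5$):

```python
import numpy as np
from scipy import stats

# Made-up sample means and sizes; sigma_1 and sigma_2 are assumed known
ybar1, ybar2 = 51.2, 50.1
n1, n2 = 40, 45
sigma1, sigma2 = 2.0, 2.5

# Standard deviation of the sampling distribution of ybar1 - ybar2
se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z = (ybar1 - ybar2) / se

p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```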

Sampling distribution of the test statistic if H0 were true

Goodness of fit test: sampling distribution of $X^2$
Approximately the chi-squared distribution with $J - 1$ degrees of freedom

One sample $z$ test for the mean: sampling distribution of $z$
Standard normal distribution

McNemar's test: sampling distribution of $X^2$
If $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom.

If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$.
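
For the small-sample case, an exact binomial version can be sketched as follows (counts made up; scipy.stats.binomtest requires a reasonably recent SciPy version):

```python
from scipy import stats

# Made-up small counts: b + c = 11 < 20, so the exact binomial test is preferred
b, c = 8, 3
n = b + c

# Under H0, b ~ Binomial(n, 0.5)
result = stats.binomtest(b, n=n, p=0.5, alternative='two-sided')
print(result.pvalue)
```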

Two sample $z$ test: sampling distribution of $z$
Standard normal distribution

Significant?

Goodness of fit test:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

One sample $z$ test for the mean:
  • Two sided: check if the $z$ value observed in the sample is at least as extreme as the critical value $z^*$, or find the two sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if the $z$ value observed in the sample is equal to or larger than the critical value $z^*$, or find the right sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if the $z$ value observed in the sample is equal to or smaller than the critical value $z^*$, or find the left sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$

McNemar's test:
For test statistic $X^2$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
If $b + c$ is small, the binomial distribution should be used, with $b$ as the test statistic:
  • Check if $b$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $b$ and check if it is equal to or smaller than $\alpha$

Two sample $z$ test:
  • Two sided: check if the $z$ value observed in the sample is at least as extreme as the critical value $z^*$, or find the two sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if the $z$ value observed in the sample is equal to or larger than the critical value $z^*$, or find the right sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if the $z$ value observed in the sample is equal to or smaller than the critical value $z^*$, or find the left sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$
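
To make the critical value route concrete, here is a short sketch of how $X^{2*}$ and $z^*$ can be looked up (with $\alpha = 0.05$ and, for the goodness of fit test, $J = 3$ conditions as in the example):

```python
from scipy import stats

alpha = 0.05

# Critical value X^2* for the goodness of fit test with J - 1 = 2 degrees of freedom
x2_crit = stats.chi2.ppf(1 - alpha, df=2)          # about 5.99

# Critical values z* for the z tests
z_crit_two_sided = stats.norm.ppf(1 - alpha / 2)   # about 1.96
z_crit_right_sided = stats.norm.ppf(1 - alpha)     # about 1.64
print(x2_crit, z_crit_two_sided, z_crit_right_sided)
```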

$C\%$ confidence interval

Goodness of fit test: n.a.

One sample $z$ test for the mean: $C\%$ confidence interval for $\mu$
$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as significance test.
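
A sketch of this interval with made-up values for the sample mean and sample size, and $z^*$ taken from the standard normal distribution:

```python
import numpy as np
from scipy import stats

# Made-up sample mean and size; sigma = 3 is the known population SD from the example
ybar, sigma, N = 49.4, 3.0, 30
C = 95                                        # confidence level in percent

z_star = stats.norm.ppf(0.5 + C / 200)        # 1.96 for a 95% interval
margin = z_star * sigma / np.sqrt(N)
print(ybar - margin, ybar + margin)
```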

McNemar's test: n.a.

Two sample $z$ test: $C\%$ confidence interval for $\mu_1 - \mu_2$
$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.
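
The same idea for the difference in means, again with made-up summary statistics:

```python
import numpy as np
from scipy import stats

# Made-up sample means and sizes; sigma_1 and sigma_2 are assumed known
ybar1, ybar2 = 51.2, 50.1
n1, n2 = 40, 45
sigma1, sigma2 = 2.0, 2.5

z_star = stats.norm.ppf(0.975)                       # 1.96 for a 95% interval
se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
diff = ybar1 - ybar2
print(diff - z_star * se, diff + z_star * se)
```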

Effect size

Goodness of fit test: n.a.

One sample $z$ test for the mean: Cohen's $d$
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
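
A one-line computation, with a made-up sample mean for the office worker example:

```python
# Made-up sample mean; mu_0 = 50 and sigma = 3 come from the example
ybar, mu_0, sigma = 49.4, 50.0, 3.0
d = (ybar - mu_0) / sigma    # the mean is |d| = 0.2 population SDs below mu_0
print(d)
```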

McNemar's test: n.a.

Two sample $z$ test: n.a.

Visual representation

Goodness of fit test: n.a.

One sample $z$ test for the mean: [figure: one sample $z$ test]

McNemar's test: n.a.

Two sample $z$ test: [figure: two sample $z$ test]

Example context

Goodness of fit test: Is the proportion of people with a low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, and $\pi_{high} = 0.2$?

One sample $z$ test for the mean: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.

McNemar's test: Does a TV documentary about spiders change whether people are afraid (yes/no) of spiders?

Two sample $z$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ among men and $\sigma_2 = 2.5$ among women.

SPSS

Goodness of fit test:
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
  • Put your categorical variable in the box below Test Variable List
  • Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)

One sample $z$ test for the mean: n.a.

McNemar's test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the McNemar test

Two sample $z$ test: n.a.

Jamovi

Goodness of fit test:
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
  • Put your categorical variable in the box below Variable
  • Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)

One sample $z$ test for the mean: n.a.

McNemar's test:
Frequencies > Paired Samples - McNemar test
  • Put one of the two paired variables in the box below Rows and the other paired variable in the box below Columns

Two sample $z$ test: n.a.