Goodness of fit test - overview

This page offers a structured overview of the selected methods, presented side by side so they can be compared.

The methods compared, in the order used throughout this overview:
  • Goodness of fit test
  • One sample $z$ test for the mean
  • $z$ test for a single proportion
  • One way ANOVA
Independent variable
  • Goodness of fit test: None
  • One sample $z$ test for the mean: None
  • $z$ test for a single proportion: None
  • One way ANOVA (independent/grouping variable): One categorical with $I$ independent groups ($I \geqslant 2$)
Dependent variable
  • Goodness of fit test: One categorical with $J$ independent groups ($J \geqslant 2$)
  • One sample $z$ test for the mean: One quantitative of interval or ratio level
  • $z$ test for a single proportion: One categorical with 2 independent groups
  • One way ANOVA: One quantitative of interval or ratio level
Null hypothesis
Goodness of fit test:
  • H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
  • H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$, the probability of drawing an observation from condition $J$ is $\pi_J$
One sample $z$ test for the mean:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
$z$ test for a single proportion:
H0: $\pi = \pi_0$

Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.
One way ANOVA:
ANOVA $F$ test:
  • H0: $\mu_1 = \mu_2 = \ldots = \mu_I$
    $\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
  • H0: $\Psi = 0$
    $\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
  • H0: $\mu_g = \mu_h$
    $\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
Alternative hypothesis
Goodness of fit test:
  • H1: the population proportions are not all as specified under the null hypothesis
or equivalently
  • H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
One sample $z$ test for the mean:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
$z$ test for a single proportion:
H1 two sided: $\pi \neq \pi_0$
H1 right sided: $\pi > \pi_0$
H1 left sided: $\pi < \pi_0$
One way ANOVA:
ANOVA $F$ test:
  • H1: not all population means are equal
$t$ Test for contrast:
  • H1 two sided: $\Psi \neq 0$
  • H1 right sided: $\Psi > 0$
  • H1 left sided: $\Psi < 0$
$t$ Test multiple comparisons:
  • H1 - usually two sided: $\mu_g \neq \mu_h$
Assumptions
Goodness of fit test:
  • Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
  • Sample is a simple random sample from the population. That is, observations are independent of one another
One sample $z$ test for the mean:
  • Scores are normally distributed in the population
  • Population standard deviation $\sigma$ is known
  • Sample is a simple random sample from the population. That is, observations are independent of one another
$z$ test for a single proportion:
  • Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
    • Significance test: $N \times \pi_0$ and $N \times (1 - \pi_0)$ are each larger than 10
    • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures in sample are each 15 or more
    • Plus four 90%, 95%, or 99% confidence interval: total sample size is 10 or more
  • Sample is a simple random sample from the population. That is, observations are independent of one another
If the sample size is too small for $z$ to be approximately normally distributed, the binomial test for a single proportion should be used.
One way ANOVA:
  • Within each population, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic
Goodness of fit test:
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
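As a quick check of this formula, the goodness of fit statistic can be reproduced in Python with scipy.stats.chisquare; the observed counts and H0 proportions below are made-up values for illustration only.

```python
import numpy as np
from scipy import stats

observed = np.array([18, 55, 27])        # hypothetical observed counts in J = 3 categories
pi_0 = np.array([0.2, 0.6, 0.2])         # population proportions under H0
expected = observed.sum() * pi_0         # expected cell counts: N * pi_j

X2 = ((observed - expected) ** 2 / expected).sum()     # test statistic by hand
chi2, p = stats.chisquare(observed, f_exp=expected)    # same statistic via scipy

print(X2, chi2, p)   # X2 and chi2 are identical; df = J - 1 = 2
```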
One sample $z$ test for the mean:
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
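A minimal sketch of the same computation in Python, assuming a hypothetical sample and a known $\sigma$; the two sided $p$ value comes from the standard normal distribution via scipy.

```python
import numpy as np
from scipy import stats

scores = np.array([52, 48, 55, 51, 49, 53, 50, 54])   # hypothetical sample
mu_0 = 50.0                                            # population mean under H0
sigma = 3.0                                            # known population standard deviation

z = (scores.mean() - mu_0) / (sigma / np.sqrt(len(scores)))
p_two_sided = 2 * stats.norm.sf(abs(z))                # two sided p value from the standard normal

print(z, p_two_sided)
```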
$z$ test for a single proportion:
$z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$
Here $p$ is the sample proportion of successes: $\dfrac{X}{N}$, $N$ is the sample size, and $\pi_0$ is the population proportion of successes according to the null hypothesis.
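The same statistic can be computed directly from the formula; the success count and sample size below are hypothetical.

```python
import numpy as np
from scipy import stats

X, N = 31, 120                 # hypothetical number of successes and sample size
pi_0 = 0.2                     # population proportion of successes under H0

p_hat = X / N                  # sample proportion of successes
z = (p_hat - pi_0) / np.sqrt(pi_0 * (1 - pi_0) / N)
p_two_sided = 2 * stats.norm.sf(abs(z))

print(z, p_two_sided)
```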
One way ANOVA:
ANOVA $F$ test:
  • $\begin{aligned}[t] F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned} $
    where $N$ is the total sample size, and $I$ is the number of groups.
    Note: mean square between is also known as mean square model, and mean square error is also known as mean square residual or mean square within.
$t$ Test for contrast:
  • $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
    Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
    Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$t$ Test multiple comparisons:
  • $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
    $\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
    Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
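The overall $F$ test can be reproduced with scipy.stats.f_oneway; the three groups below are hypothetical. The contrast and multiple comparison $t$ statistics would additionally use the pooled standard deviation, which is sketched after the next formula.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for I = 3 independent groups
group1 = np.array([50, 53, 48, 52, 51])
group2 = np.array([55, 57, 54, 58, 56])
group3 = np.array([49, 47, 50, 48, 51])

F, p = stats.f_oneway(group1, group2, group3)   # one way ANOVA F test
print(F, p)                                      # df between = I - 1 = 2, df error = N - I = 12
```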
Pooled standard deviation (one way ANOVA only; n.a. for the other methods)
$ \begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $

Here $s^2_i$ is the variance in group $i.$
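A short sketch computing $s_p$ from the group variances, following the first form of the formula above; the groups are the same hypothetical data used in the $F$ test sketch.

```python
import numpy as np

groups = [np.array([50, 53, 48, 52, 51]),
          np.array([55, 57, 54, 58, 56]),
          np.array([49, 47, 50, 48, 51])]   # hypothetical groups, as above

N = sum(len(g) for g in groups)                               # total sample size
I = len(groups)                                               # number of groups
ss_error = sum((len(g) - 1) * g.var(ddof=1) for g in groups)  # sum of squares error
s_p = np.sqrt(ss_error / (N - I))                             # pooled sd = sqrt(mean square error)
print(s_p)
```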
Sampling distribution of the test statistic if H0 were true
Goodness of fit test ($X^2$): approximately the chi-squared distribution with $J - 1$ degrees of freedom
One sample $z$ test for the mean ($z$): standard normal distribution
$z$ test for a single proportion ($z$): approximately the standard normal distribution
One way ANOVA:
Sampling distribution of $F$:
  • $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
  • $t$ distribution with $N - I$ degrees of freedom
Significant?
Goodness of fit test:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
One sample $z$ test for the mean and $z$ test for a single proportion:
Two sided:
  • Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
  • Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
  • Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
  • Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
One way ANOVA:
$F$ test:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)

$t$ Test for contrast two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test multiple comparisons two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
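The $p$ values and critical values above can be looked up with scipy. The snippet below verifies the worked example for the $F$ test ($F$ = 3.91, df between = 4, df error = 20) and sketches a Bonferroni-style adjustment of $t^{**}$ as one possible multiple comparison procedure; the number of comparisons is a made-up example.

```python
from scipy import stats

# p value for the worked example above: F = 3.91, df between = 4, df error = 20
p_F = stats.f.sf(3.91, 4, 20)
print(p_F)                          # falls between .01 and .025, as stated above

# Critical value F* at alpha = .05
print(stats.f.ppf(0.95, 4, 20))

# Bonferroni-adjusted two sided critical value t** for, e.g., 3 pairwise comparisons
alpha, n_comparisons, df_error = 0.05, 3, 20
print(stats.t.ppf(1 - (alpha / n_comparisons) / 2, df_error))
```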
Confidence interval
Goodness of fit test: n.a.
One sample $z$ test for the mean ($C\%$ confidence interval for $\mu$):
$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
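A minimal sketch of this interval in Python, using hypothetical values for $\bar{y}$, $\sigma$, and $N$.

```python
import numpy as np
from scipy import stats

y_bar, sigma, N = 51.5, 3.0, 8          # hypothetical sample mean, known sigma, sample size
C = 95                                  # confidence level in percent

z_star = stats.norm.ppf(0.5 + C / 200)  # 1.96 for C = 95
margin = z_star * sigma / np.sqrt(N)
print(y_bar - margin, y_bar + margin)   # C% confidence interval for mu
```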
$z$ test for a single proportion (approximate $C\%$ confidence interval for $\pi$):
Regular (large sample):
  • $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$
    where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
  • $p_{plus} \pm z^* \times \sqrt{\dfrac{p_{plus}(1 - p_{plus})}{N + 4}}$
    where $p_{plus} = \dfrac{X + 2}{N + 4}$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
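Both intervals can be computed directly from the formulas; the success count and sample size below are hypothetical.

```python
import numpy as np
from scipy import stats

X, N = 31, 120                     # hypothetical successes and sample size
z_star = stats.norm.ppf(0.975)     # critical value for a 95% interval

# Regular (large sample) interval
p = X / N
margin = z_star * np.sqrt(p * (1 - p) / N)
print(p - margin, p + margin)

# Plus four interval
p_plus = (X + 2) / (N + 4)
margin_plus = z_star * np.sqrt(p_plus * (1 - p_plus) / (N + 4))
print(p_plus - margin_plus, p_plus + margin_plus)
```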
One way ANOVA ($C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$):
Confidence interval for $\Psi$ (contrast):
  • $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
    where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
  • $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
    where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* = $ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
  • $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
    where $\bar{y}_i$ is the sample mean in group $i$, $n_i$ is the sample size of group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $N$ is the total sample size, based on all the $I$ groups.
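A sketch of the contrast and pairwise intervals using hypothetical summary statistics; the contrast coefficients are an arbitrary example and no multiple comparison adjustment is applied, so $t^{**} = t^*$.

```python
import numpy as np
from scipy import stats

# Hypothetical summary statistics from a one way ANOVA with I = 3 equal-sized groups
means = np.array([50.8, 56.0, 49.0])   # group sample means
ns = np.array([5, 5, 5])               # group sample sizes
s_p = 1.6                              # pooled standard deviation
N, I = ns.sum(), len(ns)
t_star = stats.t.ppf(0.975, N - I)     # critical value for a 95% interval, df = N - I

# CI for the contrast psi = mu_2 - (mu_1 + mu_3)/2 (coefficients sum to 0)
a = np.array([-0.5, 1.0, -0.5])
c = (a * means).sum()
margin = t_star * s_p * np.sqrt((a ** 2 / ns).sum())
print(c - margin, c + margin)

# CI for mu_1 - mu_2 without a multiple comparison adjustment (t** = t*)
diff = means[0] - means[1]
margin = t_star * s_p * np.sqrt(1 / ns[0] + 1 / ns[1])
print(diff - margin, diff + margin)
```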
Effect size
Goodness of fit test: n.a.
One sample $z$ test for the mean:
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
$z$ test for a single proportion: n.a.
One way ANOVA:
  • Proportion variance explained $\eta^2$ and $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variable: $$ \begin{align} \eta^2 = R^2 &= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}} \end{align} $$ Only in one way ANOVA $\eta^2 = R^2.$ $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to: $$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2.$

  • Cohen's $d$:
    Standardized difference between the mean in group $g$ and in group $h$: $$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$ Cohen's $d$ indicates how many standard deviations $s_p$ two sample means are removed from each other.
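The three ANOVA effect size measures can be computed from the sums of squares; the groups below are the same hypothetical data used in the earlier sketches.

```python
import numpy as np

groups = [np.array([50, 53, 48, 52, 51]),
          np.array([55, 57, 54, 58, 56]),
          np.array([49, 47, 50, 48, 51])]   # hypothetical groups, as above
scores = np.concatenate(groups)
N, I = len(scores), len(groups)

ss_total = ((scores - scores.mean()) ** 2).sum()
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_between = ss_total - ss_error
df_between, df_error = I - 1, N - I
ms_error = ss_error / df_error

eta_sq = ss_between / ss_total                                           # explained variance in the sample
omega_sq = (ss_between - df_between * ms_error) / (ss_total + ms_error)  # bias-corrected estimate
d_12 = (groups[0].mean() - groups[1].mean()) / np.sqrt(ms_error)         # Cohen's d for groups 1 and 2

print(eta_sq, omega_sq, d_12)
```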
Visual representation (one sample $z$ test for the mean only; n.a. for the other methods)
[Figure: one sample $z$ test]
ANOVA table (one way ANOVA only; n.a. for the other methods)
[Figure: ANOVA table]
Equivalent to
Goodness of fit test and one sample $z$ test for the mean: n.a.
$z$ test for a single proportion:
  • When testing two sided: goodness of fit test, with a categorical variable with 2 levels.
  • When $N$ is large, the $p$ value from the $z$ test for a single proportion approaches the $p$ value from the binomial test for a single proportion. The $z$ test for a single proportion is just a large sample approximation of the binomial test for a single proportion.
One way ANOVA:
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
  • $F$ test ANOVA is equivalent to $F$ test regression model
  • $t$ test for contrast $i$ is equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)
Example context
Goodness of fit test: Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?
One sample $z$ test for the mean: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3.$
$z$ test for a single proportion: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? Use the normal approximation for the sampling distribution of the test statistic.
One way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS
Goodness of fit test:
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
  • Put your categorical variable in the box below Test Variable List
  • Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
One sample $z$ test for the mean: n.a.
$z$ test for a single proportion:
Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
  • Put your dichotomous variable in the box below Test Variable List
  • Fill in the value for $\pi_0$ in the box next to Test Proportion
If computation time allows, SPSS will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution
One way ANOVA:
Analyze > Compare Means > One-Way ANOVA...
  • Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi
Goodness of fit test:
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
  • Put your categorical variable in the box below Variable
  • Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
One sample $z$ test for the mean: n.a.
$z$ test for a single proportion:
Frequencies > 2 Outcomes - Binomial test
  • Put your dichotomous variable in the white box at the right
  • Fill in the value for $\pi_0$ in the box next to Test value
  • Under Hypothesis, select your alternative hypothesis
Jamovi will give you the exact $p$ value based on the binomial distribution, rather than the approximate $p$ value based on the normal distribution
One way ANOVA:
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors
Practice questions