Goodness of fit test - overview
This page offers a structured overview of the goodness of fit test, presented side by side with a number of other statistical methods for comparison. Worked Python examples for several of the formulas follow the table.
Goodness of fit test | Kruskal-Wallis test | Cochran's Q test | Paired sample $t$ test | One sample $t$ test for the mean | Sign test | Logistic regression | Two sample $t$ test - equal variances assumed
---|---|---|---|---|---|---|---
Independent variable | Independent/grouping variable | Independent/grouping variable | Independent variable | Independent variable | Independent variable | Independent variables | Independent/grouping variable
None | One categorical with $I$ independent groups ($I \geqslant 2$) | One within subject factor ($\geqslant 2$ related groups) | 2 paired groups | None | 2 paired groups | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | One categorical with 2 independent groups
Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable
One categorical with $J$ independent groups ($J \geqslant 2$) | One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One quantitative of interval or ratio level | One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: the population proportions $\pi_1, \ldots, \pi_J$ are equal to prespecified values. Here $\pi_j$ is the population proportion of observations in category $j$. | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations: H0: the population medians for the $I$ groups are equal. Otherwise, formulation 1: H0: the population scores in any of the $I$ groups are not systematically higher or lower than in any of the other groups. | H0: $\pi_1 = \pi_2 = \ldots = \pi_I$. Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I$. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. | H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair). If the dependent variable is measured on a continuous scale, this can also be formulated as: H0: the population median of the difference scores is equal to 0. | Model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald test and likelihood ratio chi-squared test for an individual $\beta_k$: H0: $\beta_k = 0$. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1: the population proportions are not all equal to the values specified under H0. | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations: H1: not all population medians are equal. Otherwise, formulation 1: H1: the population scores in at least one group are systematically higher or lower than in at least one other group. | H1: not all population proportions are equal | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair); H1 right sided: P(first exceeds second) > P(second exceeds first); H1 left sided: P(first exceeds second) < P(second exceeds first) | Model chi-squared test for the complete regression model: H1: not all population regression coefficients are 0. Wald test and likelihood ratio chi-squared test for an individual $\beta_k$: H1: $\beta_k \neq 0$. | H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions
Sample is a simple random sample from the population. Sample size is large enough for $X^2$ to be approximately chi-squared distributed (rule of thumb: all expected cell counts $\geqslant 5$). | Each group sample is a simple random sample from its population; the samples are independent of one another. | Sample of blocks (subjects) is a simple random sample from the population. The number of blocks should be large for the chi-squared approximation to hold. | Difference scores are normally distributed in the population, or the sample size is large. Sample of pairs is a simple random sample. | Scores are normally distributed in the population, or the sample size is large. Sample is a simple random sample. | Sample of pairs is a simple random sample from the population of pairs. | In the population, the log odds of success is a linear function of the predictors; observations are independent; the sample is large enough for the chi-squared and normal approximations to hold. | Scores are normally distributed within each population, or both samples are large. The population variances are equal ($\sigma^2_1 = \sigma^2_2$). The two samples are independent simple random samples.
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$. Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells. | $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$. Here $N$ is the total sample size, $R_i$ is the sum of the ranks in group $i$ (ranks assigned to the pooled scores of all groups), and $n_i$ is the sample size of group $i$. | If a failure is scored as 0 and a success is scored as 1: $Q = k(k - 1) \dfrac{\sum_{groups} \Big(\mbox{group total} - \frac{\mbox{grand total}}{k}\Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$. Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores. Before computing $Q$, first exclude blocks with equal scores in all $k$ groups. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $W =$ the number of difference scores that are larger than 0 | Model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, where $D_{null}$ is the deviance of the model without predictors and $D_K$ is the deviance of the model with all $K$ predictors. The Wald statistic for an individual $\beta_k$ can be defined in two ways: as $\dfrac{b_k^2}{SE^2_{b_k}}$ (chi-squared form) or as $\dfrac{b_k}{SE_{b_k}}$ ($z$ form). Likelihood ratio chi-squared test for an individual $\beta_k$: $X^2 = D_{K-1} - D_K$, the difference in deviance between the model without and the model with predictor $k$. | $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | Pooled standard deviation
- | - | - | - | - | - | - | $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $H$ if H0 were true | Sampling distribution of $Q$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $W$ if H0 were true | Sampling distribution of $X^2$ and of the Wald statistic if H0 were true | Sampling distribution of $t$ if H0 were true
Approximately the chi-squared distribution with $J - 1$ degrees of freedom | For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used. | If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom | The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$ follows approximately the standard normal distribution if the null hypothesis were true. | Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: approximately the chi-squared distribution with $K$ degrees of freedom. Sampling distribution of the Wald statistic: if defined as $\dfrac{b_k^2}{SE^2_{b_k}}$, approximately the chi-squared distribution with 1 degree of freedom; if defined as $\dfrac{b_k}{SE_{b_k}}$, approximately the standard normal distribution. | $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom
Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant?
Check if the $X^2$ value observed in the sample is equal to or larger than the critical value $X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$. | For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: check if the $X^2$ value observed in the sample is equal to or larger than the critical value $X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$. | If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if the $X^2$ value observed in the sample is equal to or larger than the critical value $X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$. | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geqslant t^*$, or if the right sided $p$ value $\leqslant \alpha$. Left sided: check if $t \leqslant -t^*$, or if the left sided $p$ value $\leqslant \alpha$. | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geqslant t^*$, or if the right sided $p$ value $\leqslant \alpha$. Left sided: check if $t \leqslant -t^*$, or if the left sided $p$ value $\leqslant \alpha$. | If $n$ is small, the table for the binomial distribution should be used. Two sided: check if the two sided $p$ value, computed from the Binomial($n$, 0.5) distribution, is equal to or smaller than $\alpha$. If $n$ is large, the table for standard normal probabilities can be used. Two sided: check if the $z$ value observed in the sample is at least as extreme as the critical value $z^*$, or if the two sided $p$ value $\leqslant \alpha$. | For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for an individual $\beta_k$: check if the $X^2$ value observed in the sample is equal to or larger than the critical value $X^{2*}$, or if the $p$ value $\leqslant \alpha$. For the Wald test: use the table with critical chi-squared values or the table for standard normal probabilities, depending on how the Wald statistic is defined. | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geqslant t^*$. Left sided: check if $t \leqslant -t^*$.
n.a. | n.a. | n.a. | $C\%$ confidence interval for $\mu$ | $C\%$ confidence interval for $\mu$ | n.a. | Wald-type approximate $C\%$ confidence interval for $\beta_k$ | $C\%$ confidence interval for $\mu_1 - \mu_2$
- | - | - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | - | $b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). | $(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$, where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
n.a. | n.a. | n.a. | Effect size | Effect size | n.a. | Goodness of fit measure $R^2_L$ | Effect size
- | - | - | Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $d = \frac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$. | Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $d = \frac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$. | - | $R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$. There are several other goodness of fit measures in logistic regression; no single measure is universally agreed upon. | Cohen's $d$: standardized difference between the mean in group 1 and in group 2: $d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$. Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.
n.a. | n.a. | Equivalent to | Equivalent to | n.a. | Equivalent to | n.a. | Equivalent to
- | - | Friedman test, with a categorical dependent variable consisting of two independent groups. | One sample $t$ test on the difference scores, with $\mu_0 = 0$. | - | Two sided sign test is equivalent to the Friedman test with two related groups (see the Jamovi instructions below). | - | One way ANOVA with an independent variable with 2 levels ($I$ = 2): the two sided two sample $t$ test is equivalent to the one way ANOVA $F$ test, with $F = t^2$.
Example context | Example context | Example context | Example context | Example context | Example context | Example context | Example context
Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$? | Do people from different religions tend to score differently on social economic status? | Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Is the average mental health score of office workers different from $\mu_0 = 50$? | Do people tend to score higher on mental health after a mindfulness course? | Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square... | Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... | Analyze > Compare Means > Paired-Samples T Test... | Analyze > Compare Means > One-Sample T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Regression > Binary Logistic... | Analyze > Compare Means > Independent-Samples T Test...
Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi
Frequencies > N Outcomes - $\chi^2$ Goodness of fit | ANOVA > One Way ANOVA - Kruskal-Wallis | Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead; the $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | T-Tests > Paired Samples T-Test | T-Tests > One Sample T-Test | Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead; the $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | Regression > 2 Outcomes - Binomial | T-Tests > Independent Samples T-Test
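Worked examples in Python

The sketches below illustrate several of the formulas in the table with made-up data, using numpy, scipy, and statsmodels. They are minimal demonstrations, not replacements for the SPSS or Jamovi procedures above.

First, the goodness of fit statistic $X^2$ is computed by hand for the social economic status example ($\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, $\pi_{high} = 0.2$), with hypothetical observed counts, and checked against scipy.stats.chisquare.

```python
import numpy as np
from scipy import stats

observed = np.array([18, 64, 18])   # hypothetical observed cell counts (N = 100)
pi_0 = np.array([0.2, 0.6, 0.2])    # population proportions under H0
N = observed.sum()
expected = N * pi_0                 # expected cell count = N * pi_j

# X^2 = sum over the J cells of (observed - expected)^2 / expected
x2 = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1              # J - 1 degrees of freedom
p = stats.chi2.sf(x2, df)           # upper tail of the chi-squared distribution

# The same test in one call
x2_scipy, p_scipy = stats.chisquare(f_obs=observed, f_exp=expected)
print(x2, p)                        # matches (x2_scipy, p_scipy)
```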
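Next, a sketch of the Kruskal-Wallis $H$ statistic with made-up scores and no ties: ranks are assigned to the pooled scores, $H$ is computed from the rank sums $R_i$, and the result is compared with scipy.stats.kruskal (which applies the same formula, plus a tie correction when ties are present).

```python
import numpy as np
from scipy import stats

# Hypothetical scores for I = 3 independent groups (no ties)
g1 = [1, 4, 7, 10]
g2 = [2, 5, 8, 11, 12]
g3 = [3, 6, 9]

pooled = np.concatenate([g1, g2, g3])
ranks = stats.rankdata(pooled)      # ranks assigned to the pooled scores
sizes = [len(g1), len(g2), len(g3)]
N = len(pooled)

# Rank sums R_i per group (the pooled vector keeps the group order)
R = [ranks[:4].sum(), ranks[4:9].sum(), ranks[9:].sum()]

# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1)
H = 12 / (N * (N + 1)) * sum(r**2 / n for r, n in zip(R, sizes)) - 3 * (N + 1)

H_scipy, p = stats.kruskal(g1, g2, g3)
print(H, H_scipy, p)                # H matches H_scipy because there are no ties
```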
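A sketch of Cochran's $Q$ on a made-up block (subject) by group matrix of 0/1 task scores, following the formula in the table. Blocks with equal scores in all $k$ groups are excluded first (they contribute nothing to $Q$, so the result also matches statsmodels' cochrans_q, which is used as a check).

```python
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# Hypothetical data: rows are blocks (subjects), columns are k = 3 related groups (tasks)
x = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],            # equal scores in all k groups: excluded below
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 0],            # likewise excluded
              [1, 0, 1]])
k = x.shape[1]

row_sums = x.sum(axis=1)
keep = x[(row_sums > 0) & (row_sums < k)]   # drop all-0 and all-1 blocks

group_totals = keep.sum(axis=0)             # sum of the scores in each group
block_totals = keep.sum(axis=1)             # sum of the scores in each block
grand_total = keep.sum()                    # sum of all the scores

Q = (k * (k - 1) * np.sum((group_totals - grand_total / k) ** 2)
     / np.sum(block_totals * (k - block_totals)))

res = cochrans_q(x)
print(Q, res.statistic, res.pvalue)         # Q matches res.statistic
```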
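The one sample and paired sample $t$ statistics follow directly from $t = (\bar{y} - \mu_0)/(s/\sqrt{N})$. The sketch below, with made-up before/after mental health scores, also illustrates the equivalence noted in the table: the paired sample $t$ test equals a one sample $t$ test on the difference scores. The confidence interval and Cohen's $d$ from the table are computed as well.

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores before and after an intervention
before = np.array([48, 52, 55, 60, 47, 51, 58, 54])
after  = np.array([50, 57, 56, 65, 49, 55, 62, 53])
d = after - before                  # difference scores
mu_0 = 0
N = len(d)

# t = (ybar - mu_0) / (s / sqrt(N)), with N - 1 degrees of freedom
t = (d.mean() - mu_0) / (d.std(ddof=1) / np.sqrt(N))
p = 2 * stats.t.sf(abs(t), df=N - 1)        # two sided p value

t_paired, p_paired = stats.ttest_rel(after, before)
t_one, p_one = stats.ttest_1samp(d, popmean=mu_0)
print(t, t_paired, t_one)                   # all three t values agree

# C% confidence interval for mu: ybar +/- t* s / sqrt(N)  (here C = 95)
t_star = stats.t.ppf(0.975, df=N - 1)
half_width = t_star * d.std(ddof=1) / np.sqrt(N)
ci = (d.mean() - half_width, d.mean() + half_width)

# Cohen's d = (ybar - mu_0) / s
cohens_d = (d.mean() - mu_0) / d.std(ddof=1)
print(ci, cohens_d)
```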
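For the sign test, $W$ is the number of positive difference scores and follows a Binomial($n$, 0.5) distribution under H0. A sketch with made-up difference scores, using scipy's exact binomial test and the large-sample normal approximation from the table:

```python
import numpy as np
from scipy import stats

# Hypothetical difference scores (after - before); zero differences are dropped
d = np.array([2, 5, 1, 5, 2, 4, 4, -1, 3, -2, 1, 2])
d = d[d != 0]

W = int(np.sum(d > 0))              # number of difference scores larger than 0
n = len(d)                          # positive + negative differences

# Exact test: W ~ Binomial(n, 0.5) under H0
p_exact = stats.binomtest(W, n, p=0.5).pvalue   # two sided

# Large-sample approximation: z = (W - n * 0.5) / sqrt(n * 0.5 * (1 - 0.5))
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
p_approx = 2 * stats.norm.sf(abs(z))
print(W, p_exact, z, p_approx)
```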
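For logistic regression, the sketch below simulates data loosely matching the diabetes example (body mass index, stress level, gender) and fits the model with statsmodels. The fitted result exposes the model chi-squared statistic ($X^2 = D_{null} - D_K$) as llr, the Wald $z$ statistics ($b_k / SE_{b_k}$) as tvalues, Wald-type confidence intervals via conf_int(), and McFadden's $R^2_L$ as prsquared. The simulated coefficients are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Hypothetical predictors: body mass index, stress level, gender (0/1 code variable)
bmi = rng.normal(25, 4, n)
stress = rng.normal(50, 10, n)
gender = rng.integers(0, 2, n)

# Simulate a 0/1 diabetes diagnosis from an arbitrary logistic model
log_odds = -9 + 0.25 * bmi + 0.04 * stress + 0.3 * gender
y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(np.column_stack([bmi, stress, gender]))
fit = sm.Logit(y, X).fit(disp=0)

print(fit.llr, fit.llr_pvalue)      # model chi-squared test: X^2 = D_null - D_K
print(fit.tvalues)                  # Wald z = b_k / SE_b_k for each coefficient
print(fit.conf_int())               # Wald-type 95% intervals: b_k +/- z* SE_b_k
print(fit.prsquared)                # R^2_L = (D_null - D_K) / D_null
```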
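Finally, a sketch for the two sample $t$ test with equal variances assumed, using made-up mental health scores for men and women: the pooled standard deviation $s_p$, the $t$ statistic, the confidence interval for $\mu_1 - \mu_2$, and Cohen's $d$ are computed by hand and compared with scipy. The last line checks the equivalence with the one way ANOVA $F$ test ($F = t^2$).

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores for two independent groups
men   = np.array([52, 48, 55, 60, 47, 51, 58, 54, 49, 56])
women = np.array([50, 57, 53, 62, 59, 55, 61, 52, 58])
n1, n2 = len(men), len(women)

# Pooled standard deviation: s_p = sqrt(((n1-1) s1^2 + (n2-1) s2^2) / (n1 + n2 - 2))
s_p = np.sqrt(((n1 - 1) * men.var(ddof=1) + (n2 - 1) * women.var(ddof=1))
              / (n1 + n2 - 2))

se = s_p * np.sqrt(1 / n1 + 1 / n2)         # standard error of ybar1 - ybar2
t = (men.mean() - women.mean()) / se
t_scipy, p = stats.ttest_ind(men, women, equal_var=True)

# 95% confidence interval for mu_1 - mu_2
t_star = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (men.mean() - women.mean() - t_star * se,
      men.mean() - women.mean() + t_star * se)

# Cohen's d = (ybar1 - ybar2) / s_p
d = (men.mean() - women.mean()) / s_p

F, p_anova = stats.f_oneway(men, women)
print(t, t_scipy, ci, d)
print(F, t_scipy**2)                        # F equals t^2, and p equals p_anova
```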