Goodness of fit test - overview
This page offers structured overviews of one or more selected methods.
Goodness of fit test | Kruskal-Wallis test | Cochran's Q test | Paired sample $t$ test | One sample $t$ test for the mean | Sign test | Marginal Homogeneity test / Stuart-Maxwell test | Wilcoxon signed-rank test
---|---|---|---|---|---|---|---
Independent variable | Independent/grouping variable | Independent/grouping variable | Independent variable | Independent variable | Independent variable | Independent variable | Independent variable
None | One categorical with $I$ independent groups ($I \geq 2$) | One within subject factor ($\geq 2$ related groups) | 2 paired groups | None | 2 paired groups | 2 paired groups | 2 paired groups
Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable
One categorical with $J$ independent groups ($J \geq 2$) | One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One quantitative of interval or ratio level | One of ordinal level | One categorical with $J$ independent groups ($J \geq 2$) | One quantitative of interval or ratio level
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: the population proportions $\pi_1, \pi_2, \ldots, \pi_J$ are equal to the hypothesized proportions. Here $\pi_j$ is the hypothesized population proportion for category $j$ of the dependent variable. | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H0: the population medians of the $I$ groups are equal.
Otherwise, a more general formulation is:
H0: the scores in any of the $I$ populations are not systematically higher or lower than the scores in any of the other populations. | H0: $\pi_1 = \pi_2 = \ldots = \pi_k$
Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_k$ is the population proportion of 'successes' for group $k.$ | H0: $\mu = \mu_0$
Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. | H0: P(first score of a pair larger than second score of that pair) = P(second score of a pair larger than first score of that pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as: H0: the population median of the difference scores is equal to zero. | H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group.
Here $\pi_j$ is the population proportion in category $j.$ | H0: $m = 0$
Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1: the population proportions are not all equal to the hypothesized proportions | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H1: not all population medians are equal.
Otherwise, a more general formulation is:
H1: the scores in at least one of the $I$ populations are systematically higher or lower than the scores in at least one of the other populations. | H1: not all population proportions are equal | H1 two sided: $\mu \neq \mu_0$ H1 right sided: $\mu > \mu_0$ H1 left sided: $\mu < \mu_0$ | H1 two sided: $\mu \neq \mu_0$ H1 right sided: $\mu > \mu_0$ H1 left sided: $\mu < \mu_0$ | H1 two sided: P(first score of a pair larger than second score of that pair) $\neq$ P(second score of a pair larger than first score of that pair) H1 right sided: P(first score larger) > P(second score larger) H1 left sided: P(first score larger) < P(second score larger) | H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group. | H1 two sided: $m \neq 0$ H1 right sided: $m > 0$ H1 left sided: $m < 0$
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions
Sampled scores are independent of each other, and the sample size is large enough for the chi-squared approximation (a common rule of thumb: all expected cell counts are at least 5) | Group 1 sample is a random sample from population 1, group 2 sample is an independent random sample from population 2, and so on for the remaining groups | Sample of blocks (usually subjects) is a random sample from the population, and scores are independent across blocks | Difference scores are normally distributed in the population, or the sample size is large; the sample of pairs is a random sample from the population of pairs | Scores are normally distributed in the population, or the sample size is large; the sample is a random sample from the population | Sample of pairs is a random sample from the population of pairs | Sample of pairs is a random sample from the population of pairs, and the sample size is large enough for the chi-squared approximation | The population distribution of the difference scores is symmetric; the sample of pairs is a random sample from the population of pairs
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells. | $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$
Here $N$ is the total sample size, $R_i$ is the sum of the ranks in group $i$, and $n_i$ is the sample size of group $i$. The ranks are computed over the scores of all groups combined. | If a failure is scored as 0 and a success is scored as 1:
$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$ Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores. Before computing $Q$, first exclude blocks with equal scores in all $k$ groups. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $W =$ the number of difference scores that are larger than 0 | Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:

1. For each pair, compute the difference score, and exclude pairs with a difference score of zero. Denote the number of remaining pairs by $N_r$.
2. Rank the absolute values of the difference scores from smallest to largest (tied scores receive the average of the ranks involved).
3. $W_1$ is the sum of the ranks belonging to the positive difference scores. $W_2$ is the sum of the signed ranks, i.e. each rank multiplied by the sign of its difference score.
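The formulas in this row translate directly into code. Below is a minimal sketch in Python (NumPy/SciPy) of the goodness of fit statistic $X^2$ and Cochran's $Q$; the data arrays are made up purely for illustration:

```python
import numpy as np
from scipy import stats

# Chi-squared goodness of fit: X^2 = sum (observed - expected)^2 / expected
observed = np.array([18, 55, 27])            # hypothetical counts for J = 3 categories
pi_0 = np.array([0.2, 0.6, 0.2])             # hypothesized population proportions
expected = observed.sum() * pi_0             # expected cell count = N * pi_j
X2 = np.sum((observed - expected) ** 2 / expected)
p_X2 = stats.chi2.sf(X2, df=len(observed) - 1)   # chi-squared with J - 1 df

# scipy's built-in routine computes the same statistic:
assert np.isclose(X2, stats.chisquare(observed, expected).statistic)

# Cochran's Q: binary scores, one row per block (subject), one column per group
y = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],                     # equal scores in all k groups:
              [0, 1, 0],                     # this block is excluded below
              [1, 1, 0]])
k = y.shape[1]
row_sums = y.sum(axis=1)
y = y[(row_sums > 0) & (row_sums < k)]       # exclude all-0 and all-1 blocks
group_total = y.sum(axis=0)                  # sum of scores per group
block_total = y.sum(axis=1)                  # sum of scores per block
grand_total = y.sum()
Q = k * (k - 1) * np.sum((group_total - grand_total / k) ** 2) \
    / np.sum(block_total * (k - block_total))
p_Q = stats.chi2.sf(Q, df=k - 1)             # approximation for many blocks
```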
Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $H$ if H0 were true | Sampling distribution of $Q$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $W$ if H0 were true | Sampling distribution of the test statistic if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true
Approximately the chi-squared distribution with $J - 1$ degrees of freedom | For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used. | If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom | The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true. | Approximately the chi-squared distribution with $J - 1$ degrees of freedom | Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated.
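As a concrete illustration of these sampling distributions, here is a short Python sketch (with made-up difference scores) that computes the sign test's exact binomial $p$ value and its normal approximation, as well as $W_1$ and its $z$ approximation. Note that the simple $\sigma_{W_1}$ formula below ignores the tie correction mentioned above:

```python
import numpy as np
from scipy import stats

# Hypothetical difference scores (first score minus second score of each pair)
d = np.array([2, 5, -1, 3, 4, -2, 6, 1, 3, -1, 2, 4])
d = d[d != 0]                                 # exclude zero differences

# Sign test: W ~ Binomial(n, 0.5) if H0 were true
W = int(np.sum(d > 0))                        # number of positive difference scores
n = len(d)                                    # positive + negative differences
p_exact = stats.binomtest(W, n, p=0.5).pvalue
z_sign = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))   # large-sample z

# Wilcoxon signed-rank: W1 = sum of the ranks of the positive difference scores
N_r = len(d)
ranks = stats.rankdata(np.abs(d))             # ties receive average ranks
W1 = ranks[d > 0].sum()
mu_W1 = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z_W1 = (W1 - mu_W1) / sigma_W1                # approximately N(0, 1) for large N_r
```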
Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant?
Reject H0 if the observed $X^2$ is equal to or larger than the critical value $X^{2*}$, or if the $p$ value is equal to or smaller than the significance level $\alpha$ | For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: reject H0 if $X^2 \geq X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$ | If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: reject H0 if $X^2 \geq X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$ | Two sided: reject H0 if $t \leq -t^*$ or $t \geq t^*$, or if the $p$ value is equal to or smaller than $\alpha$. Right sided: reject H0 if $t \geq t^*$. Left sided: reject H0 if $t \leq -t^*$. Here $t^*$ is the positive critical $t$ value for the chosen significance level | Two sided: reject H0 if $t \leq -t^*$ or $t \geq t^*$, or if the $p$ value is equal to or smaller than $\alpha$. Right sided: reject H0 if $t \geq t^*$. Left sided: reject H0 if $t \leq -t^*$ | If $n$ is small, the table for the binomial distribution should be used: two sided: reject H0 if the two sided $p$ value following from the Binomial($n$, 0.5) distribution is equal to or smaller than $\alpha$. If $n$ is large, the table for standard normal probabilities can be used: two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, or if the $p$ value is equal to or smaller than $\alpha$ | If we denote the test statistic as $X^2$: reject H0 if $X^2 \geq X^{2*}$, or if the $p$ value is equal to or smaller than $\alpha$ | For large samples, the table for standard normal probabilities can be used: two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$, or if the $p$ value is equal to or smaller than $\alpha$
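Rather than looking them up in a table, the critical values above can be computed with software. A small sketch (Python/SciPy), assuming $\alpha = .05$ and example degrees of freedom:

```python
from scipy import stats

alpha = 0.05

# Critical X^2 value, e.g. for J - 1 = 2 degrees of freedom:
X2_crit = stats.chi2.ppf(1 - alpha, df=2)    # ~5.991; reject H0 if X^2 >= X2_crit

# Two sided critical t value, e.g. for N - 1 = 20 degrees of freedom:
t_crit = stats.t.ppf(1 - alpha / 2, df=20)   # ~2.086; reject H0 if |t| >= t_crit

# Two sided critical z value for the large-sample sign test / Wilcoxon z:
z_crit = stats.norm.ppf(1 - alpha / 2)       # ~1.960; reject H0 if |z| >= z_crit
```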
n.a. | n.a. | n.a. | $C\%$ confidence interval for $\mu$ | $C\%$ confidence interval for $\mu$ | n.a. | n.a. | n.a.
- | - | - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a two sided significance test: H0: $\mu = \mu_0$ is rejected at significance level $\alpha = 1 - C/100$ if and only if $\mu_0$ falls outside the interval. | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a two sided significance test: H0: $\mu = \mu_0$ is rejected at significance level $\alpha = 1 - C/100$ if and only if $\mu_0$ falls outside the interval. | - | - | -
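The same critical value drives the confidence interval. A minimal sketch (Python/SciPy) with made-up scores and $N = 21$, so df $= 20$:

```python
import numpy as np
from scipy import stats

y = np.array([52, 48, 55, 51, 49, 60, 47, 53, 50, 54,
              49, 51, 56, 52, 48, 50, 53, 55, 47, 52, 51])   # hypothetical data
N = len(y)
C = 95                                         # confidence level in percent

t_star = stats.t.ppf(0.5 + C / 200, df=N - 1)  # area C/100 between -t* and t*
se = y.std(ddof=1) / np.sqrt(N)                # standard error s / sqrt(N)
ci = (y.mean() - t_star * se, y.mean() + t_star * se)
# With df = 20 and C = 95, t_star reproduces the 2.086 quoted above.
```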
n.a. | n.a. | n.a. | Effect size | Effect size | n.a. | n.a. | n.a.
- | - | - | Cohen's $d$: Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$ | Cohen's $d$: Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$ | - | - | -
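Cohen's $d$ is a one-liner once the (difference) scores are available; a sketch with made-up data:

```python
import numpy as np

d_scores = np.array([2.1, 0.5, 1.8, -0.3, 1.2, 0.9, 2.4, 0.1])  # hypothetical
mu_0 = 0
cohens_d = (d_scores.mean() - mu_0) / d_scores.std(ddof=1)  # d = (ybar - mu_0) / s
```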
n.a. | n.a. | n.a. | Visual representation | Visual representation | n.a. | n.a. | n.a.
- | - | - | (figure omitted) | (figure omitted) | - | - | -
n.a. | n.a. | Equivalent to | Equivalent to | n.a. | Equivalent to | n.a. | n.a.
- | - | Friedman test, with a categorical dependent variable consisting of two independent groups. | One sample $t$ test on the difference scores, with $\mu_0 = 0$. | - | The two sided sign test is equivalent to the Friedman test with two related groups (see the Jamovi note below). | - | -
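The paired sample / one sample equivalence stated above is easy to verify numerically; a sketch with made-up paired scores:

```python
import numpy as np
from scipy import stats

before = np.array([50, 47, 55, 49, 52, 46, 51, 53])   # hypothetical paired scores
after = np.array([53, 49, 54, 52, 57, 48, 54, 56])

paired = stats.ttest_rel(after, before)
one_sample = stats.ttest_1samp(after - before, popmean=0)

# Same t and p values, confirming the equivalence:
assert np.isclose(paired.statistic, one_sample.statistic)
assert np.isclose(paired.pvalue, one_sample.pvalue)
```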
Example context | Example context | Example context | Example context | Example context | Example context | Example context | Example context
Is the proportion of people with low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$? | Do people from different religions tend to score differently on socioeconomic status? | Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Is the average mental health score of office workers different from $\mu_0 = 50$? | Do people tend to score higher on mental health after a mindfulness course? | Subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types of mayonnaise they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best? | Is the median of the differences between the mental health scores before and after an intervention different from 0?
SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square... | Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... | Analyze > Compare Means > Paired-Samples T Test... | Analyze > Compare Means > One-Sample T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | n.a. | Jamovi
Frequencies > N Outcomes - $\chi^2$ Goodness of fit | ANOVA > One Way ANOVA - Kruskal-Wallis | Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | T-Tests > Paired Samples T-Test | T-Tests > One Sample T-Test | Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | - | T-Tests > Paired Samples T-Test
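For readers working outside SPSS and Jamovi, most of these tests are also available in Python's scipy.stats; a sketch with made-up data (Cochran's Q and the marginal homogeneity test are not in SciPy, though the statsmodels package offers implementations):

```python
import numpy as np
from scipy import stats

# Goodness of fit (SPSS: Legacy Dialogs > Chi-square...)
stats.chisquare(f_obs=[18, 55, 27], f_exp=[20, 60, 20])

# Kruskal-Wallis test (SPSS: K Independent Samples...)
g1, g2, g3 = [3, 5, 2, 6], [7, 9, 8], [1, 4, 2, 3]   # hypothetical ordinal scores
stats.kruskal(g1, g2, g3)

# Paired sample and one sample t tests (SPSS: Compare Means)
before = np.array([50, 47, 55, 49, 52, 46, 51, 53])
after = np.array([53, 49, 54, 52, 57, 48, 54, 56])
stats.ttest_rel(after, before)
stats.ttest_1samp(after, popmean=50)

# Sign test (SPSS: 2 Related Samples...): exact binomial test on the signs
d = after - before
stats.binomtest(int(np.sum(d > 0)), int(np.sum(d != 0)), p=0.5)

# Wilcoxon signed-rank test (SPSS: 2 Related Samples...)
stats.wilcoxon(after, before)
```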