Goodness of fit test - overview
This page offers structured overviews of five statistical methods, presented side by side so that they can be compared. Illustrative code examples for each method follow the table.
| Goodness of fit test | Paired sample $t$ test | Kruskal-Wallis test | Wilcoxon signed-rank test | One sample Wilcoxon signed-rank test |
|---|---|---|---|---|
| Independent variable | Independent variable | Independent/grouping variable | Independent variable | Independent variable |
| None | 2 paired groups | One categorical with $I$ independent groups ($I \geqslant 2$) | 2 paired groups | None |
| Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable |
| One categorical with $J$ independent groups ($J \geqslant 2$) | One quantitative of interval or ratio level | One of ordinal level | One quantitative of interval or ratio level | One of ordinal level |
| Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
| H0: the population proportions in each of the $J$ categories are equal to the hypothesized proportions $\pi_1, \ldots, \pi_J$. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | If the dependent variable is measured on a continuous scale and the shape of its distribution is the same in all $I$ populations: H0: the population medians of the $I$ groups are equal. Otherwise, formulation 1: H0: the scores in any of the $I$ populations are not systematically higher or lower than the scores in any of the other populations. | H0: $m = 0$. Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher. | H0: $m = m_0$. Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
| H1: the population proportions are not all equal to the hypothesized proportions. | H1 two sided: $\mu \neq \mu_0$. H1 right sided: $\mu > \mu_0$. H1 left sided: $\mu < \mu_0$. | If the dependent variable is measured on a continuous scale and the shape of its distribution is the same in all $I$ populations: H1: not all of the population medians are equal. Otherwise, formulation 1: H1: the scores in some of the $I$ populations are systematically higher or lower than the scores in other populations. | H1 two sided: $m \neq 0$. H1 right sided: $m > 0$. H1 left sided: $m < 0$. | H1 two sided: $m \neq m_0$. H1 right sided: $m > m_0$. H1 left sided: $m < m_0$. |
| Assumptions | Assumptions | Assumptions | Assumptions | Assumptions |
| Sample is a simple random sample from the population. Sample size is large enough for $X^2$ to be approximately chi-squared distributed; a common rule of thumb is that all $J$ expected cell counts are 5 or more. | Difference scores are normally distributed in the population. Sample of difference scores is a simple random sample from the population of difference scores. | Each of the $I$ samples is a simple random sample from its own population, and the $I$ samples are independent of each other. | Distribution of the difference scores in the population is symmetric. Sample of difference scores is a simple random sample from the population of difference scores. | Distribution of the scores in the population is symmetric. Sample is a simple random sample from the population. |
| Test statistic | Test statistic | Test statistic | Test statistic | Test statistic |
| $X^2 = \sum{\frac{(\text{observed cell count} - \text{expected cell count})^2}{\text{expected cell count}}}$. Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$. Here $N$ is the total sample size, $R_i$ is the sum of the ranks in group $i$, and $n_i$ is the sample size of group $i$; the ranks are assigned to the pooled sample of all $N$ observations. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In short: remove the difference scores that are equal to 0, and rank the remaining $N_r$ difference scores by their absolute value. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the signed ranks, i.e. the sum of the ranks of the positive difference scores minus the sum of the ranks of the negative difference scores. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In short: compute the difference scores $y - m_0$, remove those equal to 0, and rank the remaining $N_r$ difference scores by their absolute value. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the signed ranks, i.e. the sum of the ranks of the positive difference scores minus the sum of the ranks of the negative difference scores. |
| Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $H$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true |
| Approximately the chi-squared distribution with $J - 1$ degrees of freedom | $t$ distribution with $N - 1$ degrees of freedom | For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used. | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated. | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated. |
| Significant? | Significant? | Significant? | Significant? | Significant? |
| The test is always right sided. Reject H0 if $X^2$ is larger than the critical value $X^{2*}$ under the chi-squared distribution with $J - 1$ degrees of freedom, or if the $p$ value is smaller than the significance level $\alpha$. | Two sided: reject H0 if $\lvert t \rvert$ is larger than the critical value $t^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $t$ is larger than the critical value, or if the right sided $p$ value is smaller than $\alpha$. Left sided: reject H0 if $t$ is smaller than the critical value, or if the left sided $p$ value is smaller than $\alpha$. | For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: reject H0 if $X^2$ is larger than the critical value $X^{2*}$, or if the $p$ value is smaller than $\alpha$ (the test is always right sided). | For large samples, the table for standard normal probabilities can be used. Two sided: reject H0 if $\lvert z \rvert$ is larger than the critical value $z^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $z$ is larger than the critical value, or if the right sided $p$ value is smaller than $\alpha$. Left sided: reject H0 if $z$ is smaller than the critical value, or if the left sided $p$ value is smaller than $\alpha$. | For large samples, the table for standard normal probabilities can be used. Two sided: reject H0 if $\lvert z \rvert$ is larger than the critical value $z^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $z$ is larger than the critical value, or if the right sided $p$ value is smaller than $\alpha$. Left sided: reject H0 if $z$ is smaller than the critical value, or if the left sided $p$ value is smaller than $\alpha$. |
| n.a. | $C\%$ confidence interval for $\mu$ | n.a. | n.a. | n.a. |
| - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | - | - | - |
| n.a. | Effect size | n.a. | n.a. | n.a. |
| - | Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $d = \frac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$. | - | - | - |
| n.a. | Visual representation | n.a. | n.a. | n.a. |
| - | (Figure not included here: a visual representation of the paired sample $t$ test.) | - | - | - |
| n.a. | Equivalent to | n.a. | n.a. | n.a. |
| - | One sample $t$ test on the difference scores; also equivalent to a repeated measures ANOVA with one within-subjects factor with 2 levels. | - | - | - |
| Example context | Example context | Example context | Example context | Example context |
| Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, and $\pi_{high} = 0.2$? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Do people from different religions tend to score differently on social economic status? | Is the median of the differences between the mental health scores before and after an intervention different from 0? | Is the median mental health score of office workers different from $m_0 = 50$? |
| SPSS | SPSS | SPSS | SPSS | SPSS |
| Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square... | Analyze > Compare Means > Paired-Samples T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to: Analyze > Nonparametric Tests > One Sample... |
| Jamovi | Jamovi | Jamovi | Jamovi | Jamovi |
| Frequencies > N Outcomes - $\chi^2$ Goodness of fit | T-Tests > Paired Samples T-Test | ANOVA > One Way ANOVA - Kruskal-Wallis | T-Tests > Paired Samples T-Test (under Tests, select the Wilcoxon rank option) | T-Tests > One Sample T-Test (under Tests, select the Wilcoxon rank option) |
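Code examples

The sketches below are minimal Python examples using `scipy.stats`, with made-up data chosen to mirror the example contexts in the table. All counts, scores, and variable names are illustrative assumptions, not real data; treat the snippets as sketches rather than canonical implementations.

First, a goodness of fit test for the social economic status example, checking hypothetical observed counts against the hypothesized proportions $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, $\pi_{high} = 0.2$:

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts for low / moderate / high SES
observed = np.array([28, 52, 20])
N = observed.sum()

# Expected cell counts under H0: N * pi_j
pi = np.array([0.2, 0.6, 0.2])
expected = N * pi

# X^2 = sum (observed - expected)^2 / expected, df = J - 1 = 2
x2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"X^2 = {x2:.3f}, p = {p:.4f}")  # reject H0 if p < alpha
```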
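Next, a paired sample $t$ test on hypothetical before/after mental health scores, together with the $C\%$ confidence interval and Cohen's $d$ defined in the table:

```python
import numpy as np
from scipy import stats

before = np.array([52, 48, 55, 60, 47, 51, 58, 49, 54, 50])
after = np.array([55, 50, 57, 63, 46, 55, 61, 52, 57, 54])

# The paired t test is a one sample t test on the difference scores
d_scores = after - before
N = len(d_scores)
t, p = stats.ttest_rel(after, before)  # two sided by default
print(f"t = {t:.3f}, p = {p:.4f}")

# 95% CI for mu: ybar +/- t* x s / sqrt(N)
ybar, s = d_scores.mean(), d_scores.std(ddof=1)
t_star = stats.t.ppf(0.975, df=N - 1)  # area 0.95 between -t* and t*
lo, hi = ybar - t_star * s / np.sqrt(N), ybar + t_star * s / np.sqrt(N)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")

# Cohen's d with mu_0 = 0: how many sd's ybar is removed from mu_0
print(f"Cohen's d = {ybar / s:.2f}")
```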
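A Kruskal-Wallis test for the religion/SES example, computed once with `scipy.stats.kruskal` (which corrects for ties) and once directly from the $H$ formula in the table (no tie correction, so the two values can differ slightly when ties are present). The group scores are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical ordinal SES scores for I = 3 groups
g1 = [3, 5, 4, 6, 2, 5]
g2 = [7, 6, 8, 5, 7]
g3 = [4, 3, 5, 4, 6, 3, 4]

h, p = stats.kruskal(g1, g2, g3)  # approx. chi-squared with I - 1 df
print(f"H = {h:.3f}, p = {p:.4f}")

# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1),
# with ranks assigned over the pooled sample
pooled = np.concatenate([g1, g2, g3])
N = len(pooled)
ranks = stats.rankdata(pooled)
sizes = [len(g1), len(g2), len(g3)]
rank_sums = [r.sum() for r in np.split(ranks, np.cumsum(sizes)[:-1])]
H = 12 / (N * (N + 1)) * sum(R**2 / n for R, n in zip(rank_sums, sizes)) - 3 * (N + 1)
print(f"H without tie correction = {H:.3f}")
```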
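A Wilcoxon signed-rank test on the same hypothetical paired scores, plus a by-hand computation of $W_1$ and the large-sample $z$ statistic from the table. Note that for a two sided test `scipy.stats.wilcoxon` reports the smaller of the two rank sums rather than $W_1$ itself; the $p$ value is the part to interpret:

```python
import numpy as np
from scipy import stats

before = np.array([52, 48, 55, 60, 47, 51, 58, 49, 54, 50])
after = np.array([55, 50, 57, 63, 46, 55, 61, 52, 57, 54])

w, p = stats.wilcoxon(after, before)  # zero differences are dropped by default
print(f"W = {w:.1f}, p = {p:.4f}")

# W_1: rank the non-zero absolute difference scores,
# then sum the ranks of the positive differences
d = after - before
d = d[d != 0]
Nr = len(d)  # number of non-zero difference scores
ranks = stats.rankdata(np.abs(d))
W1 = ranks[d > 0].sum()

# Normal approximation for large Nr (ignores the tie correction)
mu_W1 = Nr * (Nr + 1) / 4
sigma_W1 = np.sqrt(Nr * (Nr + 1) * (2 * Nr + 1) / 24)
z = (W1 - mu_W1) / sigma_W1
print(f"W1 = {W1:.1f}, z = {z:.3f}")
```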
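Finally, a one sample Wilcoxon signed-rank test of H0: $m = 50$ for the office workers example, obtained by applying the same routine to the difference scores $y - m_0$ (the scores are invented):

```python
import numpy as np
from scipy import stats

scores = np.array([47, 52, 55, 43, 58, 49, 51, 46, 54, 48])
m0 = 50

# One sample version: test the difference scores y - m0 against 0
w, p = stats.wilcoxon(scores - m0)
print(f"W = {w:.1f}, p = {p:.4f}")  # reject H0: m = 50 if p < alpha
```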