Goodness of fit test - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison (max. 3) by clicking on the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table. Illustrative Python sketches of the test statistics in this overview are given after the table.
Goodness of fit test | Two sample $t$ test - equal variances assumed | One sample Wilcoxon signed-rank test | $z$ test for the difference between two proportions | Pearson correlation
---|---|---|---|---
Independent variable | Independent/grouping variable | Independent variable | Independent/grouping variable | Variable 1
None | One categorical with 2 independent groups | None | One categorical with 2 independent groups | One quantitative of interval or ratio level
Dependent variable | Dependent variable | Dependent variable | Dependent variable | Variable 2
One categorical with $J$ independent groups ($J \geqslant 2$) | One quantitative of interval or ratio level | One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: the population proportions in each of the $J$ conditions are equal to specific pre-specified values $\pi_1, \ldots, \pi_J$. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. | H0: $m = m_0$. Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. | H0: $\rho = \rho_0$. Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1: the population proportions are not all equal to the values specified under the null hypothesis. | H1 two sided: $\mu_1 \neq \mu_2$. H1 right sided: $\mu_1 > \mu_2$. H1 left sided: $\mu_1 < \mu_2$. | H1 two sided: $m \neq m_0$. H1 right sided: $m > m_0$. H1 left sided: $m < m_0$. | H1 two sided: $\pi_1 \neq \pi_2$. H1 right sided: $\pi_1 > \pi_2$. H1 left sided: $\pi_1 < \pi_2$. | H1 two sided: $\rho \neq \rho_0$. H1 right sided: $\rho > \rho_0$. H1 left sided: $\rho < \rho_0$.
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions of test for correlation
Sample size is large enough for the $X^2$ statistic to approximately follow the chi-squared distribution (a common rule of thumb: all expected cell counts are at least 5). Sample is a simple random sample from the population. | Within each population, the scores on the dependent variable are normally distributed. The population standard deviations are equal: $\sigma_1 = \sigma_2$. Group 1 and group 2 are independent simple random samples from their respective populations. | The population distribution of the scores is symmetric. Sample is a simple random sample from the population. | Sample sizes are large enough for the normal approximation to hold (a common rule of thumb: the numbers of successes and failures in each group are all at least 5). Group 1 and group 2 are independent simple random samples from their respective populations. | In the population, the two variables are jointly (bivariately) normally distributed. Sample is a simple random sample from the population.
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$ Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells. | $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$ Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. | Two different types of test statistics can be used; both will result in the same test outcome. We denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute either test statistic: for each subject, compute the difference between the score and $m_0$, and remove the differences that are equal to zero (the number of remaining differences is $N_r$); then rank the absolute values of the differences from smallest to largest. $W_1$ is the sum of the ranks of the positive differences; $W_2$ is the sum of the ranks of the positive differences minus the sum of the ranks of the negative differences (i.e. the sum of the signed ranks). | $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$ Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$. | Test statistic for testing H0: $\rho = 0$: $t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}}$, where $r$ is the sample Pearson correlation and $N$ is the sample size. Approximate test statistic for testing H0: $\rho = \rho_0$ with $\rho_0$ other than 0: $z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\dfrac{1}{\sqrt{N - 3}}}$, where $r_{Fisher} = \frac{1}{2} \times \ln\left(\dfrac{1 + r}{1 - r}\right)$ is the Fisher-transformed sample correlation and $\rho_{0_{Fisher}}$ is the Fisher-transformed $\rho_0$.
n.a. | Pooled standard deviation | n.a. | n.a. | n.a.
- | $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$ | - | - | -
Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $t$ and of $z$ if H0 were true
Approximately the chi-squared distribution with $J - 1$ degrees of freedom | $t$ distribution with $n_1 + n_2 - 2$ degrees of freedom | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated. | Approximately the standard normal distribution | Sampling distribution of $t$: $t$ distribution with $N - 2$ degrees of freedom. Sampling distribution of $z$: approximately the standard normal distribution.
Significant? | Significant? | Significant? | Significant? | Significant?
Reject H0 if the observed $X^2$ is larger than the critical value, or if the (right sided) $p$ value is smaller than the significance level $\alpha$. | Two sided: reject H0 if $\lvert t \rvert$ is larger than the critical value $t^*$, or if the two sided $p$ value is smaller than $\alpha$. Right sided: reject H0 if $t$ is larger than $t^*$, or if the right sided $p$ value is smaller than $\alpha$. Left sided: reject H0 if $t$ is smaller than $-t^*$, or if the left sided $p$ value is smaller than $\alpha$. | For large samples, the table for standard normal probabilities can be used. Two sided: reject H0 if $\lvert z \rvert$ is larger than the critical value $z^*$, or if the two sided $p$ value is smaller than $\alpha$; right and left sided tests are performed analogously with the right and left sided $p$ values. For small samples, use the exact distribution of $W_1$ or $W_2$. | Two sided: reject H0 if $\lvert z \rvert$ is larger than the critical value $z^*$, or if the two sided $p$ value is smaller than $\alpha$; right and left sided tests are performed analogously. | $t$ test (H0: $\rho = 0$), two sided: reject H0 if $\lvert t \rvert$ is larger than the critical value $t^*$, or if the two sided $p$ value is smaller than $\alpha$; one sided tests are performed analogously. $z$ test (H0: $\rho = \rho_0$): compare $z$ with the standard normal critical values in the same way.
n.a. | $C\%$ confidence interval for $\mu_1 - \mu_2$ | n.a. | Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ | Approximate $C\%$ confidence interval for $\rho$
- | $(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. | - | Regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$ where the critical value $z^*$ is the value under the standard normal distribution with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). | First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$: lower bound $L_{Fisher} = r_{Fisher} - z^* \times \dfrac{1}{\sqrt{N - 3}}$ and upper bound $U_{Fisher} = r_{Fisher} + z^* \times \dfrac{1}{\sqrt{N - 3}}$, where $r_{Fisher}$ is the Fisher-transformed sample correlation and $z^*$ is the value under the standard normal distribution with the area $C / 100$ between $-z^*$ and $z^*$. Then transform back to get the approximate $C\%$ confidence interval for $\rho$: lower bound $= \dfrac{e^{2 L_{Fisher}} - 1}{e^{2 L_{Fisher}} + 1}$, upper bound $= \dfrac{e^{2 U_{Fisher}} - 1}{e^{2 U_{Fisher}} + 1}$.
n.a. | Effect size | n.a. | n.a. | Properties of the Pearson correlation coefficient
- | Cohen's $d$: standardized difference between the mean in group 1 and in group 2: $d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$. Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other. | - | - | The Pearson correlation coefficient is always between $-1$ and $1$: $-1$ indicates a perfect negative linear relationship, $0$ indicates no linear relationship, and $1$ indicates a perfect positive linear relationship. It only measures the strength and direction of the linear relationship between the two variables, and its value does not change when either variable undergoes a positive linear transformation.
n.a. | Visual representation | n.a. | n.a. | n.a.
- | (visual representation of the two sample $t$ test not included in this overview) | - | - | -
n.a. | Equivalent to | n.a. | Equivalent to | Equivalent to
- | One way ANOVA with an independent variable with 2 levels ($I$ = 2): the two sided $p$ value of the $t$ test is equal to the $p$ value of the one way ANOVA $F$ test, and $t^2 = F$. | - | When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. | OLS regression with one independent variable: the standardized regression coefficient is equal to the Pearson correlation $r$, and the $t$ test for the regression slope gives the same two sided $p$ value as the $t$ test for the correlation.
Example context | Example context | Example context | Example context | Example context
Is the proportion of people with a low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women. | Is the median mental health score of office workers different from $m_0 = 50$? | Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. | Is there a linear relationship between physical health and mental health?
SPSS | SPSS | SPSS | SPSS | SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square... | Analyze > Compare Means > Independent-Samples T Test... | Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to: Analyze > Nonparametric Tests > One Sample... | SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs... | Analyze > Correlate > Bivariate...
Jamovi | Jamovi | Jamovi | Jamovi | Jamovi
Frequencies > N Outcomes - $\chi^2$ Goodness of fit | T-Tests > Independent Samples T-Test | T-Tests > One Sample T-Test | Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association | Regression > Correlation Matrix
Practice questions | Practice questions | Practice questions | Practice questions | Practice questions
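
Below is a minimal Python sketch of the goodness of fit test, using the socioeconomic-status example context from the table. The observed counts are made up purely for illustration; `scipy.stats.chisquare` reproduces the same $X^2$ statistic and right sided $p$ value as the manual calculation.

```python
import numpy as np
from scipy import stats

# Hypothesized population proportions (example context: low / moderate / high SES)
pi_0 = np.array([0.2, 0.6, 0.2])

# Observed cell counts (made-up data for illustration)
observed = np.array([28, 95, 37])
N = observed.sum()

# Expected cell counts under H0: N * pi_j
expected = N * pi_0

# X^2 = sum over cells of (observed - expected)^2 / expected
x2 = np.sum((observed - expected) ** 2 / expected)
df = len(observed) - 1                # J - 1 degrees of freedom
p_value = stats.chi2.sf(x2, df)       # right sided p value

print(f"X2 = {x2:.3f}, df = {df}, p = {p_value:.4f}")

# The same test in one call:
print(stats.chisquare(f_obs=observed, f_exp=expected))
```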
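
The next sketch covers the two sample $t$ test assuming equal variances, including the pooled standard deviation, the confidence interval for $\mu_1 - \mu_2$, and Cohen's $d$. The two samples are simulated for illustration; `scipy.stats.ttest_ind` with `equal_var=True` reproduces the manual $t$ and two sided $p$ value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y1 = rng.normal(loc=52, scale=10, size=40)   # group 1 (simulated scores)
y2 = rng.normal(loc=48, scale=10, size=35)   # group 2 (simulated scores)
n1, n2 = len(y1), len(y2)

# Pooled standard deviation
s_p = np.sqrt(((n1 - 1) * np.var(y1, ddof=1) + (n2 - 1) * np.var(y2, ddof=1))
              / (n1 + n2 - 2))

# Test statistic and two sided p value (t distribution with n1 + n2 - 2 df)
se = s_p * np.sqrt(1 / n1 + 1 / n2)
t = (y1.mean() - y2.mean()) / se
df = n1 + n2 - 2
p_two_sided = 2 * stats.t.sf(abs(t), df)

# 95% confidence interval for mu_1 - mu_2
t_star = stats.t.ppf(0.975, df)
ci = (y1.mean() - y2.mean() - t_star * se, y1.mean() - y2.mean() + t_star * se)

# Effect size: Cohen's d
d = (y1.mean() - y2.mean()) / s_p

print(f"t = {t:.3f}, df = {df}, p = {p_two_sided:.4f}")
print(f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Cohen's d = {d:.3f}")
print(stats.ttest_ind(y1, y2, equal_var=True))   # same t and two sided p
```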
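
The following sketch illustrates the one sample Wilcoxon signed-rank test against a hypothesized median $m_0 = 50$ (the office-worker example context), computing the $W_1$ statistic and its large-sample normal approximation as described in the table. The scores are simulated, and the formulas assume no ties among the absolute differences; `scipy.stats.wilcoxon` performs the same test, although its reported statistic may be defined slightly differently.

```python
import numpy as np
from scipy import stats

m0 = 50
rng = np.random.default_rng(2)
y = rng.normal(loc=53, scale=8, size=30)   # simulated scores

# Differences from m0; drop differences equal to zero
d = y - m0
d = d[d != 0]
n_r = len(d)

# Rank the absolute differences; W1 = sum of the ranks of the positive differences
ranks = stats.rankdata(np.abs(d))
w1 = ranks[d > 0].sum()

# Large-sample normal approximation (no-ties formulas)
mu_w1 = n_r * (n_r + 1) / 4
sigma_w1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)
z = (w1 - mu_w1) / sigma_w1
p_two_sided = 2 * stats.norm.sf(abs(z))

print(f"W1 = {w1:.1f}, z = {z:.3f}, p = {p_two_sided:.4f}")

# scipy's implementation; its two sided p value should be close to the manual one
print(stats.wilcoxon(y - m0))
```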
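
This sketch covers the $z$ test for the difference between two proportions and the regular large-sample confidence interval for $\pi_1 - \pi_2$, using made-up smoking counts for men and women in the spirit of the example context. Only `numpy` and `scipy` are needed.

```python
import numpy as np
from scipy import stats

# Made-up counts: number of successes (smokers) and sample size per group
x1, n1 = 48, 240    # group 1 (e.g. men)
x2, n2 = 31, 255    # group 2 (e.g. women)

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

# Test statistic: pooled proportion in the standard error
se_pooled = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se_pooled
p_two_sided = 2 * stats.norm.sf(abs(z))

# Approximate 95% confidence interval for pi_1 - pi_2 (regular large-sample version)
z_star = stats.norm.ppf(0.975)
se_unpooled = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = (p1 - p2 - z_star * se_unpooled, p1 - p2 + z_star * se_unpooled)

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.3f}, p = {p_two_sided:.4f}")
print(f"95% CI for pi_1 - pi_2: ({ci[0]:.3f}, {ci[1]:.3f})")
```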
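
Finally, a sketch of the Pearson correlation: the $t$ test for H0: $\rho = 0$ and the Fisher-transformation confidence interval for $\rho$. The two variables are simulated for illustration; `scipy.stats.pearsonr` gives the same $r$ and two sided $p$ value as the manual $t$ computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=60)
y = 0.5 * x + rng.normal(scale=0.9, size=60)   # simulated, correlated with x
N = len(x)

# Sample Pearson correlation and t test for H0: rho = 0
r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)
p_two_sided = 2 * stats.t.sf(abs(t), df=N - 2)
print(f"r = {r:.3f}, t = {t:.3f}, p = {p_two_sided:.4f}")
print(stats.pearsonr(x, y))                    # same r and two sided p value

# Approximate 95% confidence interval for rho via the Fisher transformation
r_fisher = np.arctanh(r)                       # 0.5 * ln((1 + r) / (1 - r))
z_star = stats.norm.ppf(0.975)
lo = r_fisher - z_star / np.sqrt(N - 3)
hi = r_fisher + z_star / np.sqrt(N - 3)
ci_rho = (np.tanh(lo), np.tanh(hi))            # transform back to the rho scale
print(f"95% CI for rho: ({ci_rho[0]:.3f}, {ci_rho[1]:.3f})")
```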