Goodness of fit test - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking on the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Goodness of fit test  Kruskal-Wallis test  Cochran's Q test  Paired sample $t$ test  One sample $t$ test for the mean  Sign test  Binomial test for a single proportion  Two sample $t$ test - equal variances not assumed  


Independent variable  Independent/grouping variable  Independent/grouping variable  Independent variable  Independent variable  Independent variable  Independent variable  Independent/grouping variable  
None  One categorical with $I$ independent groups ($I \geqslant 2$)  One within subject factor ($\geq 2$ related groups)  2 paired groups  None  2 paired groups  None  One categorical with 2 independent groups  
Dependent variable  Dependent variable  Dependent variable  Dependent variable  Dependent variable  Dependent variable  Dependent variable  Dependent variable  
One categorical with $J$ independent groups ($J \geqslant 2$)  One of ordinal level  One categorical with 2 independent groups  One quantitative of interval or ratio level  One quantitative of interval or ratio level  One of ordinal level  One categorical with 2 independent groups  One quantitative of interval or ratio level  
Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  Null hypothesis  
H_{0}: the population proportions in each of the $J$ groups are equal to the proportions specified in the null hypothesis ($\pi_1, \pi_2, \ldots, \pi_J$)  If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H_{0}: the population medians for the $I$ groups are equal
Else:
H_{0}: the scores in any of the $I$ populations are not systematically higher or lower than the scores in any of the other populations  H_{0}: $\pi_1 = \pi_2 = \ldots = \pi_I$
Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I$.  H_{0}: $\mu = \mu_0$
Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.  H_{0}: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.  H_{0}: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair). If the dependent variable is measured on a continuous scale, this can also be formulated as: H_{0}: the population median of the difference scores is equal to zero  H_{0}: $\pi = \pi_0$
Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of 'successes' according to the null hypothesis.  H_{0}: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.  
Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  Alternative hypothesis  
H_{1}: the population proportions are not all equal to the proportions specified in the null hypothesis  If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H_{1}: not all population medians are equal
Else:
H_{1}: the scores in at least one of the $I$ populations are systematically higher or lower than the scores in another population  H_{1}: not all population proportions are equal  H_{1} two sided: $\mu \neq \mu_0$ H_{1} right sided: $\mu > \mu_0$ H_{1} left sided: $\mu < \mu_0$  H_{1} two sided: $\mu \neq \mu_0$ H_{1} right sided: $\mu > \mu_0$ H_{1} left sided: $\mu < \mu_0$  H_{1} two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair) H_{1} right sided: P(first exceeds second) > P(second exceeds first) H_{1} left sided: P(first exceeds second) < P(second exceeds first)  H_{1} two sided: $\pi \neq \pi_0$ H_{1} right sided: $\pi > \pi_0$ H_{1} left sided: $\pi < \pi_0$  H_{1} two sided: $\mu_1 \neq \mu_2$ H_{1} right sided: $\mu_1 > \mu_2$ H_{1} left sided: $\mu_1 < \mu_2$  
Assumptions  Assumptions  Assumptions  Assumptions  Assumptions  Assumptions  Assumptions  Assumptions  
Sample size is large enough for $X^2$ to be approximately chi-squared distributed; a common rule of thumb is that every expected cell count is at least 5. Sample is a simple random sample from the population.  Group 1 sample is a simple random sample from population 1, group 2 sample is an independent simple random sample from population 2, $\ldots$, group $I$ sample is an independent simple random sample from population $I$.  Sample of blocks (usually the subjects) is a simple random sample from the population; that is, blocks are independent of one another.  Difference scores are normally distributed in the population, or the sample size is large. Sample of difference scores is a simple random sample from the population of difference scores.  Scores are normally distributed in the population, or the sample size is large. Sample is a simple random sample from the population.  Sample of pairs is a simple random sample from the population of pairs; that is, pairs are independent of one another.  Sample is a simple random sample from the population; that is, observations are independent of one another.  Within each population, the scores on the dependent variable are normally distributed, or both sample sizes are large. The two group samples are independent simple random samples from their respective populations.  
Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  Test statistic  
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.  $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$
Here $N$ is the total sample size, $R_i$ is the sum of the ranks in group $i$, and $n_i$ is the sample size of group $i$.  If a failure is scored as 0 and a success is scored as 1:
$Q = k(k - 1) \dfrac{\sum_{groups} \Big(\mbox{group total} - \frac{\mbox{grand total}}{k}\Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$ Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores. Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.  $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.  $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.  $W =$ the number of difference scores that are larger than 0  $X =$ the number of successes in the sample  $t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.  
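As a computational cross-check, here is a minimal Python sketch of three of the test statistics above, computed directly from their formulas. All data values and variable names are hypothetical; they do not come from this table.

```python
import numpy as np

# Goodness of fit: X^2 = sum((observed - expected)^2 / expected),
# with expected count per cell = N * pi_j.
observed = np.array([18, 55, 27])      # observed cell counts (made up)
pi_0 = np.array([0.2, 0.6, 0.2])       # proportions under H0
expected = observed.sum() * pi_0
X2 = np.sum((observed - expected) ** 2 / expected)

# One sample / paired sample t: t = (ybar - mu_0) / (s / sqrt(N)).
# For the paired version, y holds the difference scores and mu_0 is usually 0.
y = np.array([1.2, -0.4, 0.8, 2.1, 0.3, 1.5])
mu_0 = 0
t_one = (y.mean() - mu_0) / (y.std(ddof=1) / np.sqrt(len(y)))

# Two sample t, equal variances not assumed:
# t = (ybar1 - ybar2) / sqrt(s1^2/n1 + s2^2/n2).
y1 = np.array([52.0, 48.0, 55.0, 60.0, 51.0])
y2 = np.array([45.0, 50.0, 42.0, 47.0])
se = np.sqrt(y1.var(ddof=1) / len(y1) + y2.var(ddof=1) / len(y2))
t_welch = (y1.mean() - y2.mean()) / se

print(X2, t_one, t_welch)
```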
Sampling distribution of $X^2$ if H_{0} were true  Sampling distribution of $H$ if H_{0} were true  Sampling distribution of $Q$ if H_{0} were true  Sampling distribution of $t$ if H_{0} were true  Sampling distribution of $t$ if H_{0} were true  Sampling distribution of $W$ if H_{0} were true  Sampling distribution of $X$ if H_{0} were true  Sampling distribution of $t$ if H_{0} were true  
Approximately the chi-squared distribution with $J - 1$ degrees of freedom  For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used.  If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom  $t$ distribution with $N - 1$ degrees of freedom  $t$ distribution with $N - 1$ degrees of freedom  The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1 - P)} = \sqrt{n \times 0.5 (1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5 (1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.  Binomial($n$, $P$) distribution.
Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis).  Approximately the $t$ distribution with $k$ degrees of freedom, with $k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$ or $k =$ the smaller of $n_1 - 1$ and $n_2 - 1$. The first definition of $k$ is used by computer programs; the second is often used for hand calculations.  
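The Welch degrees of freedom and the large-sample $z$ for the sign test are easy to get wrong by hand. A small sketch following the definitions above; the inputs and helper names are hypothetical.

```python
import numpy as np

# Welch degrees of freedom, first definition (used by computer programs):
# k = (s1^2/n1 + s2^2/n2)^2 / [ (s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1) ]
def welch_df(s1_sq, n1, s2_sq, n2):
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Sign test, normal approximation for large n:
# z = (W - n*0.5) / sqrt(n*0.5*(1 - 0.5)), n = #positive + #negative differences.
def sign_test_z(W, n):
    return (W - n * 0.5) / np.sqrt(n * 0.25)

print(welch_df(25.0, 30, 16.0, 25))   # e.g. s1^2 = 25, n1 = 30, s2^2 = 16, n2 = 25
print(sign_test_z(W=18, n=25))
```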
Significant?  Significant?  Significant?  Significant?  Significant?  Significant?  Significant?  Significant?  
Check if the $X^2$ observed in sample is equal to or larger than the critical value $X^{2*}$, or find the $p$ value corresponding to the observed $X^2$ and check if it is equal to or smaller than $\alpha$  For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: check if the $H$ observed in sample is equal to or larger than the critical value $X^{2*}$, or find the $p$ value corresponding to the observed $H$ and check if it is equal to or smaller than $\alpha$  If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if the $Q$ observed in sample is equal to or larger than the critical value $X^{2*}$, or find the $p$ value corresponding to the observed $Q$ and check if it is equal to or smaller than $\alpha$  Two sided: check if the $t$ observed in sample is at least as extreme as the critical value $t^*$, or find the two sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$. Right sided and left sided: analogous, with a one sided critical value and a one sided $p$ value  Two sided: check if the $t$ observed in sample is at least as extreme as the critical value $t^*$, or find the two sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$. Right sided and left sided: analogous, with a one sided critical value and a one sided $p$ value  If $n$ is small, the table for the binomial distribution should be used. Two sided: find the two sided $p$ value corresponding to the observed $W$ and check if it is equal to or smaller than $\alpha$.
If $n$ is large, the table for standard normal probabilities can be used. Two sided: check if the $z$ observed in sample is at least as extreme as the critical value $z^*$, or find the two sided $p$ value corresponding to the observed $z$ and check if it is equal to or smaller than $\alpha$  Two sided: find the two sided $p$ value corresponding to the observed $X$ and check if it is equal to or smaller than $\alpha$. Right sided and left sided: analogous, with a one sided $p$ value  Two sided: check if the $t$ observed in sample is at least as extreme as the critical value $t^*$, or find the two sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$. Right sided and left sided: analogous, with a one sided critical value and a one sided $p$ value  
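Instead of printed tables, the critical values and $p$ values in this row can be looked up with scipy.stats. The distributions and degrees of freedom follow the row above; the observed statistics and $\alpha = 0.05$ below are made up for illustration.

```python
from scipy import stats

alpha = 0.05

# Critical chi-squared value X2* with df = J - 1 (here J = 3), and the
# p value for a hypothetical observed X^2 of 6.3:
chi2_crit = stats.chi2.ppf(1 - alpha, df=2)
p_chi2 = stats.chi2.sf(6.3, df=2)

# Two sided t test with df = N - 1 (here N = 21): critical t* and the
# two sided p value for a hypothetical observed t = 2.4.
t_crit = stats.t.ppf(1 - alpha / 2, df=20)
p_t_two_sided = 2 * stats.t.sf(abs(2.4), df=20)

# Large-sample sign test: two sided p value for a hypothetical z = 1.8.
p_z_two_sided = 2 * stats.norm.sf(abs(1.8))

print(chi2_crit, p_chi2, t_crit, p_t_two_sided, p_z_two_sided)
```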
n.a.  n.a.  n.a.  $C\%$ confidence interval for $\mu$  $C\%$ confidence interval for $\mu$  n.a.  n.a.  Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$  
      $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.  $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.      $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.  
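A sketch of the $C\%$ confidence interval for $\mu$, with hypothetical scores; $t^*$ is obtained from the $t_{N-1}$ distribution exactly as described above (for $C = 95$ and df = 20 this reproduces $t^* = 2.086$). The two sample interval works the same way, with the Welch degrees of freedom and the standard error $\sqrt{s^2_1/n_1 + s^2_2/n_2}$.

```python
import numpy as np
from scipy import stats

C = 95  # confidence level in percent

# C% CI for mu: ybar +/- t* * s/sqrt(N), t* from the t distribution with N - 1 df.
y = np.array([48.0, 52.0, 50.0, 55.0, 47.0, 53.0])  # made-up scores
N = len(y)
t_star = stats.t.ppf(0.5 + C / 200, df=N - 1)  # area C/100 between -t* and t*
half_width = t_star * y.std(ddof=1) / np.sqrt(N)
print((y.mean() - half_width, y.mean() + half_width))
```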
n.a.  n.a.  n.a.  Effect size  Effect size  n.a.  n.a.  n.a.  
      Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$.  Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$.        
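Cohen's $d$ as defined here is just the $t$ test numerator divided by $s$ instead of the standard error. A one-line sketch with hypothetical difference scores:

```python
import numpy as np

y = np.array([1.2, -0.4, 0.8, 2.1, 0.3, 1.5])  # made-up difference scores
mu_0 = 0
d = (y.mean() - mu_0) / y.std(ddof=1)  # standardized distance of ybar from mu_0
print(d)
```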
n.a.  n.a.  n.a.  Visual representation  Visual representation  n.a.  n.a.  Visual representation  
          
n.a.  n.a.  Equivalent to  Equivalent to  n.a.  Equivalent to  n.a.  n.a.  
    Friedman test, with a categorical dependent variable consisting of two independent groups.  One sample $t$ test for the mean, with the difference scores as the dependent variable.    Two sided sign test is equivalent to the binomial test for a single proportion, with the sign of the difference scores as the dependent variable and $\pi_0 = 0.5$.      
Example context  Example context  Example context  Example context  Example context  Example context  Example context  Example context  
Is the proportion of people with a low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?  Do people from different religions tend to score differently on socioeconomic status?  Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?  Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?  Is the average mental health score of office workers different from $\mu_0 = 50$?  Do people tend to score higher on mental health after a mindfulness course?  Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$?  Is the average mental health score different between men and women?  
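For readers working outside SPSS or Jamovi, each example context maps onto a standard Python call. The scipy and statsmodels functions below are real; the data are invented. The sign test has no dedicated scipy function, so it is run as the equivalent binomial test with $P = 0.5$.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import cochrans_q

stats.chisquare([18, 55, 27], f_exp=[20, 60, 20])      # goodness of fit
stats.kruskal([3, 1, 4], [2, 2, 5], [6, 4, 4])         # Kruskal-Wallis
cochrans_q(np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1],
                     [1, 0, 0], [1, 1, 0]]))           # Cochran's Q (subjects x tasks)
stats.ttest_rel([5, 6, 7, 4], [6, 8, 6, 5])            # paired sample t test
stats.ttest_1samp([48, 52, 50, 55, 47], popmean=50)    # one sample t test
stats.binomtest(k=8, n=10, p=0.5)                      # sign test, via binomial
stats.binomtest(k=12, n=80, p=0.2)                     # binomial test for a proportion
stats.ttest_ind([52, 48, 55, 60], [45, 50, 42, 47],
                equal_var=False)                       # two sample t test, Welch
```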
SPSS  SPSS  SPSS  SPSS  SPSS  SPSS  SPSS  SPSS  
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
 Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
 Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
 Analyze > Compare Means > Paired-Samples T Test...
 Analyze > Compare Means > One-Sample T Test...
 Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
 Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
 Analyze > Compare Means > Independent-Samples T Test...
 
Jamovi  Jamovi  Jamovi  Jamovi  Jamovi  Jamovi  Jamovi  Jamovi  
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
 ANOVA > One Way ANOVA - Kruskal-Wallis
 Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
 T-Tests > Paired Samples T-Test
 T-Tests > One Sample T-Test
 Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
 Frequencies > 2 Outcomes - Binomial test
 T-Tests > Independent Samples T-Test
 
Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  Practice questions  