Kruskal-Wallis test - overview
This page offers structured overviews of one or more selected methods, presented side by side for comparison.
Kruskal-Wallis test | Chi-squared test for the relationship between two categorical variables | One sample Wilcoxon signed-rank test
---|---|---
Independent/grouping variable | Independent/column variable | Independent variable
One categorical with $I$ independent groups ($I \geqslant 2$) | One categorical with $I$ independent groups ($I \geqslant 2$) | None
Dependent variable | Dependent/row variable | Dependent variable
One of ordinal level | One categorical with $J$ independent groups ($J \geqslant 2$) | One of ordinal level
Null hypothesis | Null hypothesis | Null hypothesis
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:

H0: the population medians of the $I$ groups are equal

More generally:

Formulation 1: H0: the scores in any of the $I$ populations are not systematically higher or lower than the scores in any of the other populations

| H0: there is no association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:

H0: the distribution of the dependent variable is the same in each of the $I$ populations

| H0: $m = m_0$

Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. |
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:

H1: not all of the population medians are equal

More generally:

Formulation 1: H1: the scores in at least one of the $I$ populations are systematically higher or lower than the scores in at least one of the other populations

| H1: there is an association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:

H1: the distribution of the dependent variable is not the same in all of the $I$ populations

| H1 two sided: $m \neq m_0$

H1 right sided: $m > m_0$

H1 left sided: $m < m_0$ |
Assumptions | Assumptions | Assumptions
Group 1 sample is a random sample from population 1, group 2 sample is an independent random sample from population 2, $\ldots$, group $I$ sample is an independent random sample from population $I$. That is, within and between groups, observations are independent of one another.
| Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis. Rule of thumb: all expected cell counts are at least 1, and no more than 20% of the expected cell counts are below 5. Sample is a simple random sample from the population, and each observation falls into exactly one cell.
| The population distribution of the scores is symmetric. Sample is a simple random sample from the population; observations are independent of one another. |
Test statistic | Test statistic | Test statistic
$H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$

Here $N$ is the total sample size, $R_i$ is the sum of the ranks in group $i$, and $n_i$ is the sample size of group $i$. The ranks are computed by ranking all $N$ scores together, from smallest to largest.

| $X^2 = \sum{\dfrac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$

Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells.

| Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below:

  1. For each subject, compute the difference score $d = $ score $- m_0$.
  2. Remove difference scores of zero from the data, and let $N_r$ denote the number of remaining difference scores.
  3. Rank the absolute values of the difference scores $|d|$ from smallest to largest.
  4. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the signed ranks, i.e. each rank multiplied by the sign of its difference score. |
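The $H$ and $X^2$ formulas above can be checked numerically. Below is a minimal sketch using hypothetical data (the group scores and the contingency table are made up for illustration); it computes both statistics directly from the formulas and, as a sanity check, compares them with SciPy's built-in implementations.

```python
import numpy as np
from scipy import stats

# --- Kruskal-Wallis H, from the formula above ---
# Hypothetical scores for I = 3 independent groups (tie-free, so no tie correction is needed).
groups = [np.array([12.0, 15.0, 14.0, 11.0]),
          np.array([18.0, 20.0, 17.0, 16.0, 19.0]),
          np.array([10.0, 13.0, 9.0, 8.0])]

pooled = np.concatenate(groups)
N = pooled.size
ranks = stats.rankdata(pooled)                 # rank all N scores together

sizes = [g.size for g in groups]
bounds = np.cumsum([0] + sizes)
R = [ranks[bounds[i]:bounds[i + 1]].sum() for i in range(len(groups))]  # rank sum per group

H = 12.0 / (N * (N + 1)) * sum(r ** 2 / n for r, n in zip(R, sizes)) - 3 * (N + 1)
H_scipy, _ = stats.kruskal(*groups)            # should agree with H

# --- Chi-squared X^2, from the formula above ---
# Hypothetical I x J table of observed counts.
observed = np.array([[20, 30],
                     [25, 25],
                     [15, 35]])
row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / observed.sum()  # row total * column total / total sample size
X2 = ((observed - expected) ** 2 / expected).sum()

X2_scipy, p, dof, exp_scipy = stats.chi2_contingency(observed, correction=False)
```

With tie-free data, the hand-computed `H` matches `stats.kruskal` exactly; `stats.chi2_contingency` is called with `correction=False` so that it uses the plain $X^2$ formula rather than Yates' continuity correction.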
Sampling distribution of $H$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true
For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used. | Approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom | Sampling distribution of $W_1$:

If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:

If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated. |
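The computation just described can be sketched in a few lines. The scores and $m_0 = 50$ below are hypothetical values chosen for illustration (tie-free, so the plain $\sigma$ formulas apply); the sketch also shows that $z$ comes out identical whether it is built from $W_1$ or $W_2$, which is why both statistics give the same test outcome.

```python
import numpy as np
from scipy import stats

# Hypothetical scores and null-hypothesis median (assumed values, for illustration only)
scores = np.array([44.0, 51.0, 39.0, 58.0, 46.0, 47.0, 40.0, 55.0, 41.0, 38.0])
m0 = 50.0

d = scores - m0
d = d[d != 0]                           # remove difference scores of zero
Nr = d.size                             # number of remaining difference scores
ranks = stats.rankdata(np.abs(d))       # rank |d| from smallest to largest

W1 = ranks[d > 0].sum()                 # sum of ranks of positive difference scores
W2 = float(np.sum(np.sign(d) * ranks))  # sum of signed ranks

mu_W1 = Nr * (Nr + 1) / 4
sigma_W1 = np.sqrt(Nr * (Nr + 1) * (2 * Nr + 1) / 24)
sigma_W2 = np.sqrt(Nr * (Nr + 1) * (2 * Nr + 1) / 6)

z1 = (W1 - mu_W1) / sigma_W1
z2 = W2 / sigma_W2                      # identical to z1, since W2 = 2*W1 - Nr*(Nr + 1)/2
p_two_sided = 2 * stats.norm.sf(abs(z1))  # large-sample two sided p value
```

For a small sample like this one, the exact distribution (e.g. via `stats.wilcoxon`) would normally be preferred over the normal approximation; the approximation is shown here only to illustrate the formulas.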
Significant? | Significant? | Significant?
For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$:

  * Check if $X^2$ observed in sample is equal to or larger than the critical value $X^{2*}$ or
  * Find the $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

| For large samples, the table with critical $X^2$ values can be used:

  * Check if $X^2$ observed in sample is equal to or larger than the critical value $X^{2*}$ or
  * Find the $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$

| For large samples, the table for standard normal probabilities can be used:

Two sided:

  * Check if $z$ observed in sample is at least as extreme as the critical value $z^*$ or
  * Find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

Right sided:

  * Check if $z$ observed in sample is equal to or larger than the critical value $z^*$ or
  * Find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

Left sided:

  * Check if $z$ observed in sample is equal to or smaller than the critical value $z^*$ or
  * Find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$ |
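In code, the large-sample decision rule for the Kruskal-Wallis test amounts to a comparison against a chi-squared critical value or, equivalently, against $\alpha$. A minimal sketch follows; the observed $H$, the number of groups, and $\alpha$ are assumed values for illustration.

```python
from scipy import stats

H = 9.76        # hypothetical observed Kruskal-Wallis H (assumed value)
I = 3           # number of groups (assumed)
alpha = 0.05    # significance level (assumed)

df = I - 1
crit = stats.chi2.ppf(1 - alpha, df)   # critical value X^2*
p = stats.chi2.sf(H, df)               # p value for observed X^2 = H

# The two checks are equivalent: H >= crit holds exactly when p <= alpha
significant = (H >= crit) and (p <= alpha)
```

The same pattern applies to the chi-squared test (with $(I-1)\times(J-1)$ degrees of freedom) and, via `stats.norm`, to the $z$ statistic of the one sample Wilcoxon signed-rank test.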
Example context | Example context | Example context
Do people from different religions tend to score differently on socioeconomic status? | Is there an association between economic class and gender? In other words: is the distribution of economic class different between men and women? | Is the median mental health score of office workers different from $m_0 = 50$? |
SPSS | SPSS | SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... | Analyze > Descriptive Statistics > Crosstabs... | Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:

Analyze > Nonparametric Tests > One Sample... |
Jamovi | Jamovi | Jamovi
ANOVA > One Way ANOVA - Kruskal-Wallis | Frequencies > Independent Samples - $\chi^2$ test of association | T-Tests > One Sample T-Test |
Practice questions | Practice questions | Practice questions