Chi-squared test for the relationship between two categorical variables: overview
This page offers a structured, side-by-side overview of the selected methods.
Methods compared: the chi-squared test for the relationship between two categorical variables, the one sample Wilcoxon signed-rank test, and the $z$ test for a single proportion.

Independent/column variable
Chi-squared test: one categorical with $I$ independent groups ($I \geqslant 2$)
Wilcoxon test: none
$z$ test: none

Dependent/row variable
Chi-squared test: one categorical with $J$ independent groups ($J \geqslant 2$)
Wilcoxon test: one of ordinal level
$z$ test: one categorical with 2 independent groups
Null hypothesis
Chi-squared test: H_{0}: there is no association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable: H_{0}: the distribution of the dependent variable is the same in each of the $I$ populations.
Wilcoxon test: H_{0}: $m = m_0$. Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis.
$z$ test: H_{0}: $\pi = \pi_0$. Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.
Alternative hypothesis
Chi-squared test: H_{1}: there is an association between the row and column variable. More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable: H_{1}: the distribution of the dependent variable is not the same in all of the $I$ populations.
Wilcoxon test: H_{1} two sided: $m \neq m_0$; H_{1} right sided: $m > m_0$; H_{1} left sided: $m < m_0$.
$z$ test: H_{1} two sided: $\pi \neq \pi_0$; H_{1} right sided: $\pi > \pi_0$; H_{1} left sided: $\pi < \pi_0$.
Assumptions
Chi-squared test: the sample size is large enough for $X^2$ to approximately follow the chi-squared distribution (a common rule of thumb: all expected cell counts are 5 or more); there are $I$ independent simple random samples from the $I$ populations defined by the independent variable.
Wilcoxon test: the population distribution of the scores is symmetric; the sample is a simple random sample from the population.
$z$ test: the sample size $N$ is large enough for $z$ to approximately follow the standard normal distribution; the sample is a simple random sample from the population.
Test statistic
Chi-squared test: $X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$. Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells.
Wilcoxon test: two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below:
1. For each subject, compute the difference score: the score minus $m_0$.
2. Remove difference scores equal to zero; let $N_r$ be the number of remaining difference scores.
3. Rank the absolute values of the remaining difference scores, assigning average ranks in case of ties.
4. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the signed ranks (the ranks of the positive difference scores minus the ranks of the negative difference scores).
$z$ test: $z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$. Here $p$ is the sample proportion of successes: $\dfrac{X}{N}$, $N$ is the sample size, and $\pi_0$ is the population proportion of successes according to the null hypothesis.
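As an illustration, the $X^2$ computation can be sketched in Python. The contingency table below is hypothetical; scipy is used only to cross-check the manual calculation of expected counts, $X^2$, and the right-sided $p$ value:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 contingency table: rows = gender, columns = economic class
observed = np.array([[28, 45, 27],
                     [34, 38, 28]])

row_tot = observed.sum(axis=1, keepdims=True)      # row totals
col_tot = observed.sum(axis=0, keepdims=True)      # column totals
n = observed.sum()                                 # total sample size
expected = row_tot * col_tot / n                   # row total x column total / N, per cell
x2 = ((observed - expected) ** 2 / expected).sum() # X^2 statistic
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p = stats.chi2.sf(x2, dof)                         # right-sided p value

# Cross-check against scipy's built-in test
x2_ref, p_ref, dof_ref, exp_ref = stats.chi2_contingency(observed, correction=False)
```

Here `correction=False` is used so that scipy computes the plain $X^2$ statistic shown above (the Yates continuity correction only applies to 2x2 tables anyway).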
Sampling distribution if H_{0} were true
Chi-squared test ($X^2$): approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom.
Wilcoxon test ($W_1$ and $W_2$):
Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated.
$z$ test ($z$): approximately the standard normal distribution.
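The normal approximation for $W_1$ and $W_2$ can be sketched in Python, using hypothetical scores (`scipy.stats.rankdata` assigns average ranks to ties). The two standardized statistics coincide, which illustrates why both versions of the test statistic give the same test outcome:

```python
import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_one_sample(scores, m0):
    """One sample Wilcoxon signed-rank test via the normal approximation.

    Returns (W1, W2, z1, z2): W1 is the sum of ranks of positive differences,
    W2 the sum of signed ranks; z1 and z2 are their standardized versions.
    """
    d = np.asarray(scores, dtype=float) - m0
    d = d[d != 0]                        # drop zero differences
    n_r = len(d)                         # N_r: number of remaining scores
    ranks = rankdata(np.abs(d))          # rank |differences|, ties get average ranks
    w1 = ranks[d > 0].sum()              # W1: sum of ranks of positive differences
    w2 = float(np.sum(np.sign(d) * ranks))  # W2: sum of signed ranks
    mu_w1 = n_r * (n_r + 1) / 4
    sigma_w1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)
    sigma_w2 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 6)
    z1 = (w1 - mu_w1) / sigma_w1
    z2 = w2 / sigma_w2
    return w1, w2, z1, z2

# Hypothetical mental health scores, tested against m0 = 50
scores = [53, 47, 61, 42, 58, 65, 44, 50, 49, 57]
w1, w2, z1, z2 = wilcoxon_one_sample(scores, m0=50)
p_two_sided = 2 * norm.sf(abs(z1))
```

Note that $W_2 = 2W_1 - N_r(N_r + 1)/2$, so the two standardized statistics are algebraically identical.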
Significant?
Chi-squared test: significant if the observed $X^2$ is in the rejection region, or equivalently, if the right-sided $p$ value (the area under the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom beyond the observed $X^2$) is smaller than $\alpha$.
Wilcoxon test: for large samples, the table for standard normal probabilities can be used. Two sided: significant if $|z| \geqslant z^*$; right sided: significant if $z \geqslant z^*$; left sided: significant if $z \leqslant -z^*$. Here $z^*$ is the critical value for significance level $\alpha$; equivalently, significant if the $p$ value is smaller than $\alpha$.
$z$ test: two sided: significant if $|z| \geqslant z^*$; right sided: significant if $z \geqslant z^*$; left sided: significant if $z \leqslant -z^*$. Here $z^*$ is the critical value for significance level $\alpha$; equivalently, significant if the $p$ value is smaller than $\alpha$.
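The critical values referred to above can be looked up with scipy rather than a printed table; a minimal sketch, assuming $\alpha = 0.05$ and, for the chi-squared test, a table with $(I - 1)(J - 1) = 2$ degrees of freedom:

```python
from scipy.stats import norm, chi2

alpha = 0.05
z_star_two_sided = norm.ppf(1 - alpha / 2)  # critical value for a two sided z test
z_star_one_sided = norm.ppf(1 - alpha)      # critical value for a one sided z test
chi2_star = chi2.ppf(1 - alpha, df=2)       # chi-squared critical value, 2 df
```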
Approximate $C\%$ confidence interval for $\pi$
Chi-squared test: n.a.
Wilcoxon test: n.a.
$z$ test: regular (large sample): $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$, where $z^*$ is the value under the standard normal distribution with the area $C\%$ between $-z^*$ and $z^*$.
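A sketch of the $z$ test and the large-sample confidence interval in Python, using hypothetical counts (31 successes out of $N = 120$, tested against $\pi_0 = 0.2$):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical sample: 31 smokers among N = 120 office workers, H0: pi_0 = 0.2
x, n, pi0 = 31, 120, 0.20
p_hat = x / n                                   # sample proportion of successes
z = (p_hat - pi0) / sqrt(pi0 * (1 - pi0) / n)   # z test statistic
p_two_sided = 2 * norm.sf(abs(z))               # two sided p value

# Approximate 95% confidence interval for pi (large-sample formula)
z_star = norm.ppf(0.975)
half_width = z_star * sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half_width, p_hat + half_width)
```

Note that the test statistic uses $\pi_0$ in its standard error while the confidence interval uses $p$, so the interval and the test can occasionally disagree near the boundary.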
Equivalent to
Chi-squared test: n.a.
Wilcoxon test: n.a.
$z$ test: when the test is two sided, the $z$ test is equivalent to the chi-squared goodness of fit test with two categories, with $X^2 = z^2$.
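This equivalence can be checked numerically; a sketch in Python with the same hypothetical counts (31 successes out of 120, $\pi_0 = 0.2$), comparing $z^2$ with the goodness-of-fit $X^2$:

```python
from math import sqrt
from scipy.stats import chisquare

x, n, pi0 = 31, 120, 0.20
p_hat = x / n
z = (p_hat - pi0) / sqrt(pi0 * (1 - pi0) / n)

# Chi-squared goodness of fit test with two categories (success / failure)
x2, p_gof = chisquare([x, n - x], f_exp=[n * pi0, n * (1 - pi0)])
# x2 equals z**2, and p_gof equals the two sided z-test p value
```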
Example context
Chi-squared test: is there an association between economic class and gender? Is the distribution of economic class different between men and women?
Wilcoxon test: is the median mental health score of office workers different from $m_0 = 50$?
$z$ test: is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? Use the normal approximation for the sampling distribution of the test statistic.
SPSS
Chi-squared test: Analyze > Descriptive Statistics > Crosstabs...
Wilcoxon test: specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to: Analyze > Nonparametric Tests > One Sample...
$z$ test: Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
Jamovi
Chi-squared test: Frequencies > Independent Samples - $\chi^2$ test of association
Wilcoxon test: T-Tests > One Sample T-Test
$z$ test: Frequencies > 2 Outcomes - Binomial test