This page offers a structured, side-by-side overview of the selected methods.
Independent/grouping variable
Independent/grouping variable
Independent/grouping variable
Independent/grouping variable
None
One categorical with $I$ independent groups ($I \geqslant 2$)
One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
One categorical with 2 independent groups
Dependent variable
Dependent variable
Dependent variable
Dependent variable
One categorical with $J$ independent groups ($J \geqslant 2$)
One of ordinal level
One categorical with 2 independent groups
One quantitative of interval or ratio level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$,
the probability of drawing an observation from condition $J$ is $\pi_J$
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H0: the population medians for the $I$ groups are equal
Else:
Formulation 1:
H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups
Formulation 2:
H0:
P(an observation from population $g$ exceeds an observation from population $h$) = P(an observation from population $h$ exceeds an observation from population $g$), for each pair of groups.
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
Model chi-squared test for the complete regression model:
H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
H0: $\beta_k = 0$
or in terms of odds ratio:
H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H0: $\beta_k = 0$
or in terms of odds ratio:
H0: $e^{\beta_k} = 1$
in the regression equation
$\ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$
Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $y = 1$ (or equivalently, the proportion of $y = 1$ in the population), given the scores on the independent variables.
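As a quick numerical illustration of this equation, the Python sketch below (with made-up coefficient values, purely for illustration) converts the linear predictor into the probability $\pi_{y = 1}$ by inverting the logit:

```python
import numpy as np

# Hypothetical coefficient values for a model with two predictors
b0, b1, b2 = -1.5, 0.8, 0.3

def predicted_probability(x1, x2):
    """Model-implied P(y = 1) given scores on the two independent variables."""
    log_odds = b0 + b1 * x1 + b2 * x2   # the linear predictor (logit scale)
    return 1 / (1 + np.exp(-log_odds))  # invert the logit to get pi_{y=1}

print(predicted_probability(x1=1.0, x2=2.0))  # ~0.475 for these made-up values
```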
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
H1: the population proportions are not all as specified under the null hypothesis
or equivalently
H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H1: not all of the population medians for the $I$ groups are equal
Else:
Formulation 1:
H1:
the population scores in some groups are systematically higher or lower than the population scores in other groups
Formulation 2:
H1:
for at least one pair of groups:
P(an observation from population $g$ exceeds an observation from population $h$) $\neq$ P(an observation from population $h$ exceeds an observation from population $g$)
Model chi-squared test for the complete regression model:
H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
H1: $\beta_k \neq 0$
or in terms of odds ratio:
H1: $e^{\beta_k} \neq 1$
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
H1 right sided: $\beta_k > 0$
H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H1: $\beta_k \neq 0$
or in terms of odds ratio:
H1: $e^{\beta_k} \neq 1$
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Assumptions
Assumptions
Assumptions
Assumptions
Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
Sample is a simple random sample from the population. That is, observations are independent of one another
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Within each population, the scores on the dependent variable are normally distributed
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
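As an illustration, here is a minimal sketch of this computation in Python using `scipy.stats.chisquare`, on hypothetical counts and hypothesized proportions:

```python
from scipy import stats

# Hypothetical data: N = 100 observations over J = 3 categories
observed = [18, 55, 27]              # observed cell counts
pi_H0 = [0.2, 0.6, 0.2]              # population proportions under H0
expected = [100 * p for p in pi_H0]  # expected cell counts N * pi_j

X2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(X2, p)  # compared against the chi-squared distribution with J - 1 df
```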
$H = \dfrac{12}{N (N + 1)} \times \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$
Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N (N + 1)} \times \sum \frac{R^2_i}{n_i}$ and then subtract $3(N + 1)$.
Note: if ties are present in the data, the formula for $H$ is more complicated.
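A minimal sketch of the Kruskal-Wallis test in Python, using `scipy.stats.kruskal` on hypothetical data (scipy applies the tie correction to $H$ automatically):

```python
from scipy import stats

# Hypothetical scores on the dependent variable for I = 3 independent groups
group1 = [12, 15, 14, 10, 11]
group2 = [22, 25, 19, 24, 23]
group3 = [16, 18, 17, 20, 13]

H, p = stats.kruskal(group1, group2, group3)  # tie correction applied automatically
print(H, p)  # p is based on the chi-squared approximation with I - 1 = 2 df
```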
Model chi-squared test for the complete regression model:
$X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
$D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.
Likelihood ratio chi-squared test for individual $\beta_k$:
$X^2 = D_{K-1} - D_K$
$D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
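The sketch below shows how these quantities can be obtained in Python with `statsmodels`, on hypothetical simulated data. Since the deviance is $-2$ times the log-likelihood, the model chi-squared statistic follows from the fitted and null log-likelihoods, and the Wald $z$ statistics are the coefficients divided by their standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical data: 200 cases, two quantitative predictors, binary outcome
X = rng.normal(size=(200, 2))
p_true = 1 / (1 + np.exp(-(0.5 + X @ [0.8, -0.4])))
y = rng.binomial(1, p_true)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Model chi-squared test: X^2 = D_null - D_K = 2 * (LL_K - LL_null)
X2_model = 2 * (fit.llf - fit.llnull)
print(X2_model, fit.llr_pvalue)  # fit.llr holds the same statistic

# Wald statistics per coefficient: z = b_k / SE(b_k); z^2 is the chi-squared form
print(fit.params / fit.bse)      # identical to fit.tvalues
```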
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2,
$s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2,
$n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
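A minimal sketch of this test in Python, using `scipy.stats.ttest_ind` with `equal_var=False` (which requests the Welch version, matching the unpooled standard error above) on hypothetical data:

```python
from scipy import stats

# Hypothetical scores on the dependent variable for the two groups
group1 = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
group2 = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6]

# equal_var=False requests the Welch version (no equal-variances assumption)
t, p = stats.ttest_ind(group1, group2, equal_var=False)
print(t, p)  # two sided p value; pass alternative='greater' or 'less' for one sided tests
```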
Sampling distribution
Sampling distribution
Sampling distribution
Sampling distribution
Approximately the chi-squared distribution with $J - 1$ degrees of freedom
For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom.
For small samples, the exact distribution of $H$ should be used.
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
chi-squared distribution with 1 degree of freedom
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$
The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
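A small Python sketch of the first definition of $k$, with hypothetical sample variances and sample sizes:

```python
def welch_df(s2_1, n1, s2_2, n2):
    """First definition of k, from the sample variances and sample sizes."""
    num = (s2_1 / n1 + s2_2 / n2) ** 2
    den = (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
    return num / den

# Hypothetical values; the hand-calculation shortcut would instead use
# min(n1, n2) - 1 = 19 degrees of freedom here
print(welch_df(s2_1=4.0, n1=20, s2_2=9.0, n2=25))
```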
Significant?
Significant?
Significant?
Significant?
Find the $p$ value corresponding to the observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
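As an illustration, the Python sketch below computes the two sided and one sided $p$ values and the two sided critical value $t^*$ for a hypothetical observed $t$ and df, showing that the $p$ value check and the critical value check lead to the same decision:

```python
from scipy import stats

t_obs, df, alpha = 2.30, 20, 0.05        # hypothetical observed t, df, and alpha

p_two = 2 * stats.t.sf(abs(t_obs), df)   # two sided p value
p_right = stats.t.sf(t_obs, df)          # right sided p value
p_left = stats.t.cdf(t_obs, df)          # left sided p value

t_star = stats.t.ppf(1 - alpha / 2, df)  # two sided critical value (2.086 for df = 20)
print(p_two <= alpha, abs(t_obs) >= t_star)  # both checks give the same decision
```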
n.a.
n.a.
Wald-type approximate $C\%$ confidence interval for $\beta_k$
Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$
-
-
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
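A minimal sketch of this interval in Python, with a hypothetical estimate $b_k$ and standard error; exponentiating the endpoints gives a confidence interval for the odds ratio $e^{\beta_k}$:

```python
import numpy as np
from scipy import stats

b_k, se_bk = 0.62, 0.21         # hypothetical estimate and standard error
z_star = stats.norm.ppf(0.975)  # 1.96 for a 95% confidence interval

lo, hi = b_k - z_star * se_bk, b_k + z_star * se_bk
print(lo, hi)                   # interval for beta_k
print(np.exp(lo), np.exp(hi))   # exponentiated endpoints: interval for the odds ratio
```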
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
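A minimal sketch of this interval in Python on hypothetical data, computing the degrees of freedom $k$ from the first definition given above:

```python
import numpy as np
from scipy import stats

group1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])  # hypothetical data
group2 = np.array([4.2, 4.5, 3.9, 4.8, 4.1, 4.6])

m1, m2 = group1.mean(), group2.mean()
v1, v2 = group1.var(ddof=1), group2.var(ddof=1)    # sample variances s^2
n1, n2 = len(group1), len(group2)

se = np.sqrt(v1 / n1 + v2 / n2)
k = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
t_star = stats.t.ppf(0.975, k)                     # 95% critical value under t_k

print((m1 - m2) - t_star * se, (m1 - m2) + t_star * se)
```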
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure of goodness of fit is generally agreed upon.
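Since the deviance is $-2$ times the log-likelihood, $R^2_L$ reduces to $1 - LL_K / LL_{null}$. A sketch in Python with `statsmodels` on hypothetical simulated data (statsmodels reports this same quantity as its pseudo $R^2$):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))                              # hypothetical predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.0, -0.5]))))  # hypothetical outcome

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Deviance is -2 times the log-likelihood, so (D_null - D_K) / D_null = 1 - llf / llnull
R2_L = 1 - fit.llf / fit.llnull
print(R2_L, fit.prsquared)  # identical: statsmodels reports this as "Pseudo R-squ."
```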
-
n.a.
n.a.
n.a.
Visual representation
-
-
-
Example context
Example context
Example context
Example context
Is the proportion of people with a low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?
Do people from different religions tend to score differently on socioeconomic status?
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
Is the average mental health score different between men and women?
SPSS
SPSS
SPSS
SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
Put your categorical variable in the box below Test Variable List
Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Range... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the smallest value you have used to indicate your groups in the box next to Minimum, and the largest value you have used to indicate your groups in the box next to Maximum
Continue and click OK
Analyze > Regression > Binary Logistic...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Analyze > Compare Means > Independent-Samples T Test...
Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
Continue and click OK
Jamovi
Jamovi
Jamovi
Jamovi
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
Put your categorical variable in the box below Variable
Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
ANOVA > One Way ANOVA - Kruskal-Wallis
Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Regression > 2 Outcomes - Binomial
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
T-Tests > Independent Samples T-Test
Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Under Tests, select Welch's
Under Hypothesis, select your alternative hypothesis