This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Two way ANOVA
Chi-squared test for the relationship between two categorical variables
One sample $z$ test for the mean
Friedman test
Two sample $t$ test
One way ANOVA
Independent/grouping variable(s)
Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
One categorical with $I$ independent groups ($I \geqslant 2$)
None
One within subject factor ($\geqslant 2$ related groups)
One categorical with 2 independent groups
One categorical with $I$ independent groups ($I \geqslant 2$)
Dependent variable
Dependent/row variable
Dependent variable
Dependent variable
Dependent variable
Dependent variable
One quantitative of interval or ratio level
One categorical with $J$ independent groups ($J \geqslant 2$)
One quantitative of interval or ratio level
One of ordinal level
One quantitative of interval or ratio level
One quantitative of interval or ratio level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
ANOVA $F$ tests:
$H_0$ for main and interaction effects together (model): no main effects and no interaction effect
$H_0$ for independent variable A: no main effect for A
$H_0$ for independent variable B: no main effect for B
$H_0$ for the interaction term: no interaction effect between A and B
As in one way ANOVA, we can also perform $t$ tests for specific contrasts and for multiple comparisons. This is more advanced stuff.
$H_0$: there is no association between the row and column variable
More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
$H_0$: the distribution of the dependent variable is the same in each of the $I$ populations
If there is one random sample of size $N$ from the total population:
$H_0$: the row and column variables are independent
$H_0$: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
$H_0$: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
$H_0$: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
ANOVA $F$ test:
$H_0$: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
$H_0$: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
$H_0$: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
ANOVA $F$ tests:
$H_1$ for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
$H_1$ for independent variable A: there is a main effect for A
$H_1$ for independent variable B: there is a main effect for B
$H_1$ for the interaction term: there is an interaction effect between A and B
$H_1$: there is an association between the row and column variable
More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
$H_1$: the distribution of the dependent variable is not the same in all of the $I$ populations
If there is one random sample of size $N$ from the total population:
$H_1$: the row and column variables are dependent
$H_1$ two sided: $\mu \neq \mu_0$
$H_1$ right sided: $\mu > \mu_0$
$H_1$ left sided: $\mu < \mu_0$
$H_1$: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups
$H_1$ two sided: $\mu_1 \neq \mu_2$
$H_1$ right sided: $\mu_1 > \mu_2$
$H_1$ left sided: $\mu_1 < \mu_2$
ANOVA $F$ test:
$H_1$: not all population means are equal
$t$ Test for contrast:
$H_1$ two sided: $\Psi \neq 0$
$H_1$ right sided: $\Psi > 0$
$H_1$ left sided: $\Psi < 0$
$t$ Test multiple comparisons:
$H_1$ (usually two sided): $\mu_g \neq \mu_h$
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis. Rule of thumb:
2 $\times$ 2 table: all four expected cell counts are 5 or more
Larger than 2 $\times$ 2 tables: average of the expected cell counts is 5 or more, smallest expected cell count is 1 or more
There are $I$ independent simple random samples from each of $I$ populations defined by the independent variable, or there is one simple random sample from the total population
Scores are normally distributed in the population
Population standard deviation $\sigma$ is known
Sample is a simple random sample from the population. That is, observations are independent of one another
Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Within each population, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Within each population, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
For main and interaction effects together (model):
$F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
$F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
$F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
$F = \dfrac{\mbox{mean square int}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
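As an illustration, these $F$ tests could be run in Python with statsmodels; a minimal sketch, assuming a hypothetical data frame with columns `score`, `A`, and `B`:

```python
# Two way ANOVA sketch using statsmodels (data are hypothetical).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [5, 7, 6, 9, 8, 10, 4, 6, 5, 8, 9, 11],
    "A": ["a1", "a1", "a1", "a2", "a2", "a2"] * 2,
    "B": ["b1"] * 6 + ["b2"] * 6,
})

# Fit the model with both main effects and the interaction term,
# then request the F tests for A, B, and A x B.
model = smf.ols("score ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```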
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells.
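A minimal sketch of this computation in Python with scipy, using a hypothetical $2 \times 2$ table of observed counts:

```python
# Chi-squared test of association on a hypothetical contingency table.
from scipy.stats import chi2_contingency

observed = [[30, 20],  # observed cell counts, rows x columns
            [25, 45]]

# chi2_contingency derives the expected counts from the margins
# (row total x column total / total sample size); correction=False
# matches the plain X^2 formula without Yates' continuity correction.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(chi2, p, dof, expected)
```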
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.
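Because $\sigma$ is assumed known, the statistic is easy to compute directly; a sketch in Python (all numbers are made up):

```python
# One sample z test with known population standard deviation (hypothetical data).
import numpy as np
from scipy.stats import norm

y = np.array([51.2, 49.8, 52.4, 50.9, 48.7, 53.1])  # sample scores
mu_0, sigma = 50.0, 3.0  # H0 mean and known population sd

z = (y.mean() - mu_0) / (sigma / np.sqrt(len(y)))
p_two_sided = 2 * norm.sf(abs(z))  # two sided p value
print(z, p_two_sided)
```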
$Q = \dfrac{12}{N \times k(k + 1)} \times \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects; so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.
Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.
Note: if ties are present in the data, the formula for $Q$ is more complicated.
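For reference, scipy implements this statistic directly; a sketch with hypothetical scores for $k = 3$ measurement points on $N = 6$ subjects:

```python
# Friedman test on k = 3 related measurements (hypothetical data).
from scipy.stats import friedmanchisquare

# One list per measurement point; positions line up per subject (block).
t1 = [20, 22, 19, 24, 25, 21]  # measurement point 1
t2 = [18, 21, 17, 22, 23, 20]  # measurement point 2
t3 = [15, 19, 16, 20, 21, 18]  # measurement point 3

q, p = friedmanchisquare(t1, t2, t3)
print(q, p)  # p value based on the chi-squared approximation
```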
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2,
$s_p$ is the pooled standard deviation,
$n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
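A sketch of this test in Python (hypothetical data); `equal_var=True` requests exactly the pooled-variance statistic described here:

```python
# Two sample t test assuming equal population standard deviations (hypothetical data).
from scipy.stats import ttest_ind

group1 = [24, 27, 23, 29, 26, 25]
group2 = [21, 22, 20, 24, 23, 22]

# equal_var=True -> pooled standard deviation, df = n1 + n2 - 2.
t, p = ttest_ind(group1, group2, equal_var=True)
print(t, p)  # two sided p value by default
```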
ANOVA $F$ test:
$\begin{aligned}[t]
F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\
&= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\
&= \dfrac{\mbox{mean square between}}{\mbox{mean square error}}
\end{aligned}
$
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model, and mean square error is also known as mean square residual or mean square within.
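A minimal sketch of this $F$ test in Python with scipy (the three groups are hypothetical):

```python
# One way ANOVA F test across I = 3 independent groups (hypothetical data).
from scipy.stats import f_oneway

low      = [10, 12, 11, 13, 12]
moderate = [14, 15, 13, 16, 15]
high     = [18, 17, 19, 20, 18]

f, p = f_oneway(low, moderate, high)
print(f, p)
```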
$t$ Test for contrast:
$t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
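scipy has no ready-made contrast test, but the statistic is easy to assemble from the definitions above; a sketch with hypothetical groups and coefficients (note the coefficients sum to 0):

```python
# t test for a contrast in one way ANOVA (hypothetical data).
import numpy as np
from scipy.stats import t as t_dist

groups = [np.array([10, 12, 11, 13, 12]),
          np.array([14, 15, 13, 16, 15]),
          np.array([18, 17, 19, 20, 18])]
a = [1, 0, -1]  # contrast coefficients, sum to 0

N = sum(len(g) for g in groups)
I = len(groups)

# Pooled sd over all I groups: sqrt(sum of squares error / (N - I)).
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p = np.sqrt(sse / (N - I))

c = sum(a_i * g.mean() for a_i, g in zip(a, groups))  # sample contrast
se = s_p * np.sqrt(sum(a_i ** 2 / len(g) for a_i, g in zip(a, groups)))
t_stat = c / se
p_two_sided = 2 * t_dist.sf(abs(t_stat), df=N - I)
print(t_stat, p_two_sided)
```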
$t$ Test multiple comparisons:
$t = \dfrac{\bar{y}_g  \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$,
$s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA,
$n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
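A sketch of all pairwise comparisons with a Bonferroni-adjusted $p$ value (hypothetical data; other procedures, such as Tukey's HSD, adjust differently):

```python
# Pairwise t tests with a Bonferroni correction (hypothetical data).
from itertools import combinations
import numpy as np
from scipy.stats import t as t_dist

groups = {"low":      np.array([10, 12, 11, 13, 12]),
          "moderate": np.array([14, 15, 13, 16, 15]),
          "high":     np.array([18, 17, 19, 20, 18])}

N = sum(len(g) for g in groups.values())
I = len(groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
s_p = np.sqrt(sse / (N - I))   # pooled sd based on all I groups
n_pairs = I * (I - 1) // 2     # number of comparisons

for (name_g, g), (name_h, h) in combinations(groups.items(), 2):
    t_stat = (g.mean() - h.mean()) / (s_p * np.sqrt(1 / len(g) + 1 / len(h)))
    p = 2 * t_dist.sf(abs(t_stat), df=N - I)
    print(name_g, name_h, t_stat, min(p * n_pairs, 1.0))  # adjusted p
```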
Pooled standard deviation
n.a.
n.a.
n.a.
Pooled standard deviation
Pooled standard deviation
$
\begin{aligned}
s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\
&= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\
&= \sqrt{\mbox{mean square error}}
\end{aligned}
$
For the two sample $t$ test: $s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$, where $s_1$ and $s_2$ are the sample standard deviations in group 1 and group 2.
For one way ANOVA: $s_p = \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}} = \sqrt{\mbox{mean square error}}$, based on all the $I$ groups.
Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$F$ test:
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
$t$ Test for contrast two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test multiple comparisons two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
n.a.
n.a.
$C\%$ confidence interval for $\mu$
n.a.
$C\%$ confidence interval for $\mu_1 - \mu_2$
$C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$


$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
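A sketch of this interval in Python (made-up numbers); `norm.ppf` supplies the critical value $z^*$:

```python
# C% confidence interval for mu with known sigma (hypothetical numbers).
import numpy as np
from scipy.stats import norm

y_bar, sigma, N, C = 51.1, 3.0, 36, 95

z_star = norm.ppf(0.5 + C / 200)  # 1.96 when C = 95
margin = z_star * sigma / np.sqrt(N)
print(y_bar - margin, y_bar + margin)
```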
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
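A sketch of this interval in Python, with hypothetical data; `t.ppf` supplies $t^*$ for df $= n_1 + n_2 - 2$:

```python
# C% confidence interval for mu_1 - mu_2 with pooled sd (hypothetical data).
import numpy as np
from scipy.stats import t as t_dist

group1 = np.array([24, 27, 23, 29, 26, 25])
group2 = np.array([21, 22, 20, 24, 23, 22])
C = 95

n1, n2 = len(group1), len(group2)
df = n1 + n2 - 2
s_p = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
               (n2 - 1) * group2.var(ddof=1)) / df)
t_star = t_dist.ppf(0.5 + C / 200, df)
margin = t_star * s_p * np.sqrt(1 / n1 + 1 / n2)
diff = group1.mean() - group2.mean()
print(diff - margin, diff + margin)
```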
$c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g  \mu_h$ (multiple comparisons):
$(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, the degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* = $ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
$\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean in group $i$, $n_i$ is the sample size of group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $N$ is the total sample size, based on all the $I$ groups.
Effect size
n.a.
Effect size
n.a.
Effect size
Effect size
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
$$
\begin{align}
R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}
\end{align}
$$
$R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
$$
\begin{align}
\eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\
\\
\eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\
\\
\eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}}
\end{align}
$$
$\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$
\begin{align}
\omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}
\end{align}
$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Note that the formulas above only hold for balanced designs (equal sample sizes).
Proportion variance explained $\eta^2_{partial}$:
$$
\begin{align}
\eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}}
\end{align}
$$
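These proportions can be read off a two way ANOVA table; a sketch with statsmodels, using the same kind of hypothetical balanced data as above (column names follow the `anova_lm` output):

```python
# Eta squared, omega squared, and partial eta squared from an ANOVA table
# (hypothetical balanced data).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [5, 7, 6, 9, 8, 10, 4, 6, 5, 8, 9, 11],
    "A": ["a1", "a1", "a1", "a2", "a2", "a2"] * 2,
    "B": ["b1"] * 6 + ["b2"] * 6,
})
aov = sm.stats.anova_lm(smf.ols("score ~ C(A) * C(B)", data=df).fit(), typ=2)

ss_error = aov.loc["Residual", "sum_sq"]
ms_error = ss_error / aov.loc["Residual", "df"]
ss_total = aov["sum_sq"].sum()  # equals the total sum of squares when balanced

for effect in ["C(A)", "C(B)", "C(A):C(B)"]:
    ss, df_eff = aov.loc[effect, "sum_sq"], aov.loc[effect, "df"]
    print(effect,
          ss / ss_total,                                     # eta squared
          (ss - df_eff * ms_error) / (ss_total + ms_error),  # omega squared
          ss / (ss + ss_error))                              # partial eta squared
```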

Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{\sigma}$$
Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0$.

Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$:
$$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$
Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are removed from each other.
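A sketch of this computation in Python (hypothetical data), with $s_p$ computed as defined in the pooled standard deviation row:

```python
# Cohen's d for two independent groups (hypothetical data).
import numpy as np

group1 = np.array([24, 27, 23, 29, 26, 25])
group2 = np.array([21, 22, 20, 24, 23, 22])

n1, n2 = len(group1), len(group2)
s_p = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
               (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
d = (group1.mean() - group2.mean()) / s_p
print(d)
```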
Proportion variance explained $\eta^2$ and $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variable:
$$
\begin{align}
\eta^2 = R^2
&= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}}
\end{align}
$$
Note that $\eta^2 = R^2$ only in one way ANOVA. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2.$
Cohen's $d$:
Standardized difference between the mean in group $g$ and in group $h$:
$$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$
Cohen's $d$ indicates how many standard deviations $s_p$ two sample means are removed from each other.
Equivalent to
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ code variables.



One way ANOVA with an independent variable with 2 levels ($I$ = 2):
two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
two sample $t$ test is equivalent to $t$ test multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
two sided two sample $t$ test is equivalent to $F$ test regression model
two sample $t$ test is equivalent to $t$ test for regression coefficient $\beta_1$
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
$F$ test ANOVA is equivalent to $F$ test regression model
$t$ test for contrast $i$ is equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)
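A sketch demonstrating the equivalence numerically (hypothetical data): the $F$ test of the regression model reproduces the one way ANOVA $F$ test.

```python
# One way ANOVA as OLS regression with I - 1 code variables (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway

df = pd.DataFrame({
    "score": [10, 12, 11, 13, 12, 14, 15, 13, 16, 15, 18, 17, 19, 20, 18],
    "group": ["low"] * 5 + ["moderate"] * 5 + ["high"] * 5,
})

# C(group) expands the factor into I - 1 dummy code variables.
fit = smf.ols("score ~ C(group)", data=df).fit()
print(fit.fvalue, fit.f_pvalue)  # regression model F test

print(f_oneway(df.score[df.group == "low"],
               df.score[df.group == "moderate"],
               df.score[df.group == "high"]))  # same F and p value
```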
Example context
Example context
Example context
Example context
Example context
Example context
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
Is there an association between economic class and gender? Is the distribution of economic class different between men and women?
Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.
Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS
SPSS
n.a.
SPSS
SPSS
SPSS
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Analyze > Descriptive Statistics > Crosstabs...
Put one of your two categorical variables in the box below Row(s), and the other categorical variable in the box below Column(s)
Click the Statistics... button, and click on the square in front of Chi-square
Continue and click OK

Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
Under Test Type, select the Friedman test
Analyze > Compare Means > IndependentSamples T Test...
Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
Continue and click OK
Analyze > Compare Means > OneWay ANOVA...
Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi
Jamovi
n.a.
Jamovi
Jamovi
Jamovi
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
Frequencies > Independent Samples - $\chi^2$ test of association
Put one of your two categorical variables in the box below Rows, and the other categorical variable in the box below Columns

ANOVA > Repeated Measures ANOVA - Friedman
Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
T-Tests > Independent Samples T-Test
Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Under Tests, select Student's (selected by default)
Under Hypothesis, select your alternative hypothesis
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors