# Two way ANOVA - overview

This page offers structured overviews of three selected methods, presented side by side for comparison: the two way ANOVA, the Friedman test, and the one sample $z$ test for the mean.

Two way ANOVA | Friedman test | One sample $z$ test for the mean |
---|---|---|

Independent/grouping variables | Independent/grouping variable | Independent variable |

Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geq 2$, $J \geq 2$) | One within subject factor ($\geq 2$ related groups) | None |

Dependent variable | Dependent variable | Dependent variable |

One quantitative of interval or ratio level | One of ordinal level | One quantitative of interval or ratio level |

Null hypothesis | Null hypothesis | Null hypothesis |

ANOVA $F$ tests:
- $H_0$ for main and interaction effects together (model): no main effects and no interaction effect
- $H_0$ for independent variable A: no main effect for A
- $H_0$ for independent variable B: no main effect for B
- $H_0$ for the interaction term: no interaction effect between A and B
| $H_0$: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups.
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
| $H_0$: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. |

Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |

ANOVA $F$ tests:
- $H_1$ for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
- $H_1$ for independent variable A: there is a main effect for A
- $H_1$ for independent variable B: there is a main effect for B
- $H_1$ for the interaction term: there is an interaction effect between A and B
| $H_1$: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups
| $H_1$ two sided: $\mu \neq \mu_0$
$H_1$ right sided: $\mu > \mu_0$
$H_1$ left sided: $\mu < \mu_0$ |

Assumptions | Assumptions | Assumptions |

- Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
- The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
- For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
- Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
| - Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
| - Scores are normally distributed in the population
- Population standard deviation $\sigma$ is known
- Sample is a simple random sample from the population. That is, observations are independent of one another |
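
The normality and equal-variance assumptions for the two way ANOVA can be checked informally in software before running the test. Below is a minimal sketch in Python using `scipy`; the data frame `df` and its column names `score`, `A`, and `B` are a made-up example, not data from this page.

```python
# Rough checks of the two way ANOVA assumptions on a made-up data set.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    'score': [5.1, 4.8, 5.4, 6.0, 5.5, 6.2, 7.2, 6.9, 7.0, 7.8, 8.1, 8.4],
    'A': ['a1'] * 6 + ['a2'] * 6,
    'B': (['b1'] * 3 + ['b2'] * 3) * 2,
})

# Shapiro-Wilk test of normality within each of the I x J cells
for (a, b), cell in df.groupby(['A', 'B']):
    print(a, b, stats.shapiro(cell['score']))

# Levene's test of equal variances across the I x J cells
cells = [cell['score'] for _, cell in df.groupby(['A', 'B'])]
print(stats.levene(*cells))
```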

Test statistic | Test statistic | Test statistic |

ANOVA $F$ tests:
- For main and interaction effects together (model): $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
- For independent variable A: $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
- For independent variable B: $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
- For the interaction term: $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
| $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated. | $z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size. The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$. |
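
The Friedman $Q$ and one sample $z$ formulas above are simple enough to compute by hand. A sketch in Python with made-up scores, checking the hand-computed $Q$ against `scipy.stats.friedmanchisquare` (the two agree here because the example contains no ties):

```python
import numpy as np
from scipy import stats

# Friedman test: N blocks (rows, usually subjects) by k related groups
# (columns, usually measurement points). Scores are made up and tie-free.
scores = np.array([[7, 5, 3],
                   [6, 4, 5],
                   [8, 6, 2],
                   [5, 2, 4],
                   [7, 3, 4]])
N, k = scores.shape
ranks = stats.rankdata(scores, axis=1)      # rank scores within each block
R = ranks.sum(axis=0)                       # sum of ranks per group
Q = 12 / (N * k * (k + 1)) * (R ** 2).sum() - 3 * N * (k + 1)
print(Q)                                    # 7.6
print(stats.friedmanchisquare(*scores.T))   # same Q, since there are no ties

# One sample z test: z = (ybar - mu0) / (sigma / sqrt(N))
y = np.array([51.2, 49.8, 53.0, 50.5, 52.1])
mu0, sigma = 50, 3                          # sigma must be known in advance
z = (y.mean() - mu0) / (sigma / np.sqrt(len(y)))
print(z)                                    # about 0.98
```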

Pooled standard deviation | n.a. | n.a. |

$ \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $ | - | - |
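
A sketch of the pooled standard deviation computed from the cell residuals, reusing the made-up cells from the assumption-check sketch above:

```python
import numpy as np

# Pooled standard deviation: the square root of the mean square error.
# Made-up example cells; I = J = 2, with n = 3 scores per cell.
cells = [np.array([5.1, 4.8, 5.4]), np.array([6.0, 5.5, 6.2]),
         np.array([7.2, 6.9, 7.0]), np.array([7.8, 8.1, 8.4])]

ss_error = sum(((g - g.mean()) ** 2).sum() for g in cells)
N = sum(len(g) for g in cells)
df_error = N - len(cells)              # N - (I x J)
s_p = np.sqrt(ss_error / df_error)     # sqrt(mean square error)
print(s_p)
```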

Sampling distribution of $F$ if $H_0$ were true | Sampling distribution of $Q$ if $H_0$ were true | Sampling distribution of $z$ if $H_0$ were true |

ANOVA $F$ tests:
- For main and interaction effects together (model): $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom, where $N$ is the total sample size
- For independent variable A: $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
- For independent variable B: $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
- For the interaction term: $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
| If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.
For small samples, the exact distribution of $Q$ should be used. | Standard normal distribution |
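
Given an observed statistic, the $p$ value follows from the corresponding reference distribution. A sketch with `scipy.stats`; all observed values and degrees of freedom below are placeholders:

```python
from scipy import stats

# Two way ANOVA: p value for the main effect of A, with placeholder
# values I = 2, J = 3, total sample size N = 60, observed F = 4.2.
I, J, N = 2, 3, 60
p_A = stats.f.sf(4.2, I - 1, N - I * J)

# Friedman test: large sample chi-squared approximation with k - 1 df,
# for a placeholder Q = 7.6 with k = 3 related groups.
p_Q = stats.chi2.sf(7.6, 3 - 1)

# One sample z test: standard normal distribution (right sided here).
p_z = stats.norm.sf(1.8)

print(p_A, p_Q, p_z)
```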

Significant? | Significant? | Significant? |

- Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
- Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
| If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
- Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
- Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
| Two sided:
- Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
- Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
- Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
- Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
- Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
- Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$ |
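
The three decision rules for the $z$ test can be written out explicitly. A sketch with a made-up observed $z$ and $\alpha = 0.05$; each line prints the critical-value decision next to the equivalent $p$ value decision:

```python
from scipy import stats

alpha, z_obs = 0.05, 1.8                    # made-up observed z

# Two sided: reject if z is at least as extreme as z* (here z* = 1.96)
z_star = stats.norm.ppf(1 - alpha / 2)
print(abs(z_obs) >= z_star, 2 * stats.norm.sf(abs(z_obs)) <= alpha)

# Right sided: reject if z is equal to or larger than z* (about 1.64)
z_star = stats.norm.ppf(1 - alpha)
print(z_obs >= z_star, stats.norm.sf(z_obs) <= alpha)

# Left sided: reject if z is equal to or smaller than z* (about -1.64)
z_star = stats.norm.ppf(alpha)
print(z_obs <= z_star, stats.norm.cdf(z_obs) <= alpha)
```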

n.a. | n.a. | $C\%$ confidence interval for $\mu$ |

- | - | $\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). The confidence interval for $\mu$ can also be used as a significance test. |
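
A sketch of this interval for made-up sample values, with $\sigma$ assumed known:

```python
import numpy as np
from scipy import stats

# 95% confidence interval for mu with sigma known; sample values made up.
y = np.array([51.2, 49.8, 53.0, 50.5, 52.1])
sigma, C = 3, 95
z_star = stats.norm.ppf(1 - (1 - C / 100) / 2)   # about 1.96 for C = 95
half_width = z_star * sigma / np.sqrt(len(y))
print(y.mean() - half_width, y.mean() + half_width)
```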

Effect size | n.a. | Effect size |

*Proportion variance explained $R^2$:* Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together: $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
*Proportion variance explained $\eta^2$:* Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect: $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
*Proportion variance explained $\omega^2$:* Corrects for the positive bias in $\eta^2$ and is equal to: $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).
*Proportion variance explained $\eta^2_{partial}$:*$$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
| - | Cohen's $d$: Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0$. |
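
All of these effect size measures are simple ratios of sums of squares. A sketch with placeholder values, not output from a real analysis:

```python
# Effect sizes from the sums of squares of a two way ANOVA.
# All values below are placeholders.
ss_A, ss_B, ss_int, ss_error = 30.0, 12.0, 6.0, 52.0
df_A, df_B, df_int, df_error = 1, 2, 2, 54
ss_total = ss_A + ss_B + ss_int + ss_error
ms_error = ss_error / df_error

ss_model = ss_A + ss_B + ss_int
R2 = ss_model / ss_total                       # model as a whole

eta2_A = ss_A / ss_total                       # positively biased
omega2_A = (ss_A - df_A * ms_error) / (ss_total + ms_error)
eta2_partial_A = ss_A / (ss_A + ss_error)

d = (52.4 - 50) / 3    # Cohen's d for the z test, with a made-up mean
print(R2, eta2_A, omega2_A, eta2_partial_A, d)
```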

ANOVA table | n.a. | n.a. |

The two way ANOVA table lists, for the model, A, B, the interaction, and error, the corresponding sums of squares, degrees of freedom, mean squares, and $F$ values given above. | - | - |

Equivalent to | n.a. | n.a. |

OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ code variables. | - | - |
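
This equivalence is easy to see in software: regressing the dependent variable on both dummy coded factors and their interaction reproduces the ANOVA $F$ tests. A sketch with `statsmodels`, reusing the made-up data frame from the assumption-check sketch:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    'score': [5.1, 4.8, 5.4, 6.0, 5.5, 6.2, 7.2, 6.9, 7.0, 7.8, 8.1, 8.4],
    'A': ['a1'] * 6 + ['a2'] * 6,
    'B': (['b1'] * 3 + ['b2'] * 3) * 2,
})

# C(A) * C(B) expands into (I - 1) + (J - 1) + (I - 1)(J - 1) code
# variables: main effects for A and B plus their interaction.
fit = smf.ols('score ~ C(A) * C(B)', data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # F tests for A, B, and A x B
print(fit.rsquared)                    # proportion variance explained R^2
```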

Example context | Example context | Example context |

Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender? | Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)? | Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$. |

SPSS | SPSS | n.a. |

Analyze > General Linear Model > Univariate...
- Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
| Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
- Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
- Under Test Type, select the Friedman test
| - |

Jamovi | Jamovi | n.a. |

ANOVA > ANOVA
- Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
| ANOVA > Repeated Measures ANOVA - Friedman
- Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
| - |
