Two way ANOVA - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison (max. 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

Two way ANOVA
Chi-squared test for the relationship between two categorical variables
One sample $z$ test for the mean
Friedman test
Mann-Whitney-Wilcoxon test
Logistic regression
Independent/grouping variable(s)
  • Two way ANOVA: two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
  • Chi-squared test (independent/column variable): one categorical with $I$ independent groups ($I \geqslant 2$)
  • One sample $z$ test: none
  • Friedman test: one within subject factor ($\geq 2$ related groups)
  • Mann-Whitney-Wilcoxon test: one categorical with 2 independent groups
  • Logistic regression: one or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Dependent variable
  • Two way ANOVA: one quantitative of interval or ratio level
  • Chi-squared test (dependent/row variable): one categorical with $J$ independent groups ($J \geqslant 2$)
  • One sample $z$ test: one quantitative of interval or ratio level
  • Friedman test: one of ordinal level
  • Mann-Whitney-Wilcoxon test: one of ordinal level
  • Logistic regression: one categorical with 2 independent groups
Null hypothesis
Two way ANOVA:
ANOVA $F$ tests:
  • H0 for main and interaction effects together (model): no main effects and interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
Chi-squared test:
H0: there is no association between the row and column variable

More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
  • H0: the distribution of the dependent variable is the same in each of the $I$ populations
If there is one random sample of size $N$ from the total population:
  • H0: the row and column variables are independent
One sample $z$ test:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
Friedman test:
H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups

Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
  • H0: the population median for group 1 is equal to the population median for group 2
Else:
Formulation 1:
  • H0: the population scores in group 1 are not systematically higher or lower than the population scores in group 2
Formulation 2:
  • H0: P(an observation from population 1 exceeds an observation from population 2) = P(an observation from population 2 exceeds observation from population 1)
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
Logistic regression:
Model chi-squared test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $ y = 1$ (or equivalently, the proportion of $ y = 1$ in the population) given the scores on the independent variables.
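
As a numeric illustration of this equation, here is a minimal Python sketch; the weights and scores are made up for illustration.

```python
# Minimal sketch of the logistic regression equation, with made-up
# weights (b0, b1, b2) and made-up scores (x1, x2); K = 2 predictors.
import math

b0, b1, b2 = -1.0, 0.5, 0.8
x1, x2 = 2.0, 1.0

log_odds = b0 + b1 * x1 + b2 * x2      # ln(pi / (1 - pi))
pi_y1 = 1 / (1 + math.exp(-log_odds))  # P(y = 1) given these scores
print(log_odds, pi_y1)                 # 0.8, ~0.69
```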
Alternative hypothesis
Two way ANOVA:
ANOVA $F$ tests:
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B
Chi-squared test:
H1: there is an association between the row and column variable

More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
  • H1: the distribution of the dependent variable is not the same in all of the $I$ populations
If there is one random sample of size $N$ from the total population:
  • H1: the row and column variables are dependent
One sample $z$ test:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
Friedman test:
H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups

Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
  • H1 two sided: the population median for group 1 is not equal to the population median for group 2
  • H1 right sided: the population median for group 1 is larger than the population median for group 2
  • H1 left sided: the population median for group 1 is smaller than the population median for group 2
Else:
Formulation 1:
  • H1 two sided: the population scores in group 1 are systematically higher or lower than the population scores in group 2
  • H1 right sided: the population scores in group 1 are systematically higher than the population scores in group 2
  • H1 left sided: the population scores in group 1 are systematically lower than the population scores in group 2
Formulation 2:
  • H1 two sided: P(an observation from population 1 exceeds an observation from population 2) $\neq$ P(an observation from population 2 exceeds an observation from population 1)
  • H1 right sided: P(an observation from population 1 exceeds an observation from population 2) > P(an observation from population 2 exceeds an observation from population 1)
  • H1 left sided: P(an observation from population 1 exceeds an observation from population 2) < P(an observation from population 2 exceeds an observation from population 1)
Logistic regression:
Model chi-squared test for the complete regression model:
  • H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
    If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
Assumptions
Two way ANOVA:
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Chi-squared test:
  • Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis. Rule of thumb:
    • 2 $\times$ 2 table: all four expected cell counts are 5 or more
    • Larger than 2 $\times$ 2 tables: average of the expected cell counts is 5 or more, smallest expected cell count is 1 or more
  • There are $I$ independent simple random samples from each of $I$ populations defined by the independent variable, or there is one simple random sample from the total population
One sample $z$ test:
  • Scores are normally distributed in the population
  • Population standard deviation $\sigma$ is known
  • Sample is a simple random sample from the population. That is, observations are independent of one another
Friedman test:
  • Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Mann-Whitney-Wilcoxon test:
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Test statistic
Two way ANOVA:
For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
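
Outside SPSS and Jamovi (covered below), these $F$ tests can be run in Python; here is a minimal sketch with statsmodels, where the data and the column names `score`, `a`, and `b` are made up for illustration.

```python
# Two way ANOVA sketch with statsmodels; data and column names are made up.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [4.1, 5.2, 6.3, 5.9, 7.0, 6.1, 4.8, 5.5, 6.7, 5.0, 6.2, 7.3],
    "a": ["low", "low", "low", "mid", "mid", "mid",
          "high", "high", "high", "low", "mid", "high"],
    "b": ["m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f"],
})

# Fit the full model: main effects for a and b plus their interaction.
model = smf.ols("score ~ C(a) * C(b)", data=df).fit()

# ANOVA table with the F tests for A, B, and the A x B interaction.
print(sm.stats.anova_lm(model, typ=2))
```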
Chi-squared test:
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells.
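
A quick scipy sketch of this computation, on a made-up $2 \times 2$ table of observed counts:

```python
# Chi-squared test on a made-up 2 x 2 table of observed counts.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 20],
                     [25, 45]])

# correction=False so the plain X^2 formula above is used (no Yates correction).
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(chi2, p, dof)   # X^2, p value, (I - 1)(J - 1) degrees of freedom
print(expected)       # row total x column total / total sample size per cell
```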
One sample $z$ test:
$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
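
A minimal Python sketch of this computation, with made-up numbers ($\bar{y} = 51.5$, $\mu_0 = 50$, $\sigma = 3$, $N = 36$):

```python
# One sample z test from the formula; all numbers are made up.
import math
from scipy.stats import norm

y_bar, mu_0, sigma, N = 51.5, 50, 3, 36
z = (y_bar - mu_0) / (sigma / math.sqrt(N))
p_two_sided = 2 * norm.sf(abs(z))   # area in both tails of the standard normal
print(z, p_two_sided)               # 3.0, ~0.0027
```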
Friedman test:
$Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$

Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.

Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.

Note: if ties are present in the data, the formula for $Q$ is more complicated.
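
A Python sketch that computes $Q$ from this formula for made-up, tie-free data and checks the result against scipy:

```python
# Friedman's Q computed from the formula (no ties) and checked with scipy.
import numpy as np
from scipy.stats import friedmanchisquare

# Made-up scores: N = 5 subjects (rows) x k = 3 related measurements (columns).
scores = np.array([[7, 5, 4],
                   [6, 4, 3],
                   [8, 7, 6],
                   [5, 3, 2],
                   [9, 8, 7]])
N, k = scores.shape

ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # rank within each subject
R = ranks.sum(axis=0)                               # sum of ranks per group
Q = 12 / (N * k * (k + 1)) * (R ** 2).sum() - 3 * N * (k + 1)

print(Q)                              # 10.0
print(friedmanchisquare(*scores.T))   # same Q and its chi-squared p value
```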
Mann-Whitney-Wilcoxon test:
Two different types of test statistics can be used; both will result in the same test outcome. The first is the Wilcoxon rank sum statistic $W$:
  • Rank all $n_1 + n_2$ observations together, from smallest to largest. $W$ is then the sum of the ranks in group 1
The second type of test statistic is the Mann-Whitney $U$ statistic:
  • $U = W - \dfrac{n_1(n_1 + 1)}{2}$
where $n_1$ is the sample size of group 1.

Note: we could just as well base W and U on group 2. This would only 'flip' the right and left sided alternative hypotheses. Also, tables with critical values for $U$ are often based on the smaller of $U$ for group 1 and for group 2.
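
A minimal scipy sketch on made-up samples (recent scipy versions report the $U$ computed from the first sample passed in):

```python
# Mann-Whitney U via scipy on made-up independent samples.
from scipy.stats import mannwhitneyu

group1 = [12, 15, 9, 20, 17]
group2 = [8, 11, 13, 7, 10, 14]

u, p = mannwhitneyu(group1, group2, alternative="two-sided")
print(u, p)   # U = 24 here; W = U + n1(n1 + 1)/2 = 39
```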
Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
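
A Python sketch of these statistics with statsmodels; the data are simulated and all names and settings are made up.

```python
# Logistic regression sketch with statsmodels on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))                    # two quantitative predictors
true_log_odds = 0.5 + 1.0 * x[:, 0] - 0.8 * x[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-true_log_odds)))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
print(fit.summary())                # Wald z = b_k / SE(b_k) per coefficient
print(2 * (fit.llf - fit.llnull))   # model chi-squared = D_null - D_K
```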
Pooled standard deviation

Two way ANOVA (n.a. for the other methods):
$ \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $
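
A one-line numeric sketch with made-up sums of squares:

```python
# Pooled standard deviation from made-up ANOVA output.
import math

ss_error, df_error = 36.0, 18
s_p = math.sqrt(ss_error / df_error)   # = sqrt(mean square error)
print(s_p)
```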
Sampling distribution of the test statistic if H0 were true
Two way ANOVA (sampling distribution of $F$):
For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
Chi-squared test (sampling distribution of $X^2$):
Approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom.

One sample $z$ test (sampling distribution of $z$):
Standard normal distribution.

Friedman test (sampling distribution of $Q$):
If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

For small samples, the exact distribution of $Q$ should be used.

Mann-Whitney-Wilcoxon test:
Sampling distribution of $W$:
For large samples, $W$ is approximately normally distributed with mean $\mu_W$ and standard deviation $\sigma_W$ if the null hypothesis were true. Here $$ \begin{aligned} \mu_W &= \dfrac{n_1(n_1 + n_2 + 1)}{2}\\ \sigma_W &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} $$ Hence, for large samples, the standardized test statistic $$ z_W = \dfrac{W - \mu_W}{\sigma_W}\\ $$ follows approximately the standard normal distribution if the null hypothesis were true. Note that if your $W$ value is based on group 2, $\mu_W$ becomes $\frac{n_2(n_1 + n_2 + 1)}{2}$.

Sampling distribution of $U$:
For large samples, $U$ is approximately normally distributed with mean $\mu_U$ and standard deviation $\sigma_U$ if the null hypothesis were true. Here $$ \begin{aligned} \mu_U &= \dfrac{n_1 n_2}{2}\\ \sigma_U &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} $$ Hence, for large samples, the standardized test statistic $$ z_U = \dfrac{U - \mu_U}{\sigma_U}\\ $$ follows approximately the standard normal distribution if the null hypothesis were true.

For small samples, the exact distribution of $W$ or $U$ should be used.

Note: if ties are present in the data, the formula for the standard deviations $\sigma_W$ and $\sigma_U$ is more complicated.
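
A Python sketch of this normal approximation, reusing the made-up tie-free samples from the scipy sketch under 'Test statistic':

```python
# Large-sample normal approximation for W (no ties); samples are made up.
import math
from scipy.stats import norm, rankdata

group1 = [12, 15, 9, 20, 17]
group2 = [8, 11, 13, 7, 10, 14]
n1, n2 = len(group1), len(group2)

ranks = rankdata(group1 + group2)   # rank all n1 + n2 observations together
w = ranks[:n1].sum()                # Wilcoxon rank sum for group 1
mu_w = n1 * (n1 + n2 + 1) / 2
sigma_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z_w = (w - mu_w) / sigma_w

print(w, z_w, 2 * norm.sf(abs(z_w)))   # W, z_W, two sided p value
```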

Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
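
A Python sketch of the likelihood ratio chi-squared test for one coefficient, on the same kind of simulated data as the earlier logistic regression sketch:

```python
# Likelihood ratio chi-squared test for a single coefficient: compare the
# deviance of the model without predictor 2 to that of the full model.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x[:, 0] - 0.8 * x[:, 1]))))

full = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
reduced = sm.Logit(y, sm.add_constant(x[:, 0])).fit(disp=False)  # x_2 dropped

x2_stat = 2 * (full.llf - reduced.llf)    # = D_{K-1} - D_K
print(x2_stat, chi2.sf(x2_stat, df=1))    # compare to chi-squared with 1 df
```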
Significant?
Two way ANOVA:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
Chi-squared test:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
One sample $z$ test:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$

Friedman test:
If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Mann-Whitney-Wilcoxon test:
For large samples, the table for standard normal probabilities can be used with the standardized test statistic $z_W$ or $z_U$:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Logistic regression:
For the model chi-squared test for the complete regression model and likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$
Confidence interval

One sample $z$ test ($C\%$ confidence interval for $\mu$):
$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
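
A numeric sketch of this interval with made-up values:

```python
# C% confidence interval for mu; all numbers are made up.
import math
from scipy.stats import norm

y_bar, sigma, N, C = 51.5, 3, 36, 95
z_star = norm.ppf(1 - (1 - C / 100) / 2)   # 1.96 for C = 95
half_width = z_star * sigma / math.sqrt(N)
print(y_bar - half_width, y_bar + half_width)
```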
Logistic regression (Wald-type approximate $C\%$ confidence interval for $\beta_k$):
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Effect size

Two way ANOVA:
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\eta^2$:
    Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to:
    $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

  • Proportion variance explained $\eta^2_{partial}$: $$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
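
A numeric sketch of these formulas for effect A, with made-up sums of squares and degrees of freedom:

```python
# Effect size arithmetic for effect A; all numbers are made up.
ss_a, ss_b, ss_int, ss_error = 24.0, 8.0, 4.0, 36.0
df_a, df_error = 2, 18

ss_total = ss_a + ss_b + ss_int + ss_error
ms_error = ss_error / df_error

eta2_a = ss_a / ss_total                                      # 0.333...
omega2_a = (ss_a - df_a * ms_error) / (ss_total + ms_error)   # 0.270...
eta2_partial_a = ss_a / (ss_a + ss_error)                     # 0.4
print(eta2_a, omega2_a, eta2_partial_a)
```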
One sample $z$ test (Cohen's $d$):
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
Logistic regression (goodness of fit measure $R^2_L$):
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure is agreed upon.
Visual representation

One sample $z$ test:
(figure: one sample $z$ test)
ANOVA table

Two way ANOVA:
(figure: two way ANOVA table)
Equivalent to

Two way ANOVA:
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.

Mann-Whitney-Wilcoxon test:
If there are no ties in the data, the two sided Mann-Whitney-Wilcoxon test is equivalent to the Kruskal-Wallis test with an independent variable with 2 levels ($I$ = 2).
Example context
  • Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? Is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
  • Chi-squared test: Is there an association between economic class and gender? Is the distribution of economic class different between men and women?
  • One sample $z$ test: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.
  • Friedman test: Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
  • Mann-Whitney-Wilcoxon test: Do men tend to score higher on social economic status than women?
  • Logistic regression: Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
SPSS (n.a. for the one sample $z$ test)
Two way ANOVA:
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Chi-squared test:
Analyze > Descriptive Statistics > Crosstabs...
  • Put one of your two categorical variables in the box below Row(s), and the other categorical variable in the box below Column(s)
  • Click the Statistics... button, and click on the square in front of Chi-square
  • Continue and click OK
Friedman test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
  • Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
  • Under Test Type, select the Friedman test
Mann-Whitney-Wilcoxon test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples...
  • Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Jamovi (n.a. for the one sample $z$ test)
Two way ANOVA:
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
Chi-squared test:
Frequencies > Independent Samples - $\chi^2$ test of association
  • Put one of your two categorical variables in the box below Rows, and the other categorical variable in the box below Columns
Friedman test:
ANOVA > Repeated Measures ANOVA - Friedman
  • Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Mann-Whitney-Wilcoxon test:
T-Tests > Independent Samples T-Test
  • Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Mann-Whitney U
  • Under Hypothesis, select your alternative hypothesis
Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Practice questions