This page offers structured overviews of one or more selected methods. Add additional methods for comparison (max. of 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Two way ANOVA
Friedman test
Goodness of fit test
$z$ test for the difference between two proportions
Pearson correlation
Two sample $t$ test - equal variances assumed
Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
One within subject factor ($\geq 2$ related groups)
None
One categorical with 2 independent groups
One quantitative of interval or ratio level
One categorical with 2 independent groups
Dependent variable
Dependent variable
Dependent variable
Dependent variable
Variable 2
Dependent variable
One quantitative of interval or ratio level
One of ordinal level
One categorical with $J$ independent groups ($J \geqslant 2$)
One categorical with 2 independent groups
One quantitative of interval or ratio level
One quantitative of interval or ratio level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
ANOVA $F$ tests:
H0 for main and interaction effects together (model): no main effects and interaction effect
H0 for independent variable A: no main effect for A
H0 for independent variable B: no main effect for B
H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$,
the probability of drawing an observation from condition $J$ is $\pi_J$
H0: $\pi_1 = \pi_2$
Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
H0: $\rho = \rho_0$
Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level.
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
ANOVA $F$ tests:
H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
H1 for independent variable A: there is a main effect for A
H1 for independent variable B: there is a main effect for B
H1 for the interaction term: there is an interaction effect between A and B
H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups
H1: the population proportions are not all as specified under the null hypothesis
or equivalently
H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
H1 two sided: $\pi_1 \neq \pi_2$
H1 right sided: $\pi_1 > \pi_2$
H1 left sided: $\pi_1 < \pi_2$
H1 two sided: $\rho \neq \rho_0$
H1 right sided: $\rho > \rho_0$
H1 left sided: $\rho < \rho_0$
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions of test for correlation
Assumptions
Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
Sample is a simple random sample from the population. That is, observations are independent of one another
Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
Significance test: number of successes and number of failures are each 5 or more in both sample groups
Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
Within each population, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
For main and interaction effects together (model): $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A: $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B: $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term: $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
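As an illustration, a two way ANOVA can be run in Python with statsmodels. The sketch below assumes a hypothetical long-format data set with columns y, A, and B:

```python
# Minimal sketch of a two way ANOVA in Python (statsmodels).
# The data frame and column names (y, A, B) are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "y": [4.1, 5.0, 6.2, 5.5, 7.1, 6.8, 3.9, 4.7, 6.0, 5.8, 7.3, 6.5],
    "A": ["low", "low", "mod", "mod", "high", "high"] * 2,
    "B": ["m", "m", "m", "m", "m", "m", "f", "f", "f", "f", "f", "f"],
})

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()  # main effects + interaction
print(sm.stats.anova_lm(model, typ=2))             # F and p value per effect
```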
$Q = \dfrac{12}{N \times k(k + 1)} \times \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.
Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.
Note: if ties are present in the data, the formula for $Q$ is more complicated.
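For illustration, a minimal Python sketch with hypothetical scores (scipy also applies a tie correction, so its result can differ slightly from the plain formula when ties are present):

```python
# Minimal sketch: Friedman test for k = 3 related measurements.
# Scores are hypothetical.
from scipy.stats import friedmanchisquare

t1 = [5, 7, 6, 4, 8]  # measurement point 1
t2 = [6, 8, 7, 5, 9]  # measurement point 2
t3 = [4, 6, 5, 3, 7]  # measurement point 3

Q, p = friedmanchisquare(t1, t2, t3)
print(Q, p)
```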
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
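A minimal Python sketch with hypothetical observed counts and H0 proportions:

```python
# Minimal sketch: chi-squared goodness of fit test on hypothetical counts.
# H0 proportions: pi_low = 0.2, pi_moderate = 0.6, pi_high = 0.2.
from scipy.stats import chisquare

observed = [18, 55, 27]                  # observed cell counts, N = 100
N = sum(observed)
expected = [0.2 * N, 0.6 * N, 0.2 * N]   # expected cell count = N * pi_j

X2, p = chisquare(f_obs=observed, f_exp=expected)
print(X2, p)
```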
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$,
$p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$,
$p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$,
$n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
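A minimal Python sketch, assuming hypothetical success counts $X_1$, $X_2$ and sample sizes $n_1$, $n_2$:

```python
# Minimal sketch: z test for the difference between two proportions,
# computed from hypothetical counts of successes and sample sizes.
from math import sqrt
from scipy.stats import norm

X1, n1 = 45, 200   # successes and sample size, group 1
X2, n2 = 30, 180   # successes and sample size, group 2

p1, p2 = X1 / n1, X2 / n2
p = (X1 + X2) / (n1 + n2)                       # pooled proportion
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

p_two_sided = 2 * norm.sf(abs(z))
print(z, p_two_sided)
```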
Test statistic for testing H0: $\rho = 0$:
$t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}} $
where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values for $\rho$ other than $\rho = 0$:
$r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$, where $r$ is the sample correlation
$\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg )$, where $\rho_0$ is the population correlation according to H0
$z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$, where $N$ is the sample size
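A minimal Python sketch of both test statistics, on hypothetical data (scipy.stats.pearsonr returns the two sided $p$ value for H0: $\rho = 0$ directly):

```python
# Minimal sketch: t test for H0: rho = 0, and Fisher z test for H0: rho = rho_0.
# The data are hypothetical; np.arctanh is exactly the Fisher transformation.
import numpy as np
from scipy.stats import pearsonr, t as t_dist, norm

x = np.array([2.0, 3.1, 4.5, 5.2, 6.8, 7.9, 8.4, 9.1])
y = np.array([1.9, 3.5, 4.1, 5.8, 6.2, 7.5, 8.8, 9.3])
N = len(x)

r, p_scipy = pearsonr(x, y)                    # r and two sided p for rho = 0
t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)   # same test done by hand
print(t, 2 * t_dist.sf(abs(t), df=N - 2))

rho_0 = 0.5                                    # H0 value other than 0
z = (np.arctanh(r) - np.arctanh(rho_0)) / np.sqrt(1 / (N - 3))
print(z, 2 * norm.sf(abs(z)))
```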
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2,
$s_p$ is the pooled standard deviation,
$n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
Pooled standard deviation
n.a.
n.a.
n.a.
n.a.
Pooled standard deviation
$
\begin{aligned}
s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\
&= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\
&= \sqrt{\mbox{mean square error}}
\end{aligned}
$
and for the two sample $t$ test:
$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
Here $s^2_1$ is the sample variance in group 1 and $s^2_2$ is the sample variance in group 2.
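A minimal Python sketch that computes $s_p$ and $t$ by hand for two hypothetical samples and checks the result against scipy (equal_var=True gives the pooled variances test):

```python
# Minimal sketch: pooled standard deviation and two sample t test by hand,
# on hypothetical scores, checked against scipy.
import numpy as np
from scipy.stats import ttest_ind

g1 = np.array([23.0, 25.1, 21.8, 26.4, 24.3, 22.9])
g2 = np.array([20.5, 22.0, 19.8, 23.1, 21.4])
n1, n2 = len(g1), len(g2)

sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
             / (n1 + n2 - 2))
t = (g1.mean() - g2.mean()) / (sp * np.sqrt(1 / n1 + 1 / n2))
print(t)

print(ttest_ind(g1, g2, equal_var=True))  # same t, plus two sided p value
```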
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
Check if $Q$ observed in sample is equal to or larger than critical value $Q^*$ or
Find $p$ value corresponding to observed $Q$ and check if it is equal to or smaller than $\alpha$
Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$t$ Test two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$z$ Test two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
n.a.
n.a.
n.a.
Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$
Approximate $C\%$ confidence interval for $\rho$
$C\%$ confidence interval for $\mu_1 - \mu_2$
-
-
-
Regular (large sample):
$(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
$(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
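A minimal Python sketch of both intervals, using the hypothetical counts from the test statistic sketch above:

```python
# Minimal sketch: large sample and plus four confidence intervals for pi1 - pi2,
# using hypothetical counts and a 95% interval (z* = 1.96).
from math import sqrt
from scipy.stats import norm

X1, n1 = 45, 200
X2, n2 = 30, 180
z_star = norm.ppf(0.975)                        # 1.96 for C = 95

# Regular (large sample) interval
p1, p2 = X1 / n1, X2 / n2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print((p1 - p2) - z_star * se, (p1 - p2) + z_star * se)

# Plus four interval: add one success and one failure to each group
p1p, p2p = (X1 + 1) / (n1 + 2), (X2 + 1) / (n2 + 2)
se4 = sqrt(p1p * (1 - p1p) / (n1 + 2) + p2p * (1 - p2p) / (n2 + 2))
print((p1p - p2p) - z_star * se4, (p1p - p2p) + z_star * se4)
```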
First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$:
$(r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}},\ r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}})$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get the approximate $C\%$ confidence interval for $\rho$:
lower bound = $\dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$, upper bound = $\dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
where $lower_{Fisher}$ and $upper_{Fisher}$ are the lower and upper bound of the confidence interval for $\rho_{Fisher}$.
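A minimal Python sketch of this interval, with a hypothetical $r$ and $N$ (numpy's arctanh and tanh are exactly the Fisher transformation and its inverse):

```python
# Minimal sketch: 95% confidence interval for rho via the Fisher transformation.
import numpy as np
from scipy.stats import norm

r, N = 0.45, 50          # hypothetical sample correlation and sample size
z_star = norm.ppf(0.975)

lo_f = np.arctanh(r) - z_star * np.sqrt(1 / (N - 3))
hi_f = np.arctanh(r) + z_star * np.sqrt(1 / (N - 3))
print(np.tanh(lo_f), np.tanh(hi_f))  # back-transformed bounds for rho
```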
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
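A minimal Python sketch, reusing the hypothetical samples from the pooled standard deviation sketch above:

```python
# Minimal sketch: 95% confidence interval for mu1 - mu2 (pooled variances),
# on hypothetical samples.
import numpy as np
from scipy.stats import t as t_dist

g1 = np.array([23.0, 25.1, 21.8, 26.4, 24.3, 22.9])
g2 = np.array([20.5, 22.0, 19.8, 23.1, 21.4])
n1, n2 = len(g1), len(g2)

sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
             / (n1 + n2 - 2))
t_star = t_dist.ppf(0.975, df=n1 + n2 - 2)
half = t_star * sp * np.sqrt(1 / n1 + 1 / n2)
diff = g1.mean() - g2.mean()
print(diff - half, diff + half)
```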
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
$$
\begin{align}
R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}
\end{align}
$$
$R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
$$
\begin{align}
\eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\
\\
\eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\
\\
\eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}}
\end{align}
$$
$\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$
\begin{align}
\omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\end{align}
$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Note that these formulas only hold for balanced designs (equal sample sizes).
Proportion variance explained $\eta^2_{partial}$:
$$
\begin{align}
\eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}}
\end{align}
$$
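A minimal Python sketch that computes these measures from hypothetical sums of squares of a balanced design:

```python
# Minimal sketch: effect size measures for factor A from hypothetical sums of
# squares of a balanced two way ANOVA (ss_total is the sum of all components).
ss_A, df_A = 120.0, 2
ss_B, df_B = 80.0, 1
ss_int, df_int = 40.0, 2
ss_error, df_error = 360.0, 54
ss_total = ss_A + ss_B + ss_int + ss_error
ms_error = ss_error / df_error

R2 = (ss_A + ss_B + ss_int) / ss_total                     # model
eta2_A = ss_A / ss_total
omega2_A = (ss_A - df_A * ms_error) / (ss_total + ms_error)
eta2_partial_A = ss_A / (ss_A + ss_error)
print(R2, eta2_A, omega2_A, eta2_partial_A)
```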
-
-
-
The Pearson correlation coefficient is a measure of the linear relationship between two quantitative variables.
The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable). For example:
the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
The Pearson correlation coefficient does not say anything about causality.
The Pearson correlation coefficient is sensitive to outliers.
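A quick numerical illustration of the linear transformation property, on hypothetical data:

```python
# Minimal sketch: the correlation is unchanged by positive linear
# transformations and flips sign under a negative multiplier.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(size=100)

print(np.corrcoef(x, y)[0, 1])                   # r between x and y
print(np.corrcoef(3 * x + 5, 2 * y - 6)[0, 1])   # identical
print(np.corrcoef(-3 * x + 5, 2 * y - 6)[0, 1])  # same size, opposite sign
```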
Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$:
$$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$
Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means lie apart.
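A minimal Python sketch, again with the hypothetical samples used earlier:

```python
# Minimal sketch: Cohen's d from hypothetical samples.
import numpy as np

g1 = np.array([23.0, 25.1, 21.8, 26.4, 24.3, 22.9])
g2 = np.array([20.5, 22.0, 19.8, 23.1, 21.4])
n1, n2 = len(g1), len(g2)

sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
             / (n1 + n2 - 2))
d = (g1.mean() - g2.mean()) / sp
print(d)  # difference in means, in units of pooled standard deviations
```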
n.a.
n.a.
n.a.
n.a.
n.a.
Visual representation
-
-
-
-
-
ANOVA table
n.a.
n.a.
n.a.
n.a.
n.a.
-
-
-
-
-
Equivalent to
n.a.
n.a.
Equivalent to
Equivalent to
Equivalent to
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
The results of the significance test ($t$ and $p$ value) for $H_0$: $\beta_1 = 0$ in a simple OLS regression are equivalent to the results of the significance test for $H_0$: $\rho = 0$
One way ANOVA with an independent variable with 2 levels ($I$ = 2):
two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
two sample $t$ test is equivalent to $t$ test multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
two sided two sample $t$ test is equivalent to $F$ test regression model
two sample $t$ test is equivalent to $t$ test for regression coefficient $\beta_1$
Example context
Example context
Example context
Example context
Example context
Example context
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
Are the proportions of people with low, moderate, and high socioeconomic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?
Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
Is there a linear relationship between physical health and mental health?
Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
SPSS
SPSS
SPSS
SPSS
SPSS
SPSS
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
Put your categorical variable in the box below Test Variable List
Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
Click the Statistics... button, and click on the square in front of Chi-square
Continue and click OK
Analyze > Correlate > Bivariate...
Put your two variables in the box below Variables
Analyze > Compare Means > Independent-Samples T Test...
Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
Continue and click OK
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
ANOVA > Repeated Measures ANOVA - Friedman
Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
Put your categorical variable in the box below Variable
Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
Regression > Correlation Matrix
Put your two variables in the white box at the right
Under Correlation Coefficients, select Pearson (selected by default)
Under Hypothesis, select your alternative hypothesis
T-Tests > Independent Samples T-Test
Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Under Tests, select Student's (selected by default)
Under Hypothesis, select your alternative hypothesis