This page offers structured overviews of one or more selected methods. Add additional methods for comparison (maximum of 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
One within subject factor ($\geq 2$ related groups)
One quantitative of interval or ratio level
None
Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geq 2$, $J \geq 2$)
None
Dependent variable
Variable 2
Dependent variable
Dependent variable
Dependent variable
One categorical with 2 independent groups
One quantitative of interval or ratio level
One quantitative of interval or ratio level
One quantitative of interval or ratio level
One of ordinal level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
H0: $\pi_1 = \pi_2 = \ldots = \pi_I$
Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I.$
H0: $\rho = \rho_0$
Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure for the strength and direction of the linear relationship between two variables of at least interval measurement level.
H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
ANOVA $F$ tests:
H0 for main and interaction effects together (model): no main effects and interaction effect
H0 for independent variable A: no main effect for A
H0 for independent variable B: no main effect for B
H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
H0: $m = m_0$
Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis.
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
H1: not all population proportions are equal
H1 two sided: $\rho \neq \rho_0$
H1 right sided: $\rho > \rho_0$
H1 left sided: $\rho < \rho_0$
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
ANOVA $F$ tests:
H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
H1 for independent variable A: there is a main effect for A
H1 for independent variable B: there is a main effect for B
H1 for the interaction term: there is an interaction effect between A and B
H1 two sided: $m \neq m_0$
H1 right sided: $m > m_0$
H1 left sided: $m < m_0$
Assumptions
Assumptions of test for correlation
Assumptions
Assumptions
Assumptions
Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
Scores are normally distributed in the population
Sample is a simple random sample from the population. That is, observations are independent of one another
Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
The population distribution of the scores is symmetric
Sample is a simple random sample from the population. That is, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
If a failure is scored as 0 and a success is scored as 1:
$Q = k(k - 1) \times \dfrac{\sum_{groups} \Big(\mbox{group total} - \dfrac{\mbox{grand total}}{k}\Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$
Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.
Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
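To make the computation concrete, here is a minimal Python sketch that applies the formula above to a hypothetical 0/1 data matrix (the data are invented for illustration); the $p$ value uses the large-sample chi-squared approximation described further below.

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 scores: rows are blocks (subjects), columns are the k groups
data = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [1, 1, 1],   # equal scores in all groups: excluded below
                 [0, 1, 0],
                 [1, 1, 0],
                 [1, 0, 1],
                 [0, 0, 0]])  # equal scores in all groups: excluded below

# Exclude blocks with equal scores in all k groups
data = data[data.min(axis=1) != data.max(axis=1)]

k = data.shape[1]
group_totals = data.sum(axis=0)   # sum of scores per group
block_totals = data.sum(axis=1)   # sum of scores per block
grand_total = data.sum()

Q = (k * (k - 1) * ((group_totals - grand_total / k) ** 2).sum()
     / (block_totals * (k - block_totals)).sum())
p = stats.chi2.sf(Q, df=k - 1)    # large-sample chi-squared approximation
print(Q, p)
```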
Test statistic for testing H0: $\rho = 0$:
$t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}} $
where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values of $\rho$ other than $\rho = 0$: first apply the Fisher transformation
$r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg)$, where $r$ is the sample correlation
$\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg)$, where $\rho_0$ is the population correlation according to H0
Then the test statistic is
$z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{1 / \sqrt{N - 3}}$
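A minimal Python sketch of both test statistics on hypothetical data; the value $\rho_0 = 0.3$ is just an example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=30)                  # hypothetical variable 1
y = 0.5 * x + rng.normal(size=30)        # hypothetical variable 2
N = len(x)

# Test of H0: rho = 0 via the t statistic
r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)
p_t = 2 * stats.t.sf(abs(t), df=N - 2)   # two sided p value

# Test of H0: rho = rho_0 (here 0.3) via the Fisher transformation
rho_0 = 0.3
r_fisher = np.arctanh(r)                 # = 0.5 * log((1 + r) / (1 - r))
rho_0_fisher = np.arctanh(rho_0)
z = (r_fisher - rho_0_fisher) * np.sqrt(N - 3)
p_z = 2 * stats.norm.sf(abs(z))          # two sided p value
```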
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.
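For illustration, a minimal Python sketch that computes $t$ by hand on hypothetical scores and checks it against SciPy's built-in one-sample $t$ test:

```python
import numpy as np
from scipy import stats

scores = np.array([48., 52, 55, 47, 51, 53, 49, 56, 50, 54])  # hypothetical
mu_0 = 50

# By hand
N = len(scores)
t = (scores.mean() - mu_0) / (scores.std(ddof=1) / np.sqrt(N))

# Equivalent built-in test
t_scipy, p = stats.ttest_1samp(scores, popmean=mu_0)
```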
ANOVA $F$ tests:
For main and interaction effects together (model): $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A: $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B: $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term: $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
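A minimal sketch of this two-way ANOVA in Python with statsmodels, on hypothetical balanced data (the factor names and sample sizes are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical balanced 3 x 2 design: economic class (A) by gender (B)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "score": rng.normal(50, 10, size=60),
    "A": np.repeat(["low", "moderate", "high"], 20),
    "B": np.tile(np.repeat(["male", "female"], 10), 3),
})

model = smf.ols("score ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p per effect
print(model.fvalue, model.f_pvalue)      # F test for the model as a whole
```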
Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score} - m_0)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
For each subject, compute the absolute value of the difference score $|\mbox{score} - m_0|$.
Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:
$W_1 = \sum\, R_d^{+}$
or
$W_1 = \sum\, R_d^{-}$
That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
Tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
$W_2 = \sum\, \mbox{sign}_d \times R_d$
That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
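The steps above can be made concrete in a short Python sketch (hypothetical scores; ranks are computed with SciPy's rankdata, which assigns average ranks to ties):

```python
import numpy as np
from scipy.stats import rankdata

scores = np.array([48., 52, 55, 47, 51, 53, 49, 56, 50, 54])  # hypothetical
m_0 = 50

d = scores - m_0
sign_d = np.sign(d)

# Exclude difference scores of zero, leaving N_r scores
keep = d != 0
d, sign_d = d[keep], sign_d[keep]
N_r = keep.sum()

ranks = rankdata(np.abs(d))        # ties get the average of the ranks they occupy

W1_pos = ranks[sign_d > 0].sum()   # sum of ranks of positive differences
W1_neg = ranks[sign_d < 0].sum()   # sum of ranks of negative differences
W2 = (sign_d * ranks).sum()        # signed-rank sum
```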
n.a.
n.a.
n.a.
Pooled standard deviation
n.a.
-
-
-
$
\begin{aligned}
s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\
&= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\
&= \sqrt{\mbox{mean square error}}
\end{aligned}
$
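A minimal numerical sketch of this pooled standard deviation, on hypothetical scores for a $2 \times 2$ design:

```python
import numpy as np

# Hypothetical scores per cell of a 2 x 2 design (I = J = 2)
cells = [np.array([5., 7., 6.]), np.array([8., 9., 7.]),
         np.array([4., 6., 5.]), np.array([9., 8., 10.])]

N = sum(len(c) for c in cells)
ss_error = sum(((c - c.mean()) ** 2).sum() for c in cells)
df_error = N - len(cells)               # N - (I x J)
s_p = np.sqrt(ss_error / df_error)      # square root of mean square error
```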
Sampling distribution of $Q$ if H0 were true
Sampling distribution of $t$ and of $z$ if H0 were true
Sampling distribution of $t$ if H0 were true
Sampling distribution of $F$ if H0 were true
Sampling distribution of $W_1$ and of $W_2$ if H0 were true
If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom
Sampling distribution of $t$:
$t$ distribution with $N - 2$ degrees of freedom
Sampling distribution of $z$:
Approximately the standard normal distribution
$t$ distribution with $N - 1$ degrees of freedom
For main and interaction effects together (model):
$F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
$F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
$F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
$F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
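As a worked illustration of these degrees of freedom, a short Python sketch (the design sizes $I$, $J$, $N$ are hypothetical) that also looks up the corresponding critical $F$ values with SciPy:

```python
from scipy import stats

I, J, N = 3, 2, 60                                 # hypothetical design
df_error = N - I * J

df_model = (I - 1) + (J - 1) + (I - 1) * (J - 1)   # = I * J - 1
df_A, df_B, df_int = I - 1, J - 1, (I - 1) * (J - 1)

# Critical F values at alpha = .05 for each test
for name, df_num in [("model", df_model), ("A", df_A),
                     ("B", df_B), ("interaction", df_int)]:
    print(name, stats.f.ppf(0.95, df_num, df_error))
```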
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here
$$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$
$$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$
follows approximately the standard normal distribution if the null hypothesis were true.
Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here
$$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_2}{\sigma_{W_2}}$$
follows approximately the standard normal distribution if the null hypothesis were true.
If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.
Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated.
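A minimal Python sketch of this normal approximation, reusing the hypothetical scores from the earlier sketch, with SciPy's built-in Wilcoxon test alongside for comparison (SciPy chooses between the exact and approximate distribution itself):

```python
import numpy as np
from scipy import stats
from scipy.stats import rankdata

scores = np.array([48., 52, 55, 47, 51, 53, 49, 56, 50, 54])  # hypothetical
m_0 = 50

d = scores - m_0
d = d[d != 0]                       # exclude zero differences
N_r = len(d)
ranks = rankdata(np.abs(d))
W1 = ranks[d > 0].sum()             # W1 = sum of ranks of positive differences

mu_W1 = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z = (W1 - mu_W1) / sigma_W1
p = 2 * stats.norm.sf(abs(z))       # two sided, no continuity correction

# Built-in alternative
res = stats.wilcoxon(scores - m_0)
```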
Significant?
Significant?
Significant?
Significant?
Significant?
If the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
Check if the $X^2$ observed in the sample is equal to or larger than the critical value $X^{2*}$, or
Check if the $p$ value is equal to or smaller than $\alpha$
Approximate $C\%$ confidence interval for $\rho$:
First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$:
$lower_{Fisher} = r_{Fisher} - z^* \times \dfrac{1}{\sqrt{N - 3}}$
$upper_{Fisher} = r_{Fisher} + z^* \times \dfrac{1}{\sqrt{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get the approximate $C\%$ confidence interval for $\rho$:
$lower = \dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
$upper = \dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
$C\%$ confidence interval for $\mu$:
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
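Both intervals in a short Python sketch (the correlation $r$, sample size $N$, and scores are hypothetical); note that arctanh and tanh implement the Fisher transformation and its inverse:

```python
import numpy as np
from scipy import stats

C = 0.95
z_star = stats.norm.ppf(1 - (1 - C) / 2)          # 1.96 for a 95% interval

# Approximate CI for rho via the Fisher transformation (hypothetical r, N)
r, N = 0.45, 30
lo_f = np.arctanh(r) - z_star / np.sqrt(N - 3)
hi_f = np.arctanh(r) + z_star / np.sqrt(N - 3)
lo, hi = np.tanh([lo_f, hi_f])                    # back-transform to the rho scale

# CI for mu (hypothetical scores)
scores = np.array([48., 52, 55, 47, 51, 53, 49, 56, 50, 54])
t_star = stats.t.ppf(1 - (1 - C) / 2, df=len(scores) - 1)
half = t_star * scores.std(ddof=1) / np.sqrt(len(scores))
ci_mu = (scores.mean() - half, scores.mean() + half)
```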
The Pearson correlation coefficient is a measure for the linear relationship between two quantitative variables.
The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable). For example:
the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
The Pearson correlation coefficient does not say anything about causality.
The Pearson correlation coefficient is sensitive to outliers.
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$
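As a brief illustration on hypothetical scores:

```python
import numpy as np

scores = np.array([48., 52, 55, 47, 51, 53, 49, 56, 50, 54])  # hypothetical
mu_0 = 50
d = (scores.mean() - mu_0) / scores.std(ddof=1)   # Cohen's d
```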
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
$$
\begin{align}
R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}
\end{align}
$$
$R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
$$
\begin{align}
\eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\
\\
\eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\
\\
\eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}}
\end{align}
$$
$\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$
\begin{align}
\omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\end{align}
$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$, but it only applies to balanced designs (equal sample sizes).
Proportion variance explained $\eta^2_{partial}$:
$$
\begin{align}
\eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}}
\end{align}
$$
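A sketch that computes these effect size measures from a statsmodels ANOVA table, refitting the hypothetical balanced design from the earlier ANOVA sketch; with a balanced design the component sums of squares add up to the total sum of squares:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical balanced 3 x 2 design, as in the earlier sketch
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "score": rng.normal(50, 10, size=60),
    "A": np.repeat(["low", "moderate", "high"], 20),
    "B": np.tile(np.repeat(["male", "female"], 10), 3),
})
model = smf.ols("score ~ C(A) * C(B)", data=df).fit()
tab = sm.stats.anova_lm(model, typ=2)

ss, dff = tab["sum_sq"], tab["df"]
ms_error = ss["Residual"] / dff["Residual"]
ss_total = ss.sum()                 # equals SS total for balanced designs
effects = ss.drop("Residual")       # SS for A, B, and the interaction

R2 = model.rsquared                                   # SS model / SS total
eta2 = effects / ss_total
omega2 = (effects - dff.drop("Residual") * ms_error) / (ss_total + ms_error)
eta2_partial = effects / (effects + ss["Residual"])
```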
-
n.a.
n.a.
Visual representation
n.a.
n.a.
-
-
-
-
n.a.
n.a.
n.a.
ANOVA table
n.a.
-
-
-
-
Equivalent to
Equivalent to
n.a.
Equivalent to
n.a.
Friedman test, with a categorical dependent variable consisting of two independent groups.
The results of the significance test ($t$ and $p$ value) testing $H_0$: $\beta_1 = 0$ are equivalent to the results of the significance test testing $H_0$: $\rho = 0$
-
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ code variables.
-
Example context
Example context
Example context
Example context
Example context
Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?
Is there a linear relationship between physical health and mental health?
Is the average mental health score of office workers different from $\mu_0 = 50$?
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
Is the median mental health score of office workers different from $m_0 = 50$?
SPSS
SPSS
SPSS
SPSS
SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
Under Test Type, select Cochran's Q test
Analyze > Correlate > Bivariate...
Put your two variables in the box below Variables
Analyze > Compare Means > One-Sample T Test...
Put your variable in the box below Test Variable(s)
Fill in the value for $\mu_0$ in the box next to Test Value
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:
Analyze > Nonparametric Tests > One Sample...
On the Objective tab, choose Customize Analysis
On the Fields tab, specify the variable for which you want to compute the Wilcoxon signed-rank test
On the Settings tab, choose Customize tests and check the box for 'Compare median to hypothesized (Wilcoxon signed-rank test)'. Fill in your $m_0$ in the box next to Hypothesized median
Click Run
Double click on the output table to see the full results
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Regression > Correlation Matrix
Put your two variables in the white box at the right
Under Correlation Coefficients, select Pearson (selected by default)
Under Hypothesis, select your alternative hypothesis
T-Tests > One Sample T-Test
Put your variable in the box below Dependent Variables
Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
T-Tests > One Sample T-Test
Put your variable in the box below Dependent Variables
Under Tests, select Wilcoxon rank
Under Hypothesis, fill in the value for $m_0$ in the box next to Test Value, and select your alternative hypothesis