This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Sign test
Pearson correlation
$z$ test for the difference between two proportions
Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
Dependent variable
Variable 2
Dependent variable
Dependent variable
One of ordinal level
One quantitative of interval or ratio level
One categorical with 2 independent groups
One quantitative of interval or ratio level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
$H_0$: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
$H_0$: the population median of the difference scores is equal to zero
A difference score is the difference between the first score of a pair and the second score of a pair.
$H_0$: $\rho = \rho_0$
$\rho$ is the unknown Pearson correlation in the population, and $\rho_0$ is the correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure of the strength and direction of the linear relationship between two variables of at least interval measurement level.
$H_0$: $\pi_1 = \pi_2$
$\pi_1$ is the population proportion of 'successes' for group 1; $\pi_2$ is the population proportion of 'successes' for group 2
ANOVA $F$ tests:
$H_0$ for main and interaction effects together (model): no main effects and no interaction effect
$H_0$ for independent variable A: no main effect for A
$H_0$ for independent variable B: no main effect for B
$H_0$ for the interaction term: no interaction effect between A and B
We could also perform $t$ tests for specific contrasts and multiple comparisons, just like we did with one way ANOVA. However, this is more advanced stuff.
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
$H_1$ two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
$H_1$ right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
$H_1$ left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
$H_1$ two sided: the population median of the difference scores is different from zero
$H_1$ right sided: the population median of the difference scores is larger than zero
$H_1$ left sided: the population median of the difference scores is smaller than zero
$H_1$ two sided: $\rho \neq \rho_0$
$H_1$ right sided: $\rho > \rho_0$
$H_1$ left sided: $\rho < \rho_0$
$H_1$ two sided: $\pi_1 \neq \pi_2$
$H_1$ right sided: $\pi_1 > \pi_2$
$H_1$ left sided: $\pi_1 < \pi_2$
ANOVA $F$ tests:
$H_1$ for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
$H_1$ for independent variable A: there is a main effect for A
$H_1$ for independent variable B: there is a main effect for B
$H_1$ for the interaction term: there is an interaction effect between A and B
Assumptions
Assumptions of test for correlation
Assumptions
Assumptions
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
Significance test: number of successes and number of failures are each 5 or more in both sample groups
Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Test statistic
Test statistic
Test statistic
Test statistic
$W =$ the number of difference scores that are larger than 0
Test statistic for testing $H_0$: $\rho = 0$:
$t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}}$
where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values of $\rho$ other than $\rho = 0$:
$r_{Fisher} = \dfrac{1}{2} \times \ln\Bigg(\dfrac{1 + r}{1 - r} \Bigg)$, where $r$ is the sample correlation
$\rho_{0_{Fisher}} = \dfrac{1}{2} \times \ln\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg)$, where $\rho_0$ is the population correlation according to $H_0$
$z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$, where $N$ is the sample size
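As a numerical illustration, here is a minimal Python sketch of both test statistics, assuming NumPy and SciPy are available; the data values and the choice $\rho_0 = 0.5$ are hypothetical:
```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements on two quantitative variables
x = np.array([2.0, 4.1, 3.3, 5.2, 6.0, 4.8, 5.5, 3.9])
y = np.array([1.8, 3.9, 3.0, 4.8, 6.2, 5.1, 5.0, 4.2])
N = len(x)

r = np.corrcoef(x, y)[0, 1]  # sample Pearson correlation

# t test of H0: rho = 0, with N - 2 degrees of freedom
t = r * np.sqrt(N - 2) / np.sqrt(1 - r**2)
p_t = 2 * stats.t.sf(abs(t), df=N - 2)

# Fisher transformation z test of H0: rho = rho_0 (rho_0 = 0.5 is arbitrary here)
rho_0 = 0.5
r_fisher = 0.5 * np.log((1 + r) / (1 - r))
rho_0_fisher = 0.5 * np.log((1 + rho_0) / (1 - rho_0))
z = (r_fisher - rho_0_fisher) * np.sqrt(N - 3)
p_z = 2 * stats.norm.sf(abs(z))

print(r, t, p_t, z, p_z)
```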
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$,
$p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$,
$p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$,
$n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
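A minimal Python sketch of this test statistic, assuming NumPy and SciPy; the counts below are hypothetical:
```python
import numpy as np
from scipy import stats

# Hypothetical data: X successes out of n observations per group
X1, n1 = 45, 100
X2, n2 = 30, 100

p1 = X1 / n1                  # sample proportion of successes in group 1
p2 = X2 / n2                  # sample proportion of successes in group 2
p = (X1 + X2) / (n1 + n2)     # pooled proportion of successes

z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
p_two_sided = 2 * stats.norm.sf(abs(z))

print(z, p_two_sided)
```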
For main and interaction effects together (model): $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A: $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B: $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term: $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1 - p)} = \sqrt{n \times 0.5 \times (1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic
$$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5 \times (1 - 0.5)}}$$
approximately follows the standard normal distribution under the null hypothesis.
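A minimal Python sketch of the sign test (NumPy and SciPy assumed; the paired scores are hypothetical), computing $W$, the exact binomial $p$ value, and the large-sample approximation:
```python
import numpy as np
from scipy import stats

# Hypothetical paired scores, e.g. before and after some treatment
before = np.array([3, 5, 2, 6, 4, 5, 3, 7, 4, 5])
after  = np.array([4, 6, 2, 7, 6, 5, 4, 8, 3, 6])

diff = after - before
n = int(np.sum(diff != 0))    # pairs with a zero difference are dropped
W = int(np.sum(diff > 0))     # test statistic: number of positive differences

# Exact test: W follows a Binomial(n, 0.5) distribution under H0
p_exact = stats.binomtest(W, n, p=0.5, alternative='two-sided').pvalue

# Large-sample approximation: standardized W is approximately standard normal
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
p_approx = 2 * stats.norm.sf(abs(z))

print(W, p_exact, p_approx)
```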
Sampling distribution of $t$:
$t$ distribution with $N - 2$ degrees of freedom
Sampling distribution of $z$:
Approximately the standard normal distribution
Approximately the standard normal distribution
For main and interaction effects together (model):
$F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
$F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
$F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
$F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size
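These degrees of freedom are simple arithmetic; here is a small Python helper (function and variable names are illustrative) that transcribes them, with a SciPy lookup of the critical value $F^*$:
```python
from scipy import stats

def two_way_anova_dfs(I, J, N):
    """Degrees of freedom for the two way ANOVA F tests."""
    df_A = I - 1
    df_B = J - 1
    df_int = (I - 1) * (J - 1)
    df_model = df_A + df_B + df_int
    df_error = N - I * J
    return df_model, df_A, df_B, df_int, df_error

# Example: I = 3, J = 2 groups, N = 120 observations in total
df_model, df_A, df_B, df_int, df_error = two_way_anova_dfs(3, 2, 120)
F_star_A = stats.f.ppf(0.95, df_A, df_error)  # critical value for factor A at alpha = .05
```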
Significant?
Significant?
Significant?
Significant?
If $n$ is small, the table for the binomial distribution should be used:
Two sided:
Check if $W$ observed in sample is in the rejection region or
Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $W$ observed in sample is in the rejection region or
Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $W$ observed in sample is in the rejection region or
Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$t$ Test two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$z$ Test two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$z$ Test left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
n.a.
Approximate $C$% confidence interval for $\rho$
Approximate $C$% confidence interval for $\pi_1 - \pi_2$
n.a.

First compute the approximate $C$% confidence interval for $\rho_{Fisher}$:
$lower_{Fisher} = r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}}$
$upper_{Fisher} = r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \ln\Bigg(\dfrac{1 + r}{1 - r} \Bigg)$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get the approximate $C$% confidence interval for $\rho$:
$lower = \dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
$upper = \dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
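A minimal Python sketch of this interval (NumPy and SciPy assumed; the values of $r$, $N$, and $C$ are hypothetical):
```python
import numpy as np
from scipy import stats

r, N, C = 0.45, 50, 95                  # hypothetical sample correlation, size, level
z_star = stats.norm.ppf(0.5 + C / 200)  # e.g. 1.96 for C = 95

# Confidence interval on the Fisher-transformed scale
r_fisher = 0.5 * np.log((1 + r) / (1 - r))
lower_f = r_fisher - z_star * np.sqrt(1 / (N - 3))
upper_f = r_fisher + z_star * np.sqrt(1 / (N - 3))

# Transform back to the correlation scale
lower = (np.exp(2 * lower_f) - 1) / (np.exp(2 * lower_f) + 1)
upper = (np.exp(2 * upper_f) - 1) / (np.exp(2 * upper_f) + 1)

print(lower, upper)
```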
$(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With the plus four method:
$(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
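A minimal Python sketch of both intervals (NumPy and SciPy assumed; the counts are hypothetical):
```python
import numpy as np
from scipy import stats

X1, n1, X2, n2 = 45, 100, 30, 100  # hypothetical successes and sample sizes
z_star = stats.norm.ppf(0.975)     # 95% confidence

# Regular large sample interval
p1, p2 = X1 / n1, X2 / n2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = ((p1 - p2) - z_star * se, (p1 - p2) + z_star * se)

# Plus four interval: add one success and one failure to each group
p1_plus = (X1 + 1) / (n1 + 2)
p2_plus = (X2 + 1) / (n2 + 2)
se_plus = np.sqrt(p1_plus * (1 - p1_plus) / (n1 + 2) + p2_plus * (1 - p2_plus) / (n2 + 2))
ci_plus = ((p1_plus - p2_plus) - z_star * se_plus, (p1_plus - p2_plus) + z_star * se_plus)

print(ci, ci_plus)
```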

n.a.
Properties of the Pearson correlation coefficient
n.a.
Effect size

The Pearson correlation coefficient is a measure of the linear relationship between two quantitative variables.
The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
The Pearson correlation coefficient can take on values between $-1$ (perfect negative relationship) and $1$ (perfect positive relationship). A value of $0$ means no linear relationship.
The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable). For example (see the sketch following this list):
the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
The Pearson correlation coefficient does not say anything about causality.
The Pearson correlation coefficient is sensitive to outliers.
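A quick NumPy sketch (with randomly generated, hypothetical data) demonstrating the linear transformation property mentioned in the list above:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = x + rng.normal(size=100)

r_xy = np.corrcoef(x, y)[0, 1]
r_pos = np.corrcoef(3 * x + 5, 2 * y - 6)[0, 1]    # identical to r_xy
r_neg = np.corrcoef(-3 * x + 5, 2 * y - 6)[0, 1]   # same magnitude, opposite sign

print(r_xy, r_pos, r_neg)
```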

Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
$$
\begin{align}
R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}
\end{align}
$$
$R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
$$
\begin{align}
\eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\
\\
\eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\
\\
\eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}}
\end{align}
$$
$\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$
\begin{align}
\omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\end{align}
$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. These formulas only apply to balanced designs (equal sample sizes).
Proportion variance explained $\eta^2_{partial}$:
$$
\begin{align}
\eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}}
\end{align}
$$
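To make the formulas concrete, here is a short Python sketch computing the three effect size measures for factor A from hypothetical sums of squares (a balanced design is assumed for $\omega^2$):
```python
# Hypothetical ANOVA table values for a balanced design
ss_A, df_A = 120.0, 2
ss_B, ss_int, ss_error = 80.0, 30.0, 600.0
df_error = 114
ss_total = ss_A + ss_B + ss_int + ss_error
ms_error = ss_error / df_error

eta2_A = ss_A / ss_total
omega2_A = (ss_A - df_A * ms_error) / (ss_total + ms_error)
eta2_partial_A = ss_A / (ss_A + ss_error)

print(eta2_A, omega2_A, eta2_partial_A)
```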
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ code variables.
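A minimal sketch of this equivalence with the statsmodels formula API; the data frame and column names below are hypothetical:
```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per observation
df = pd.DataFrame({
    'score': [5, 7, 6, 8, 4, 6, 5, 9, 7, 6, 8, 5],
    'economic_class': ['low', 'mid', 'high'] * 4,
    'gender': ['m', 'f'] * 6,
})

# C() dummy codes each factor; '*' adds both main effects and the interaction,
# i.e. the (I - 1) + (J - 1) + (I - 1) x (J - 1) code variables described above
model = smf.ols('score ~ C(economic_class) * C(gender)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```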
Example context
Example context
Example context
Example context
Do people tend to score higher on mental health after a mindfulness course?
Is there a linear relationship between physical health and mental health?
Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
SPSS
SPSS
SPSS
SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
Put the two paired variables in the boxes below Variable 1 and Variable 2
Under Test Type, select the Sign test
Analyze > Correlate > Bivariate...
Put your two variables in the box below Variables
SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
Click the Statistics... button, and click on the square in front of Chi-square
Continue and click OK
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Put the two paired variables in the box below Measures
Regression > Correlation Matrix
Put your two variables in the white box at the right
Under Correlation Coefficients, select Pearson (selected by default)
Under Hypothesis, select your alternative hypothesis
Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors