This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
One categorical with $I$ independent groups ($I \geqslant 2$)
One within subject factor ($\geq 2$ related groups)
2 paired groups
None
2 paired groups
One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
Dependent variable
Dependent variable
Dependent variable
Dependent variable
Dependent variable
Dependent variable
Dependent variable
Dependent variable
One categorical with $J$ independent groups ($J \geqslant 2$)
One of ordinal level
One categorical with 2 independent groups
One quantitative of interval or ratio level
One quantitative of interval or ratio level
One of ordinal level
One categorical with 2 independent groups
One quantitative of interval or ratio level
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
Null hypothesis
H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$,
the probability of drawing an observation from condition $J$ is $\pi_J$
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H0: the population medians for the $I$ groups are equal
Else:
Formulation 1:
H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups
Formulation 2:
H0:
P(an observation from population $g$ exceeds an observation from population $h$) = P(an observation from population $h$ exceeds an observation from population $g$), for each pair of groups.
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
H0: $\pi_1 = \pi_2 = \ldots = \pi_I$
Here $\pi_1$ is the population proportion of 'successes' for group 1, $\pi_2$ is the population proportion of 'successes' for group 2, and $\pi_I$ is the population proportion of 'successes' for group $I.$
H0: $\mu = \mu_0$
Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.
H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
H0: the population median of the difference scores is equal to zero
A difference score is the difference between the first score of a pair and the second score of a pair.
Model chi-squared test for the complete regression model:
H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
H0: $\beta_k = 0$
or in terms of odds ratio:
H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H0: $\beta_k = 0$
or in terms of odds ratio:
H0: $e^{\beta_k} = 1$
in the regression equation
$
\ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K
$. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $ y = 1$ (or equivalently, the proportion of $ y = 1$ in the population) given the scores on the independent variables.
ANOVA $F$ tests:
H0 for main and interaction effects together (model): no main effects and interaction effect
H0 for independent variable A: no main effect for A
H0 for independent variable B: no main effect for B
H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
H1: the population proportions are not all as specified under the null hypothesis
or equivalently
H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
H1: not all of the population medians for the $I$ groups are equal
Else:
Formulation 1:
H1:
the population scores in some groups are systematically higher or lower than the population scores in other groups
Formulation 2:
H1:
for at least one pair of groups:
P(an observation from population $g$ exceeds an observation from population $h$) $\neq$ P(an observation from population $h$ exceeds an observation from population $g$)
H1: not all population proportions are equal
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
H1 two sided: the population median of the difference scores is different from zero
H1 right sided: the population median of the difference scores is larger than zero
H1 left sided: the population median of the difference scores is smaller than zero
Model chi-squared test for the complete regression model:
H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
H1: $\beta_k \neq 0$
or in terms of odds ratio:
H1: $e^{\beta_k} \neq 1$
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
H1 right sided: $\beta_k > 0$
H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H1: $\beta_k \neq 0$
or in terms of odds ratio:
H1: $e^{\beta_k} \neq 1$
ANOVA $F$ tests:
H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
H1 for independent variable A: there is a main effect for A
H1 for independent variable B: there is a main effect for B
H1 for the interaction term: there is an interaction effect between A and B
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
Sample is a simple random sample from the population. That is, observations are independent of one another
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Difference scores are normally distributed in the population
Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
Scores are normally distributed in the population
Sample is a simple random sample from the population. That is, observations are independent of one another
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
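As an illustration, here is a minimal Python sketch (assuming SciPy is available; the counts and hypothesized proportions are hypothetical) that computes $X^2$ both from the formula above and with scipy.stats.chisquare:

```python
import numpy as np
from scipy import stats

# hypothetical observed counts for J = 3 categories (low, moderate, high)
observed = np.array([30, 80, 40])
# population proportions according to H0
pi_0 = np.array([0.2, 0.6, 0.2])

N = observed.sum()
expected = N * pi_0                      # expected cell counts

# X^2 computed from the formula above
X2 = np.sum((observed - expected) ** 2 / expected)

# same test via SciPy; df = J - 1
X2_scipy, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(X2, X2_scipy, p_value)
```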
$H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$
Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N (N + 1)} \times \sum \frac{R^2_i}{n_i}$ and then subtract $3(N + 1)$.
Note: if ties are present in the data, the formula for $H$ is more complicated.
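A minimal Python sketch (hypothetical scores for three independent groups, assuming SciPy is available); scipy.stats.kruskal computes $H$ with a tie correction and returns the $p$ value based on the chi-squared approximation:

```python
from scipy import stats

# hypothetical scores on the dependent variable for three independent groups
group_1 = [12, 15, 11, 19, 14]
group_2 = [22, 18, 25, 20, 17]
group_3 = [13, 16, 14, 21, 15]

# H statistic and p value based on the chi-squared approximation with I - 1 df
H, p_value = stats.kruskal(group_1, group_2, group_3)
print(H, p_value)
```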
If a failure is scored as 0 and a success is scored as 1:
$Q = k (k - 1) \dfrac{\sum_{\mbox{groups}} \Big(\mbox{group total} - \frac{\mbox{grand total}}{k}\Big)^2}{\sum_{\mbox{blocks}} \mbox{block total} \times (k - \mbox{block total})}$
Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.
Before computing $Q$, first exclude blocks with equal scores in all $k$ groups.
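A minimal Python sketch (hypothetical 0/1 scores for $k = 3$ related groups) that computes $Q$ from the formula above after excluding blocks with equal scores; statsmodels also offers this test as cochrans_q:

```python
import numpy as np
from scipy import stats

# hypothetical data: rows are blocks (subjects), columns are the k = 3 related groups,
# scored 0 (failure) or 1 (success)
data = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],   # equal scores in all groups: excluded below
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],   # equal scores in all groups: excluded below
    [1, 1, 0],
])

# exclude blocks whose scores are equal in all k groups
data = data[data.min(axis=1) != data.max(axis=1)]

k = data.shape[1]
group_totals = data.sum(axis=0)
block_totals = data.sum(axis=1)
grand_total = data.sum()

# Q computed from the formula above
Q = k * (k - 1) * np.sum((group_totals - grand_total / k) ** 2) \
    / np.sum(block_totals * (k - block_totals))

# p value from the chi-squared distribution with k - 1 degrees of freedom
p_value = stats.chi2.sf(Q, k - 1)
print(Q, p_value)
```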
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores).
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.
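A minimal Python sketch (hypothetical paired scores, assuming SciPy is available) covering both $t$ tests above; the paired samples $t$ test is identical to a one sample $t$ test on the difference scores with $\mu_0 = 0$:

```python
import numpy as np
from scipy import stats

# hypothetical paired scores (e.g. mental health before and after an intervention)
before = np.array([48, 52, 45, 60, 55, 49, 51, 47])
after  = np.array([51, 54, 47, 63, 54, 55, 52, 50])

# paired samples t test = one sample t test on the difference scores with mu_0 = 0
t_paired, p_paired = stats.ttest_rel(after, before)
diff = after - before
t_diff, p_diff = stats.ttest_1samp(diff, popmean=0)   # identical result

# one sample t test against mu_0 = 50
t_one, p_one = stats.ttest_1samp(before, popmean=50)

# Cohen's d for the difference scores
d = (diff.mean() - 0) / diff.std(ddof=1)
print(t_paired, p_paired, t_one, p_one, d)
```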
$W = $ number of difference scores that are larger than 0
Model chi-squared test for the complete regression model:
$X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
$D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.
Likelihood ratio chi-squared test for individual $\beta_k$:
$X^2 = D_{K-1} - D_K$
$D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
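A minimal Python sketch with statsmodels (hypothetical predictors and binary outcome) showing the model chi-squared test, the Wald test for an individual coefficient, and the likelihood ratio test obtained by refitting the model without that predictor:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# hypothetical data: two predictors and a binary outcome
n = 200
x1 = rng.normal(size=n)                       # e.g. body mass index (standardized)
x2 = rng.normal(size=n)                       # e.g. stress level (standardized)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.8 * x2))))

X_full = sm.add_constant(np.column_stack([x1, x2]))
full = sm.Logit(y, X_full).fit(disp=0)

# model chi-squared test: X^2 = null deviance - model deviance, df = K
print(full.llr, full.llr_pvalue)

# Wald test for beta_1: z = b_1 / SE(b_1); z^2 is the chi-squared version
z = full.params[1] / full.bse[1]
print(z, z ** 2, np.exp(full.params[1]))      # odds ratio e^{b_1}

# likelihood ratio test for beta_1: refit without x1 and compare deviances
X_red = sm.add_constant(x2)
reduced = sm.Logit(y, X_red).fit(disp=0)
lr = 2 * (full.llf - reduced.llf)             # = D_{K-1} - D_K
print(lr)
```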
For main and interaction effects together (model):
$F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
$F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
$F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
$F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Sampling distribution of $X^2$ and of the Wald statistic if H0 were true
Sampling distribution of $F$ if H0 were true
Approximately the chi-squared distribution with $J - 1$ degrees of freedom
For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom.
For small samples, the exact distribution of $H$ should be used.
If the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom
$t$ distribution with $N - 1$ degrees of freedom
$t$ distribution with $N - 1$ degrees of freedom
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic
$$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$
follows approximately the standard normal distribution if the null hypothesis were true.
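A minimal Python sketch (hypothetical paired scores; assumes SciPy 1.7 or newer for binomtest) showing the exact binomial version of the sign test and the normal approximation:

```python
import numpy as np
from scipy import stats

# hypothetical paired scores (e.g. mental health before and after a mindfulness course)
before = np.array([48, 52, 45, 60, 55, 49, 51, 47, 53, 58])
after  = np.array([51, 54, 47, 63, 54, 55, 52, 50, 53, 61])

diff = after - before
diff = diff[diff != 0]               # pairs with a zero difference are dropped (common convention)
W = np.sum(diff > 0)                 # number of positive difference scores
n = len(diff)

# exact test: W ~ Binomial(n, 0.5) under H0
exact = stats.binomtest(W, n, p=0.5, alternative='two-sided')
print(W, exact.pvalue)

# normal approximation for large n
z = (W - n * 0.5) / np.sqrt(n * 0.25)
p_approx = 2 * stats.norm.sf(abs(z))
print(z, p_approx)
```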
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
chi-squared distribution with 1 degree of freedom
For main and interaction effects together (model):
$F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
$F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
$F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
$F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
n.a.
n.a.
n.a.
$C\%$ confidence interval for $\mu$
$C\%$ confidence interval for $\mu$
n.a.
Wald-type approximate $C\%$ confidence interval for $\beta_k$
n.a.
-
-
-
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
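A minimal Python sketch (hypothetical scores, assuming SciPy is available) computing the $t$-based 95% confidence interval by hand; for logistic regression, statsmodels returns the Wald-type interval via the fitted model's conf_int() method:

```python
import numpy as np
from scipy import stats

# hypothetical scores (e.g. mental health scores of office workers)
y = np.array([48, 52, 45, 60, 55, 49, 51, 47, 53, 58])

N = len(y)
mean = y.mean()
s = y.std(ddof=1)

# critical value t* for a 95% confidence interval, t distribution with N - 1 df
t_star = stats.t.ppf(0.975, df=N - 1)

lower = mean - t_star * s / np.sqrt(N)
upper = mean + t_star * s / np.sqrt(N)
print(lower, upper)
```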
-
n.a.
n.a.
n.a.
Effect size
Effect size
n.a.
Goodness of fit measure $R^2_L$
Effect size
-
-
-
Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$
-
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; there is no single agreed upon measure of goodness of fit.
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
$$
\begin{align}
R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}
\end{align}
$$
$R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
$$
\begin{align}
\eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\
\\
\eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\
\\
\eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}}
\end{align}
$$
$\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$
\begin{align}
\omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\\
\omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\
\end{align}
$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. The formula above only applies to balanced designs (equal sample sizes).
Proportion variance explained $\eta^2_{partial}$:
$$
\begin{align}
\eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\
\\
\eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}}
\end{align}
$$
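A minimal Python sketch with statsmodels (hypothetical balanced data) that fits the two way ANOVA and computes $\eta^2$ and $\eta^2_{partial}$ for factor A from the sums of squares in the ANOVA table:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# hypothetical balanced design: I = 3 economic classes x J = 2 genders, 10 per cell
classes = np.repeat(['low', 'moderate', 'high'], 20)
gender = np.tile(np.repeat(['male', 'female'], 10), 3)
y = rng.normal(50, 10, size=60) + (classes == 'high') * 5

df = pd.DataFrame({'y': y, 'A': classes, 'B': gender})

# two way ANOVA with interaction; F tests for A, B, and A:B
model = smf.ols('y ~ C(A) * C(B)', data=df).fit()
aov = anova_lm(model, typ=2)
print(aov)

ss_total = aov['sum_sq'].sum()          # with a balanced design this equals SS total
ss_error = aov.loc['Residual', 'sum_sq']

eta2_A = aov.loc['C(A)', 'sum_sq'] / ss_total
partial_eta2_A = aov.loc['C(A)', 'sum_sq'] / (aov.loc['C(A)', 'sum_sq'] + ss_error)
print(eta2_A, partial_eta2_A)
```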
n.a.
n.a.
n.a.
Visual representation
Visual representation
n.a.
n.a.
n.a.
-
-
-
-
-
-
n.a.
n.a.
n.a.
n.a.
n.a.
n.a.
n.a.
ANOVA table
-
-
-
-
-
-
-
n.a.
n.a.
Equivalent to
Equivalent to
n.a.
Equivalent to
n.a.
Equivalent to
-
-
Friedman test, with a dichotomous dependent variable (a categorical dependent variable with only two categories).
OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
Example context
Example context
Example context
Example context
Example context
Example context
Example context
Example context
Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2,$ $\pi_{moderate} = 0.6,$ and $\pi_{high} = 0.2$?
Do people from different religions tend to score differently on social economic status?
Subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?
Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?
Is the average mental health score of office workers different from $\mu_0 = 50$?
Do people tend to score higher on mental health after a mindfulness course?
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
Put your categorical variable in the box below Test Variable List
Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Range... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the smallest value you have used to indicate your groups in the box next to Minimum, and the largest value you have used to indicate your groups in the box next to Maximum
Continue and click OK
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
Under Test Type, select Cochran's Q test
Analyze > Compare Means > Paired-Samples T Test...
Put the two paired variables in the boxes below Variable 1 and Variable 2
Analyze > Compare Means > One-Sample T Test...
Put your variable in the box below Test Variable(s)
Fill in the value for $\mu_0$ in the box next to Test Value
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
Put the two paired variables in the boxes below Variable 1 and Variable 2
Under Test Type, select the Sign test
Analyze > Regression > Binary Logistic...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
Put your categorical variable in the box below Variable
Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
ANOVA > One Way ANOVA - Kruskal-Wallis
Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
T-Tests > Paired Samples T-Test
Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
Under Hypothesis, select your alternative hypothesis
T-Tests > One Sample T-Test
Put your variable in the box below Dependent Variables
Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Put the two paired variables in the box below Measures
Regression > 2 Outcomes - Binomial
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors