This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
$\pi = \pi_0$
$\pi$ is the population proportion of "successes"; $\pi_0$ is the population proportion of successes according to the null hypothesis
$\mu = \mu_0$
$\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
$m = 0$
$m$ is the unknown population median of the difference scores
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
ANOVA $F$ test:
$\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the unknown mean in population 1; $\mu_2$ is the unknown mean in population 2; $\mu_I$ is the unknown mean in population $I$
$t$ Test for contrast:
$\Psi = 0$
$\Psi$ is a contrast in the population, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the unknown mean in population $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
$\mu_g = \mu_h$
$\mu_g$ is the unknown mean in population $g$; $\mu_h$ is the unknown mean in population $h$
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
$F$ test for the complete regression model:
Not all population regression coefficients are 0, or equivalently:
The variance explained by all the independent variables together (the complete model) is larger than 0 in the population: $\rho^2 > 0$
$t$ test for individual $\beta_k$:
Two sided: $\beta_k \neq 0$
Right sided: $\beta_k > 0$
Left sided: $\beta_k < 0$
Two sided: $\pi \neq \pi_0$
Right sided: $\pi > \pi_0$
Left sided: $\pi < \pi_0$
Two sided: $\mu \neq \mu_0$
Right sided: $\mu > \mu_0$
Left sided: $\mu < \mu_0$
Two sided: $m \neq 0$
Right sided: $m > 0$
Left sided: $m < 0$
ANOVA $F$ test:
Not all population means are equal
$t$ Test for contrast:
Two sided: $\Psi \neq 0$
Right sided: $\Psi > 0$
Left sided: $\Psi < 0$
$t$ Test multiple comparisons:
Usually two sided: $\mu_g \neq \mu_h$
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
In the population, the residuals are normally distributed at each combination of values of the independent variables
In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Sample is a simple random sample from the population. That is, observations are independent of one another
Difference scores are normally distributed in the population
Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
The population of difference scores can be conceived of as the difference scores we would find if we applied our study (e.g., applying an intervention and measuring pre and post scores) to all individuals in the population.
The population distribution of the difference scores is symmetric
Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
Note: sometimes it is considered sufficient for the data to be measured at an ordinal scale, rather than an interval or ratio scale. However, since the test statistic is based on ranked difference scores, we need to know whether a change in scores from, say, 6 to 7 is larger than/smaller than/equal to a change from 5 to 6. This is impossible to know for ordinal scales, since for these scales the size of the difference between values is meaningless.
Within each population, the scores on the dependent variable are normally distributed
The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
$F$ test for the complete regression model:
$
\begin{aligned}[t]
F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\
&= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\
&= \dfrac{\mbox{mean square model}}{\mbox{mean square error}}
\end{aligned}
$
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables
$t$ test for individual $\beta_k$:
$t = \dfrac{b_k}{SE_{b_k}}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
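As a numerical illustration, here is a minimal Python sketch (with made-up data and a single independent variable, so the simple-regression formula for $SE_{b_1}$ above applies) that computes the $F$ test for the complete model and the $t$ test for $\beta_1$ by hand:

```python
import numpy as np
from scipy import stats

# Illustrative data (made up): one independent variable x, dependent variable y
x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0, 11.0, 13.0])
y = np.array([3.1, 4.0, 5.2, 6.8, 7.9, 9.5, 10.1, 12.3])
N, K = len(y), 1

# Least squares estimates b0 (intercept) and b1 (slope)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ss_model = np.sum((y_hat - y.mean())**2)          # sum of squares model
ss_error = np.sum((y - y_hat)**2)                 # sum of squares error
F = (ss_model / K) / (ss_error / (N - K - 1))     # F test for the complete model
p_F = stats.f.sf(F, K, N - K - 1)

s = np.sqrt(ss_error / (N - K - 1))               # sample SD of the residuals
SE_b1 = s / np.sqrt(np.sum((x - x.mean())**2))    # standard error of b1
t = b1 / SE_b1                                    # t test for beta_1
p_t = 2 * stats.t.sf(abs(t), N - K - 1)           # two sided p value

print(F, p_F, t, p_t)   # note: with K = 1, t**2 equals F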
$X$ = number of successes in the sample
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores,
$N$ is the sample size (number of difference scores).
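A minimal Python sketch of this test statistic, using made-up pre/post scores and $\mu_0 = 0$:

```python
import numpy as np
from scipy import stats

# Illustrative pre/post scores for the same subjects (made-up numbers)
pre  = np.array([10, 12, 9, 14, 11, 13, 10, 12])
post = np.array([12, 13, 9, 16, 12, 15, 11, 14])
d = post - pre                      # difference scores
N = len(d)
mu_0 = 0                            # population mean of difference scores under H0

t = (d.mean() - mu_0) / (d.std(ddof=1) / np.sqrt(N))
p_two_sided = 2 * stats.t.sf(abs(t), N - 1)
print(t, p_two_sided)

# Same result via scipy's paired samples t test
print(stats.ttest_rel(post, pre))
```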
Two different types of test statistics can be used; both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic.
In order to compute each of the test statistics, follow the steps below:
For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score}_2 - \mbox{score}_1)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
For each subject, compute the absolute value of the difference score $\mbox{score}_2 - \mbox{score}_1$.
Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:
$W_1 = \sum\, R_d^{+}$
or
$W_1 = \sum\, R_d^{-}$
That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
$W_2 = \sum\, \mbox{sign}_d \times R_d$
That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
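The steps above can be illustrated with a short Python sketch (made-up pre/post scores; `scipy.stats.rankdata` assigns tied ranks their average by default):

```python
import numpy as np
from scipy.stats import rankdata

# Illustrative pre/post scores (made-up numbers)
pre  = np.array([10, 12, 9, 14, 11, 13, 10, 12])
post = np.array([12, 13, 9, 16, 12, 15, 11, 14])
d = post - pre                          # difference scores (score_2 - score_1)

sign_d = np.sign(d)                     # +1, -1, or 0
keep = d != 0                           # exclude subjects with a zero difference
d_r, sign_r = d[keep], sign_d[keep]
N_r = len(d_r)

R_d = rankdata(np.abs(d_r))             # ranks of absolute differences, ties averaged

W1_plus  = R_d[sign_r > 0].sum()        # W1 based on positive differences
W1_minus = R_d[sign_r < 0].sum()        # W1 based on negative differences
W2 = np.sum(sign_r * R_d)               # signed-rank sum; equals W1_plus - W1_minus
print(W1_plus, W1_minus, W2, N_r)
```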
ANOVA $F$ test:
$\begin{aligned}[t]
F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\
&= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\
&= \dfrac{\mbox{mean square between}}{\mbox{mean square error}}
\end{aligned}
$
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model; mean square error is also known as mean square residual or mean square within
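A minimal Python sketch of the ANOVA $F$ statistic, using three made-up groups and cross-checked against `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

# Illustrative scores in I = 3 groups (made-up numbers)
groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([6.5, 7.2, 6.8, 7.9, 7.1]),
          np.array([8.3, 9.1, 8.8, 9.5])]
I = len(groups)
N = sum(len(g) for g in groups)
overall_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - overall_mean)**2 for g in groups)
ss_error   = sum(((g - g.mean())**2).sum() for g in groups)

F = (ss_between / (I - 1)) / (ss_error / (N - I))
p = stats.f.sf(F, I - 1, N - I)
print(F, p)

# Cross-check with scipy's one way ANOVA
print(stats.f_oneway(*groups))
```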
$t$ Test for contrast:
$t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$t$ Test multiple comparisons:
$t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$,
$s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA,
$n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
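The contrast and multiple comparison $t$ statistics can be sketched in Python as follows (same made-up groups as above; the contrast coefficients are an arbitrary example):

```python
import numpy as np
from scipy import stats

# Same illustrative groups as above (made-up numbers)
groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([6.5, 7.2, 6.8, 7.9, 7.1]),
          np.array([8.3, 9.1, 8.8, 9.5])]
I = len(groups)
N = sum(len(g) for g in groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

# Pooled standard deviation based on all I groups
ss_error = sum(((g - g.mean())**2).sum() for g in groups)
s_p = np.sqrt(ss_error / (N - I))

# t test for the contrast psi = mu_1 - (mu_2 + mu_3) / 2, coefficients sum to 0
a = np.array([1.0, -0.5, -0.5])
c = np.sum(a * means)
t_contrast = c / (s_p * np.sqrt(np.sum(a**2 / n)))
p_contrast = 2 * stats.t.sf(abs(t_contrast), N - I)

# Pairwise comparison of group 1 (g) and group 3 (h)
t_pair = (means[0] - means[2]) / (s_p * np.sqrt(1 / n[0] + 1 / n[2]))
p_pair = 2 * stats.t.sf(abs(t_pair), N - I)

print(t_contrast, p_contrast, t_pair, p_pair)
```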
Sample standard deviation of the residuals $s$
n.a.
n.a.
n.a.
Pooled standard deviation
$\begin{aligned}
s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}}
\end{aligned}
$



$
\begin{aligned}
s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\
&= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\
&= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\
&= \sqrt{\mbox{mean square error}}
\end{aligned}
$
where $s^2_i$ is the variance in group $i$
Sampling distribution of $F$:
$F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
$t$ distribution with $N - K - 1$ (df error) degrees of freedom
Binomial($n$, $p$) distribution
Here $n = N$ (total sample size), and $p = \pi_0$ (population proportion according to the null hypothesis)
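For illustration, a short Python sketch that evaluates this binomial sampling distribution for made-up values of $X$, $N$, and $\pi_0$ (`scipy.stats.binomtest` requires a recent SciPy version):

```python
from scipy import stats

# Illustrative numbers: X = 9 successes in N = 30 trials, pi_0 = .2 under H0
N, X, pi_0 = 30, 9, 0.2

# Right sided p value: P(X >= 9) under Binomial(N, pi_0)
p_right = stats.binom.sf(X - 1, N, pi_0)
# Left sided p value: P(X <= 9)
p_left = stats.binom.cdf(X, N, pi_0)
print(p_right, p_left)

# Exact binomial test as implemented in scipy
print(stats.binomtest(X, N, pi_0, alternative='two-sided'))
```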
$t$ distribution with $N - 1$ degrees of freedom
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here
$$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$
$$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$
follows approximately a standard normal distribution if the null hypothesis were true.
Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here
$$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$
Hence, if $N_r$ is large, the standardized test statistic
$$z = \frac{W_2}{\sigma_{W_2}}$$
follows approximately a standard normal distribution if the null hypothesis were true.
If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.
Note: the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated if ties are present in the data.
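A short Python sketch of the normal approximation for $W_1$, using made-up values of $W_1$ and $N_r$ and ignoring the tie correction:

```python
import numpy as np
from scipy import stats

# Suppose the steps above gave W1 = sum of positive ranks and N_r remaining scores
W1, N_r = 61, 12          # made-up values for illustration

mu_W1 = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)

z = (W1 - mu_W1) / sigma_W1
p_two_sided = 2 * stats.norm.sf(abs(z))    # normal approximation, no tie correction
print(z, p_two_sided)
```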
Sampling distribution of $F$:
$F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
$t$ distribution with $N - I$ degrees of freedom
Significant?
Significant?
Significant?
Significant?
Significant?
$F$ test:
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ Test two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $X$ observed in sample is in the rejection region or
Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $X$ observed in sample is in the rejection region or
Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $X$ observed in sample is in the rejection region or
Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
For large samples, the table for standard normal probabilities can be used:
Two sided:
Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$F$ test:
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
$t$ Test for contrast two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test multiple comparisons two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided
Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided
Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$
n.a.
$C\%$ confidence interval for $\mu$
n.a.
$C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$
Confidence interval for $\beta_k$:
$b_k \pm t^* \times SE_{b_k}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
$\hat{y} \pm t^* \times SE_{\hat{y}}$
If only one independent variable:
$SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
$\hat{y} \pm t^* \times SE_{y_{new}}$
If only one independent variable:
$SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
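A minimal Python sketch of these intervals for the simple regression case (same made-up data as in the test statistic sketch; the evaluation point $x^* = 9$ is arbitrary):

```python
import numpy as np
from scipy import stats

# Continuing the simple regression sketch above (one independent variable, made-up data)
x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0, 11.0, 13.0])
y = np.array([3.1, 4.0, 5.2, 6.8, 7.9, 9.5, 10.1, 12.3])
N, K = len(y), 1
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x
s = np.sqrt(np.sum((y - y_hat)**2) / (N - K - 1))       # SD of the residuals

C = 95
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, N - K - 1)  # critical value t*

# Confidence interval for beta_1
SE_b1 = s / np.sqrt(np.sum((x - x.mean())**2))
ci_b1 = (b1 - t_star * SE_b1, b1 + t_star * SE_b1)

# Confidence interval for mu_y and prediction interval for y_new at x* = 9
x_star = 9.0
y_hat_star = b0 + b1 * x_star
SE_mean = s * np.sqrt(1 / N + (x_star - x.mean())**2 / np.sum((x - x.mean())**2))
SE_new  = s * np.sqrt(1 + 1 / N + (x_star - x.mean())**2 / np.sum((x - x.mean())**2))
ci_mu_y  = (y_hat_star - t_star * SE_mean, y_hat_star + t_star * SE_mean)
pi_y_new = (y_hat_star - t_star * SE_new,  y_hat_star + t_star * SE_new)
print(ci_b1, ci_mu_y, pi_y_new)
```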

$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
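For example, in Python (made-up difference scores):

```python
import numpy as np
from scipy import stats

d = np.array([2, 1, 0, 2, 1, 2, 1, 2])      # made-up difference scores
N, C = len(d), 95

t_star = stats.t.ppf(1 - (1 - C / 100) / 2, N - 1)      # critical value t*
ci = (d.mean() - t_star * d.std(ddof=1) / np.sqrt(N),
      d.mean() + t_star * d.std(ddof=1) / np.sqrt(N))
print(ci)
```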
$c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
$(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* = $ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
$\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean for group $i$, $n_i$ is the sample size for group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $N$ is the total sample size, based on all the $I$ groups.
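A Python sketch of these three intervals, using the same made-up groups as before and a Bonferroni adjustment with $m = 3$ pairwise comparisons as an example multiple comparison procedure:

```python
import numpy as np
from scipy import stats

# Same illustrative groups as before (made-up numbers)
groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([6.5, 7.2, 6.8, 7.9, 7.1]),
          np.array([8.3, 9.1, 8.8, 9.5])]
I = len(groups)
N = sum(len(g) for g in groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])
s_p = np.sqrt(sum(((g - g.mean())**2).sum() for g in groups) / (N - I))

C = 95
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, N - I)

# CI for the contrast psi with coefficients a (a sums to 0)
a = np.array([1.0, -0.5, -0.5])
c = np.sum(a * means)
half_width = t_star * s_p * np.sqrt(np.sum(a**2 / n))
ci_contrast = (c - half_width, c + half_width)

# Bonferroni-adjusted CI for mu_1 - mu_3, with m = 3 pairwise comparisons
m = 3
t_star2 = stats.t.ppf(1 - (1 - C / 100) / (2 * m), N - I)
diff = means[0] - means[2]
hw = t_star2 * s_p * np.sqrt(1 / n[0] + 1 / n[2])
ci_pair = (diff - hw, diff + hw)

# CI for a single group mean mu_1, using the pooled SD
ci_mu1 = (means[0] - t_star * s_p / np.sqrt(n[0]),
          means[0] + t_star * s_p / np.sqrt(n[0]))
print(ci_contrast, ci_pair, ci_mu1)
```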
Effect size
n.a.
Effect size
n.a.
Effect size
Complete model:
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$$
\begin{align}
R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\
&= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\
&= r(y, \hat{y})^2
\end{align}
$$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
Wherry's $R^2$ / shrunken $R^2$:
Corrects for the positive bias in $R^2$ and is equal to
$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$
Stein's $R^2$:
Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
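A short Python sketch computing $R^2$, Wherry's $R^2$, and Stein's $R^2$ for the made-up simple regression data used earlier:

```python
import numpy as np

# Quantities from the regression sketch above (one independent variable, made-up data)
x = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0, 11.0, 13.0])
y = np.array([3.1, 4.0, 5.2, 6.8, 7.9, 9.5, 10.1, 12.3])
N, K = len(y), 1
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
y_hat = y.mean() + b1 * (x - x.mean())

ss_model = np.sum((y_hat - y.mean())**2)
ss_total = np.sum((y - y.mean())**2)
R2 = ss_model / ss_total                                      # proportion explained in the sample

R2_wherry = 1 - (N - 1) / (N - K - 1) * (1 - R2)              # shrunken R^2
R2_stein = 1 - ((N - 1) * (N - 2) * (N + 1)) / \
               ((N - K - 1) * (N - K - 2) * N) * (1 - R2)     # expected R^2 in a new sample
print(R2, R2_wherry, R2_stein)
```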
Per independent variable:
Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
Semipartial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$

Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$

Proportion variance explained $\eta^2$ and $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variable:
$$
\begin{align}
\eta^2 = R^2
&= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}}
\end{align}
$$
Only in one way ANOVA does $\eta^2 = R^2$ hold. $\eta^2$ (and $R^2$) is the proportion of variance explained in the sample. It is a positively biased estimate of the proportion of variance explained in the population.
Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
$$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.
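A minimal Python sketch of $\eta^2$ and $\omega^2$, using the made-up groups from the ANOVA sketch:

```python
import numpy as np

# Same illustrative groups as in the ANOVA sketch (made-up numbers)
groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([6.5, 7.2, 6.8, 7.9, 7.1]),
          np.array([8.3, 9.1, 8.8, 9.5])]
I = len(groups)
N = sum(len(g) for g in groups)
overall_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - overall_mean)**2 for g in groups)
ss_error   = sum(((g - g.mean())**2).sum() for g in groups)
ss_total   = ss_between + ss_error
ms_error   = ss_error / (N - I)

eta2 = ss_between / ss_total
omega2 = (ss_between - (I - 1) * ms_error) / (ss_total + ms_error)
print(eta2, omega2)
```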
Cohen's $d$:
Standardized difference between the mean in group $g$ and in group $h$:
$$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$
Indicates how many standard deviations $s_p$ two sample means are removed from each other
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
$F$ test ANOVA equivalent to $F$ test regression model
$t$ test for contrast $i$ equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)
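This equivalence can be checked numerically; the sketch below (made-up groups, group 1 as reference category for the dummy coding) reproduces the one way ANOVA $F$ with an OLS regression on $I - 1$ dummy variables:

```python
import numpy as np
from scipy import stats

# Same illustrative groups as before (made-up numbers), recoded with I - 1 = 2
# dummy variables (group 1 = reference category)
groups = [np.array([4.0, 5.5, 6.1, 5.0]),
          np.array([6.5, 7.2, 6.8, 7.9, 7.1]),
          np.array([8.3, 9.1, 8.8, 9.5])]
y = np.concatenate(groups)
d2 = np.concatenate([np.zeros(4), np.ones(5), np.zeros(4)])   # dummy for group 2
d3 = np.concatenate([np.zeros(4), np.zeros(5), np.ones(4)])   # dummy for group 3
X = np.column_stack([np.ones(len(y)), d2, d3])                # intercept + 2 dummies

b, *_ = np.linalg.lstsq(X, y, rcond=None)                     # OLS estimates
y_hat = X @ b
N, K = len(y), 2
ss_model = np.sum((y_hat - y.mean())**2)
ss_error = np.sum((y - y_hat)**2)
F_regression = (ss_model / K) / (ss_error / (N - K - 1))

# Identical to the one way ANOVA F statistic for these three groups
print(F_regression, stats.f_oneway(*groups).statistic)
```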
Example context
Example context
Example context
Example context
Example context
Can mental health be predicted from physical health, economic class, and gender?
Is the proportion of smokers amongst office workers different from $\pi_0 = .2$?
Is the average difference between the mental health scores before and after an intervention different from $\mu_0$ = 0?
Is the median of the differences between the mental health scores before and after an intervention different from 0?
Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS
SPSS
SPSS
SPSS
SPSS
Analyze > Regression > Linear...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
Put the two paired variables in the boxes below Variable 1 and Variable 2
Under Test Type, select the Wilcoxon test
Analyze > Compare Means > One-Way ANOVA...
Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Regression > Linear Regression
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Frequencies > 2 Outcomes - Binomial test
Put your dichotomous variable in the white box at the right
Fill in the value for $\pi_0$ in the box next to Test value
Under Hypothesis, select your alternative hypothesis
T-Tests > Paired Samples T-Test
Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
Under Hypothesis, select your alternative hypothesis
T-Tests > Paired Samples T-Test
Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
Under Tests, select Wilcoxon rank
Under Hypothesis, select your alternative hypothesis
ANOVA > ANOVA
Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors