# Regression (OLS) - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking on the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

The methods compared here are:

1. Regression (OLS)
2. Goodness of fit test
3. One sample $z$ test for the mean
4. Kruskal-Wallis test
5. $z$ test for the difference between two proportions
6. Two way ANOVA
7. Friedman test
8. Spearman's rho
## Independent/grouping variable(s)

**Regression (OLS):** One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables

**Goodness of fit test:** None

**One sample $z$ test for the mean:** None

**Kruskal-Wallis test:** One categorical with $I$ independent groups ($I \geqslant 2$)

**$z$ test for the difference between two proportions:** One categorical with 2 independent groups

**Two way ANOVA:** Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)

**Friedman test:** One within subject factor ($\geq 2$ related groups)

**Spearman's rho:** Variable 1: one of ordinal level
## Dependent variable

**Regression (OLS):** One quantitative of interval or ratio level

**Goodness of fit test:** One categorical with $J$ independent groups ($J \geqslant 2$)

**One sample $z$ test for the mean:** One quantitative of interval or ratio level

**Kruskal-Wallis test:** One of ordinal level

**$z$ test for the difference between two proportions:** One categorical with 2 independent groups

**Two way ANOVA:** One quantitative of interval or ratio level

**Friedman test:** One of ordinal level

**Spearman's rho:** Variable 2: one of ordinal level
## Null hypothesis

**Regression (OLS)**
$F$ test for the complete regression model:
• H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalently
• H0: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
• H0: $\beta_k = 0$
in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\mu_y$ represents the population mean of the dependent variable $y$ given the scores on the independent variables.
**Goodness of fit test**

• H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
• H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$, the probability of drawing an observation from condition $J$ is $\pi_J$
**One sample $z$ test for the mean**

H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
**Kruskal-Wallis test**

If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
• H0: the population medians for the $I$ groups are equal
Else:
Formulation 1:
• H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups
Formulation 2:
• H0: P(an observation from population $g$ exceeds an observation from population $h$) = P(an observation from population $h$ exceeds an observation from population $g$), for each pair of groups.
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
**$z$ test for the difference between two proportions**

H0: $\pi_1 = \pi_2$

Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
**Two way ANOVA**

ANOVA $F$ tests:
• H0 for main and interaction effects together (model): no main effects and interaction effect
• H0 for independent variable A: no main effect for A
• H0 for independent variable B: no main effect for B
• H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
**Friedman test**

H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups

Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
**Spearman's rho**

H0: $\rho_s = 0$

Here $\rho_s$ is the Spearman correlation in the population. The Spearman correlation is a measure for the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level.

In words, the null hypothesis would be:

H0: there is no monotonic relationship between the two variables in the population.
## Alternative hypothesis

**Regression (OLS)**
$F$ test for the complete regression model:
• H1: not all population regression coefficients are 0
or equivalently
• H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
$t$ test for individual regression coefficient $\beta_k$:
• H1 two sided: $\beta_k \neq 0$
• H1 right sided: $\beta_k > 0$
• H1 left sided: $\beta_k < 0$
**Goodness of fit test**

• H1: the population proportions are not all as specified under the null hypothesis
or equivalently
• H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
**One sample $z$ test for the mean**

H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
**Kruskal-Wallis test**

If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
• H1: not all of the population medians for the $I$ groups are equal
Else:
Formulation 1:
• H1: the population scores in some groups are systematically higher or lower than the population scores in other groups
Formulation 2:
• H1: for at least one pair of groups:
P(an observation from population $g$ exceeds an observation from population $h$) $\neq$ P(an observation from population $h$ exceeds an observation from population $g$)
**$z$ test for the difference between two proportions**

H1 two sided: $\pi_1 \neq \pi_2$
H1 right sided: $\pi_1 > \pi_2$
H1 left sided: $\pi_1 < \pi_2$
**Two way ANOVA**

ANOVA $F$ tests:
• H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
• H1 for independent variable A: there is a main effect for A
• H1 for independent variable B: there is a main effect for B
• H1 for the interaction term: there is an interaction effect between A and B
**Friedman test**

H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups

**Spearman's rho**

H1 two sided: $\rho_s \neq 0$
H1 right sided: $\rho_s > 0$
H1 left sided: $\rho_s < 0$
## Assumptions

**Regression (OLS)**
• In the population, the residuals are normally distributed at each combination of values of the independent variables
• In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
• In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
• The residuals are independent of one another
Often ignored additional assumption:
• Variables are measured without error
Also pay attention to:
• Multicollinearity
• Outliers
**Goodness of fit test**

• Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
• Sample is a simple random sample from the population. That is, observations are independent of one another
**One sample $z$ test for the mean**

• Scores are normally distributed in the population
• Population standard deviation $\sigma$ is known
• Sample is a simple random sample from the population. That is, observations are independent of one another
**Kruskal-Wallis test**

• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
**$z$ test for the difference between two proportions**

• Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
  • Significance test: number of successes and number of failures are each 5 or more in both sample groups
  • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
  • Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
**Two way ANOVA**

• Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
• For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
• Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
**Friedman test**

• Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
**Spearman's rho**

• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.
## Test statistic

**Regression (OLS)**
$F$ test for the complete regression model:
• \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables.
$t$ test for individual $\beta_k$:
• $t = \dfrac{b_k}{SE_{b_k}}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ is more complicated.
Note 1: mean square model is also known as mean square regression, and mean square error is also known as mean square residual.
Note 2: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$
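To make this concrete, here is a minimal Python sketch (not part of the original overview) that runs both tests with statsmodels; the data and variable names are invented for illustration.

```python
# Minimal sketch: F test for the complete model and t tests per coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))              # two independent variables (made up)
y = 0.5 * X[:, 0] + rng.normal(size=100)   # dependent variable (made up)

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.fvalue, fit.f_pvalue)   # F test for the complete model (df: K and N - K - 1)
print(fit.tvalues, fit.pvalues)   # t and two sided p per coefficient (incl. intercept)
print(fit.bse)                    # standard errors SE_bk
```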
**Goodness of fit test**

$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells.
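A minimal Python sketch with scipy, using invented counts; `stats.chisquare` computes $X^2$ and its $p$ value.

```python
# Minimal sketch: chi-squared goodness of fit test (counts are made up).
from scipy import stats

observed = [18, 55, 27]                  # observed cell counts
n = sum(observed)
pi_0 = [0.2, 0.6, 0.2]                   # population proportions under H0
expected = [n * p for p in pi_0]         # expected cell counts: N * pi_j
x2, p = stats.chisquare(observed, f_exp=expected)  # df = J - 1
print(x2, p)
```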
**One sample $z$ test for the mean**

$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
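A minimal Python sketch of the computation, with invented scores and an assumed known $\sigma$.

```python
# Minimal sketch: one sample z test computed directly (numbers are made up).
import numpy as np
from scipy import stats

y = np.array([51.2, 49.8, 50.9, 52.1, 48.7, 51.5])
mu_0, sigma = 50, 3                        # H0 mean and known population sd
z = (y.mean() - mu_0) / (sigma / np.sqrt(len(y)))
p_two_sided = 2 * stats.norm.sf(abs(z))    # area in both tails
print(z, p_two_sided)
```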

**Kruskal-Wallis test**

$H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$

Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N (N + 1)} \times \sum \frac{R^2_i}{n_i}$ and then subtract $3(N + 1)$.

Note: if ties are present in the data, the formula for $H$ is more complicated.
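A minimal Python sketch with scipy, on invented groups; `stats.kruskal` also applies the tie correction mentioned in the note.

```python
# Minimal sketch: Kruskal-Wallis H test (groups are made up).
from scipy import stats

group1 = [2, 5, 6, 8]
group2 = [1, 3, 3, 7]
group3 = [4, 9, 9, 10]
h, p = stats.kruskal(group1, group2, group3)  # H with chi-squared approximation
print(h, p)
```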
**$z$ test for the difference between two proportions**

$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
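A minimal Python sketch of the pooled $z$ computation, with invented counts.

```python
# Minimal sketch: pooled z test for two proportions (counts are made up).
import numpy as np
from scipy import stats

x1, n1 = 45, 200   # successes and sample size, group 1
x2, n2 = 30, 220   # successes and sample size, group 2
p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```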
**Two way ANOVA**

For main and interaction effects together (model):
• $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
• $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
• $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
• $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
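A minimal Python sketch with statsmodels, using an invented balanced data set; `anova_lm` reports the $F$ tests per effect.

```python
# Minimal sketch: two way ANOVA F tests (data frame and factor names made up).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "y": [4, 6, 5, 7, 8, 9, 3, 5, 6, 8, 7, 10],
    "A": ["low", "low", "low", "high", "high", "high"] * 2,
    "B": ["m"] * 6 + ["f"] * 6,
})
model = ols("y ~ C(A) * C(B)", data=df).fit()  # main effects + interaction
print(model.fvalue, model.f_pvalue)            # F test for all effects together
print(sm.stats.anova_lm(model, typ=2))         # F and p for A, B, and A:B
```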

**Friedman test**

$Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$

Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.

Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.

Note: if ties are present in the data, the formula for $Q$ is more complicated.
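A minimal Python sketch with scipy, on invented scores for three measurement points.

```python
# Minimal sketch: Friedman test (scores are made up).
from scipy import stats

t1 = [7, 5, 8, 6, 9]   # pre-intervention
t2 = [5, 4, 7, 5, 7]   # 1 week post-intervention
t3 = [4, 3, 5, 4, 6]   # 6 weeks post-intervention
q, p = stats.friedmanchisquare(t1, t2, t3)  # chi-squared approximation, df = k - 1
print(q, p)
```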
**Spearman's rho**

$t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}}$
Here $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores.
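A minimal Python sketch with scipy, on invented scores; `stats.spearmanr` reports $r_s$ and a two sided $p$ value.

```python
# Minimal sketch: Spearman's rho and its significance test (data are made up).
from scipy import stats

physical = [3, 7, 5, 9, 6, 8, 2]
mental = [4, 6, 5, 9, 7, 7, 3]
r_s, p = stats.spearmanr(physical, mental)  # two sided p by default
print(r_s, p)
```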
## Sample standard deviation

**Regression (OLS): sample standard deviation of the residuals $s$**

\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}

**Two way ANOVA: pooled standard deviation $s_p$**

\begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}

n.a. for the other methods.
## Sampling distribution of the test statistic if H0 were true

**Regression (OLS)**
Sampling distribution of $F$:
• $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - K - 1$ (df error) degrees of freedom
**Goodness of fit test**

Approximately the chi-squared distribution with $J - 1$ degrees of freedom

**One sample $z$ test for the mean**

Standard normal distribution

**Kruskal-Wallis test**

For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom.

For small samples, the exact distribution of $H$ should be used.

**$z$ test for the difference between two proportions**

Approximately the standard normal distribution

**Two way ANOVA**

For main and interaction effects together (model):
• $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
• $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
• $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
• $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.


**Friedman test**

If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

For small samples, the exact distribution of $Q$ should be used.
**Spearman's rho**

Approximately the $t$ distribution with $N - 2$ degrees of freedom
## Significant?
**Regression (OLS)**

$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ test two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
**Goodness of fit test**

• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
**One sample $z$ test for the mean**

Two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
**Kruskal-Wallis test**

For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
**$z$ test for the difference between two proportions**

Two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
**Two way ANOVA**

• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
**Friedman test**

If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
**Spearman's rho**

Two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
## Confidence interval

**Regression (OLS): $C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$**
Confidence interval for $\beta_k$:
• $b_k \pm t^* \times SE_{b_k}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
• $\hat{y} \pm t^* \times SE_{\hat{y}}$
• If only one independent variable:
$SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
• $\hat{y} \pm t^* \times SE_{y_{new}}$
• If only one independent variable:
$SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
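A minimal Python sketch of these three intervals with statsmodels, on invented data; `conf_int` gives the coefficient intervals and `get_prediction` gives the intervals for $\mu_y$ and $y_{new}$.

```python
# Minimal sketch: coefficient CI, CI for mu_y, and prediction interval for
# y_new (data are made up).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 1.0 + 0.5 * x + rng.normal(size=50)
fit = sm.OLS(y, sm.add_constant(x)).fit()

print(fit.conf_int(alpha=0.05))                    # 95% CI for b_0 and b_1

x_star = sm.add_constant(np.array([-1.0, 0.0, 1.0]))
frame = fit.get_prediction(x_star).summary_frame(alpha=0.05)
print(frame[["mean_ci_lower", "mean_ci_upper"]])   # CI for mu_y
print(frame[["obs_ci_lower", "obs_ci_upper"]])     # prediction interval for y_new
```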
**One sample $z$ test for the mean: $C\%$ confidence interval for $\mu$**

$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as significance test.
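A minimal Python sketch of this interval, with invented numbers.

```python
# Minimal sketch: 95% z interval for the mean (numbers are made up).
import numpy as np
from scipy import stats

y_bar, sigma, n, c = 50.8, 3, 36, 0.95
z_star = stats.norm.ppf(1 - (1 - c) / 2)   # 1.96 for a 95% interval
half_width = z_star * sigma / np.sqrt(n)
print(y_bar - half_width, y_bar + half_width)
```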
**$z$ test for the difference between two proportions: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$**

Regular (large sample):
• $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
• $(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
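A minimal Python sketch of both intervals, with invented counts.

```python
# Minimal sketch: large sample and plus four intervals for pi_1 - pi_2
# (counts are made up).
import numpy as np
from scipy import stats

x1, n1, x2, n2 = 45, 200, 30, 220
z_star = stats.norm.ppf(0.975)             # 95% interval

# Regular (large sample) interval
p1, p2 = x1 / n1, x2 / n2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(p1 - p2 - z_star * se, p1 - p2 + z_star * se)

# Plus four interval
p1p, p2p = (x1 + 1) / (n1 + 2), (x2 + 1) / (n2 + 2)
se_p = np.sqrt(p1p * (1 - p1p) / (n1 + 2) + p2p * (1 - p2p) / (n2 + 2))
print(p1p - p2p - z_star * se_p, p1p - p2p + z_star * se_p)
```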
n.a. for the other methods.
## Effect size

**Regression (OLS)**
Complete model:
• Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
\begin{align} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{align}
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
• Wherry's $R^2$ / shrunken $R^2$:
Corrects for the positive bias in $R^2$ and is equal to $$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2.$
• Stein's $R^2$:
Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to $$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$ Both shrinkage corrections are sketched in code after the list below.
Per independent variable:
• Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
• Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
• Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
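A minimal Python sketch (with invented values) of the Wherry and Stein corrections from the 'Complete model' list above.

```python
# Minimal sketch: Wherry and Stein corrections applied to a sample R^2
# (the inputs are made up).
def wherry_r2(r2, n, k):
    """Shrunken R^2: less biased estimate of rho^2."""
    return 1 - (n - 1) / (n - k - 1) * (1 - r2)

def stein_r2(r2, n, k):
    """Expected cross-sample R^2 for the current regression equation."""
    factor = ((n - 1) * (n - 2) * (n + 1)) / ((n - k - 1) * (n - k - 2) * n)
    return 1 - factor * (1 - r2)

print(wherry_r2(0.35, n=100, k=3), stein_r2(0.35, n=100, k=3))
```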
**One sample $z$ test for the mean**

Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
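A one-line Python sketch with invented numbers.

```python
# Minimal sketch: Cohen's d for the one sample z test (numbers are made up).
d = (50.8 - 50) / 3   # (sample mean - mu_0) / sigma
print(d)              # about 0.27 standard deviations above mu_0
```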
**Two way ANOVA**
• Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
\begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

• Proportion variance explained $\eta^2$:
Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
\begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

• Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to:
\begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

• Proportion variance explained $\eta^2_{partial}$: \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align}
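A minimal Python sketch computing these three effect sizes for a single effect, from invented sums of squares.

```python
# Minimal sketch: eta^2, omega^2, and partial eta^2 for one effect
# (the sums of squares are made up).
def effect_sizes(ss_effect, df_effect, ss_error, ss_total, ms_error):
    eta2 = ss_effect / ss_total
    omega2 = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
    eta2_partial = ss_effect / (ss_effect + ss_error)
    return eta2, omega2, eta2_partial

print(effect_sizes(ss_effect=30.0, df_effect=2, ss_error=120.0,
                   ss_total=200.0, ms_error=2.0))
```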
n.a. for the other methods.
## Visual representation

**Regression (OLS):** regression equations (figures omitted)

**One sample $z$ test for the mean:** (figure omitted)

n.a. for the other methods.
## ANOVA table

Available for Regression (OLS) and Two way ANOVA (tables omitted); n.a. for the other methods.
## Equivalent to

**$z$ test for the difference between two proportions:** when testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels.

**Two way ANOVA:** OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.

n.a. for the other methods.
## Example context

**Regression (OLS):** Can mental health be predicted from physical health, economic class, and gender?

**Goodness of fit test:** Is the proportion of people with a low, moderate, and high social economic status in the population different from $\pi_{low} = 0.2$, $\pi_{moderate} = 0.6$, and $\pi_{high} = 0.2$?

**One sample $z$ test for the mean:** Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.

**Kruskal-Wallis test:** Do people from different religions tend to score differently on social economic status?

**$z$ test for the difference between two proportions:** Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.

**Two way ANOVA:** Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?

**Friedman test:** Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?

**Spearman's rho:** Is there a monotonic relationship between physical health and mental health?
## SPSS

**Regression (OLS)**
Analyze > Regression > Linear...
• Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
**Goodness of fit test**

Analyze > Nonparametric Tests > Legacy Dialogs > Chi-square...
• Put your categorical variable in the box below Test Variable List
• Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
**One sample $z$ test for the mean:** n.a.

**Kruskal-Wallis test**

Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
• Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Range... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the smallest value you have used to indicate your groups in the box next to Minimum, and the largest value you have used to indicate your groups in the box next to Maximum
• Continue and click OK
**$z$ test for the difference between two proportions**

SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Analyze > Descriptive Statistics > Crosstabs...
• Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK
**Two way ANOVA**

Analyze > General Linear Model > Univariate...
• Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
**Friedman test**

Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
• Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
• Under Test Type, select the Friedman test
**Spearman's rho**

Analyze > Correlate > Bivariate...
• Put your two variables in the box below Variables
• Under Correlation Coefficients, select Spearman
## Jamovi

**Regression (OLS)**
Regression > Linear Regression
• Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
• If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
• Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
**Goodness of fit test**

Frequencies > N Outcomes - $\chi^2$ Goodness of fit
• Put your categorical variable in the box below Variable
• Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
**One sample $z$ test for the mean:** n.a.

**Kruskal-Wallis test**

ANOVA > One Way ANOVA - Kruskal-Wallis
• Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
**$z$ test for the difference between two proportions**

Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Frequencies > Independent Samples - $\chi^2$ test of association
• Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
**Two way ANOVA**

ANOVA > ANOVA
• Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
**Friedman test**

ANOVA > Repeated Measures ANOVA - Friedman
• Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
**Spearman's rho**

Regression > Correlation Matrix
• Put your two variables in the white box at the right
• Under Correlation Coefficients, select Spearman
• Under Hypothesis, select your alternative hypothesis