Regression (OLS) - overview

This page offers structured overviews of the selected methods, presented side by side for comparison. The methods covered here are:

• Regression (OLS)
• Chi-squared test for the relationship between two categorical variables
• Mann-Whitney-Wilcoxon test
Independent variable(s)

Regression (OLS): One or more quantitative variables of interval or ratio level, and/or one or more categorical variables with independent groups, transformed into code variables
Chi-squared test: One categorical variable with $I$ independent groups ($I \geqslant 2$) (the independent/column variable)
Mann-Whitney-Wilcoxon test: One categorical variable with 2 independent groups

Dependent variable

Regression (OLS): One quantitative variable of interval or ratio level
Chi-squared test: One categorical variable with $J$ independent groups ($J \geqslant 2$) (the dependent/row variable)
Mann-Whitney-Wilcoxon test: One variable of ordinal level
Null hypothesis

Regression (OLS):
$F$ test for the complete regression model:
• $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalently
• The variance explained by all the independent variables together (the complete model) is 0 in the population: $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
• $\beta_k = 0$
in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$
Chi-squared test:
• There is no association between the row and column variable
More precise statement:
• If there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
The distribution of the dependent variable is the same in each of the $I$ populations
• If there is one random sample of size $N$ from the total population:
The row and column variables are independent
Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
• The median in population 1 is equal to the median in population 2
Else:
Formulation 1:
• The scores in population 1 are not systematically higher or lower than the scores in population 2
Formulation 2:
• P(an observation from population 1 exceeds an observation from population 2) = P(an observation from population 2 exceeds an observation from population 1)
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one given in your textbook or by your teacher.
Alternative hypothesis

Regression (OLS):
$F$ test for the complete regression model:
• Not all population regression coefficients are 0
or equivalently
• The variance explained by all the independent variables together (the complete model) is larger than 0 in the population: $\rho^2 > 0$
$t$ test for individual $\beta_k$:
• Two sided: $\beta_k \neq 0$
• Right sided: $\beta_k > 0$
• Left sided: $\beta_k < 0$
Chi-squared test:
• There is an association between the row and column variable
More precise statement:
• If there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
The distribution of the dependent variable is not the same in all of the $I$ populations
• If there is one random sample of size $N$ from the total population:
The row and column variables are dependent
Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
• Two sided: the median in population 1 is not equal to the median in population 2
• Right sided: the median in population 1 is larger than the median in population 2
• Left sided: the median in population 1 is smaller than the median in population 2
Else:
Formulation 1:
• Two sided: The scores in population 1 are systematically higher or lower than the scores in population 2
• Right sided: The scores in population 1 are systematically higher than the scores in population 2
• Left sided: The scores in population 1 are systematically lower than the scores in population 2
Formulation 2:
• Two sided: P(an observation from population 1 exceeds an observation from population 2) $\neq$ P(an observation from population 2 exceeds an observation from population 1)
• Right sided: P(an observation from population 1 exceeds an observation from population 2) > P(an observation from population 2 exceeds an observation from population 1)
• Left sided: P(an observation from population 1 exceeds an observation from population 2) < P(an observation from population 2 exceeds an observation from population 1)
Assumptions

Regression (OLS):
• In the population, the residuals are normally distributed at each combination of values of the independent variables
• In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
• In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
• The residuals are independent of one another
• Variables are measured without error
Also pay attention to (a diagnostics sketch in Python follows this list):
• Multicollinearity
• Outliers
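A minimal Python sketch of such diagnostic checks, using scipy and statsmodels on synthetic data (the variable names x1, x2, y are invented for illustration):

```python
# Minimal sketch: residual and collinearity diagnostics for OLS (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import shapiro
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
N = 100
df = pd.DataFrame({"x1": rng.normal(size=N), "x2": rng.normal(size=N)})
df["y"] = 1.0 + 0.5 * df["x1"] + 0.3 * df["x2"] + rng.normal(size=N)

res = smf.ols("y ~ x1 + x2", data=df).fit()

# Normality of the residuals (also worth inspecting with a Q-Q plot)
print(shapiro(res.resid))

# Multicollinearity: variance inflation factor per predictor
X = sm.add_constant(df[["x1", "x2"]])
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))

# Outliers / influential cases: externally studentized residuals
print(res.get_influence().resid_studentized_external[:5])
```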
Chi-squared test:
• Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis. Rule of thumb:
• 2 $\times$ 2 table: all four expected cell counts are 5 or more
• Larger than 2 $\times$ 2 tables: average of the expected cell counts is 5 or more, smallest expected cell count is 1 or more
• There are $I$ independent simple random samples from each of $I$ populations defined by the independent variable, or there is one simple random sample from the total population
Mann-Whitney-Wilcoxon test:
• Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic

Regression (OLS):
$F$ test for the complete regression model:
• \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables
$t$ test for individual $\beta_k$:
• $t = \dfrac{b_k}{SE_{b_k}}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
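As an illustration of these formulas, here is a minimal sketch in Python using statsmodels; the data and variable names (x1, x2, y) are synthetic, made up for demonstration:

```python
# Minimal sketch: OLS F test and t tests with statsmodels (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
N = 100
df = pd.DataFrame({"x1": rng.normal(size=N), "x2": rng.normal(size=N)})
df["y"] = 1.0 + 0.5 * df["x1"] + 0.3 * df["x2"] + rng.normal(size=N)

res = smf.ols("y ~ x1 + x2", data=df).fit()

K = 2                                # number of independent variables
print(res.fvalue, res.f_pvalue)      # F test for the complete model, df = (K, N - K - 1)
print(res.tvalues, res.pvalues)      # t = b_k / SE(b_k), with two sided p values
print(np.sqrt(res.mse_resid))        # s, the sample standard deviation of the residuals
```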
Chi-squared test:
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
where for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells
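A minimal Python sketch of this computation with scipy; the 2 × 3 contingency table is invented for illustration:

```python
# Minimal sketch: chi-squared test of association with scipy (invented 2 x 3 table)
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 20, 10],
                     [20, 25, 15]])   # I x J table of observed cell counts

# Note: for 2 x 2 tables, scipy applies Yates' continuity correction by default
chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)           # X^2, p value, (I - 1) * (J - 1) degrees of freedom
print(expected)               # row total * column total / total sample size, per cell
print((expected >= 5).all())  # quick check of the rule of thumb for expected counts
```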
Mann-Whitney-Wilcoxon test:
Two different types of test statistics can be used; both will result in the same test outcome. The first is the Wilcoxon rank sum statistic $W$:
• Rank all $n_1 + n_2$ observations of the combined sample from smallest to largest; $W$ is the sum of the ranks of the observations in group 1
The second type of test statistic is the Mann-Whitney $U$ statistic:
• $U = W - \dfrac{n_1(n_1 + 1)}{2}$
where $n_1$ is the sample size of group 1

Note: we could just as well base $W$ and $U$ on group 2. This would only 'flip' the right and left sided alternative hypotheses. Also, tables with critical values for $U$ are often based on the smaller of the $U$ values for group 1 and group 2.
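A minimal Python sketch computing $W$ and $U$ by hand and checking them against scipy; the scores are made up and contain no ties:

```python
# Minimal sketch: Wilcoxon rank sum W and Mann-Whitney U (made-up scores, no ties)
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

group1 = np.array([12.1, 14.3, 9.8, 15.2, 11.0])
group2 = np.array([8.4, 10.5, 7.9, 9.1, 13.0, 6.5])
n1 = len(group1)

ranks = rankdata(np.concatenate([group1, group2]))  # rank the combined sample
W = ranks[:n1].sum()                                # sum of the ranks in group 1
U = W - n1 * (n1 + 1) / 2
print(W, U)

# scipy reports U based on the first sample passed in
print(mannwhitneyu(group1, group2, alternative="two-sided"))
```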
Sample standard deviation of the residuals $s$

Regression (OLS) only (n.a. for the chi-squared and Mann-Whitney-Wilcoxon tests):
\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}
Sampling distribution of the test statistic if H0 were true

Regression (OLS):
Sampling distribution of $F$:
• $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - K - 1$ (df error) degrees of freedom
Chi-squared test:
Approximately a chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom

Mann-Whitney-Wilcoxon test:
Sampling distribution of $W$:
For large samples, $W$ is approximately normally distributed with mean $\mu_W$ and standard deviation $\sigma_W$ if the null hypothesis were true. Here \begin{aligned} \mu_W &= \dfrac{n_1(n_1 + n_2 + 1)}{2}\\ \sigma_W &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} Hence, for large samples, the standardized test statistic $$z_W = \dfrac{W - \mu_W}{\sigma_W}$$ follows approximately a standard normal distribution if the null hypothesis were true. Note that if your $W$ value is based on group 2, $\mu_W$ becomes $\frac{n_2(n_1 + n_2 + 1)}{2}$.

Sampling distribution of $U$:
For large samples, $U$ is approximately normally distributed with mean $\mu_U$ and standard deviation $\sigma_U$ if the null hypothesis were true. Here \begin{aligned} \mu_U &= \dfrac{n_1 n_2}{2}\\ \sigma_U &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} Hence, for large samples, the standardized test statistic $$z_U = \dfrac{U - \mu_U}{\sigma_U}$$ follows approximately a standard normal distribution if the null hypothesis were true.

For small samples, the exact distribution of $W$ or $U$ should be used.

Note: the formula for the standard deviations $\sigma_W$ and $\sigma_U$ is more complicated if ties are present in the data.
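The large-sample normal approximation for $U$ can be sketched in Python as follows; the group sizes and observed $U$ are hypothetical, and no tie correction is applied:

```python
# Minimal sketch: large-sample normal approximation for U (no tie correction)
import numpy as np
from scipy.stats import norm

n1, n2 = 25, 30      # hypothetical group sizes
U = 220.0            # hypothetical observed U for group 1

mu_U = n1 * n2 / 2
sigma_U = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z_U = (U - mu_U) / sigma_U

p_two_sided = 2 * norm.sf(abs(z_U))   # two sided p value under the standard normal
print(z_U, p_two_sided)
```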
Significant?

Regression (OLS):
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ test two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ (a negative value) or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
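These decision rules can be sketched in Python with scipy's distribution functions; the observed test statistics below are hypothetical:

```python
# Minimal sketch: critical values and p values for the F and t tests (hypothetical values)
from scipy.stats import f, t

N, K = 100, 2
alpha = 0.05
F_obs, t_obs = 4.8, 2.3     # hypothetical observed test statistics

F_crit = f.ppf(1 - alpha, K, N - K - 1)    # critical value F*
p_F = f.sf(F_obs, K, N - K - 1)            # right tail p value for F

t_crit = t.ppf(1 - alpha / 2, N - K - 1)   # critical value t* for a two sided test
p_t = 2 * t.sf(abs(t_obs), N - K - 1)      # two sided p value for t

print(F_obs >= F_crit, p_F <= alpha)
print(abs(t_obs) >= t_crit, p_t <= alpha)
```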
Chi-squared test:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Mann-Whitney-Wilcoxon test:
For large samples, the table for standard normal probabilities can be used:
Two sided:
• Check if $z$ observed in sample is at least as extreme as critical value $z^*$ or
• Find two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $z$ observed in sample is equal to or larger than critical value $z^*$ or
• Find right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $z$ observed in sample is equal to or smaller than critical value $z^*$ (a negative value) or
• Find left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$

Regression (OLS) only (n.a. for the chi-squared and Mann-Whitney-Wilcoxon tests):
Confidence interval for $\beta_k$:
• $b_k \pm t^* \times SE_{b_k}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
• $\hat{y} \pm t^* \times SE_{\hat{y}}$
• If only one independent variable:
$SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
• $\hat{y} \pm t^* \times SE_{y_{new}}$
• If only one independent variable:
$SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
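In Python, statsmodels can produce these intervals directly; a minimal sketch with synthetic data and one independent variable (x, y, and the prediction point are invented for illustration):

```python
# Minimal sketch: confidence and prediction intervals with statsmodels (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"x": rng.normal(size=50)})
df["y"] = 2.0 + 0.8 * df["x"] + rng.normal(size=50)

res = smf.ols("y ~ x", data=df).fit()
print(res.conf_int(alpha=0.05))      # 95% confidence intervals for beta_0 and beta_1

new = pd.DataFrame({"x": [1.5]})     # value x* at which to predict
pred = res.get_prediction(new).summary_frame(alpha=0.05)
print(pred[["mean_ci_lower", "mean_ci_upper"]])  # confidence interval for mu_y
print(pred[["obs_ci_lower", "obs_ci_upper"]])    # prediction interval for y_new
```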
Effect size

Regression (OLS) only (n.a. for the chi-squared and Mann-Whitney-Wilcoxon tests):
Complete model:
• Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
\begin{aligned} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{aligned}
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
• Wherry's $R^2$ / shrunken $R^2$:
Corrects for the positive bias in $R^2$ and is equal to $$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$
• Stein's $R^2$:
Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to $$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
Per independent variable:
• Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
• Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
• Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
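A minimal sketch computing the complete-model effect sizes above from a fitted model, on synthetic data:

```python
# Minimal sketch: R^2, Wherry's (shrunken) R^2, and Stein's R^2 (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
N, K = 80, 2
df = pd.DataFrame({"x1": rng.normal(size=N), "x2": rng.normal(size=N)})
df["y"] = 1.0 + 0.6 * df["x1"] + 0.2 * df["x2"] + rng.normal(size=N)

res = smf.ols("y ~ x1 + x2", data=df).fit()
R2 = res.rsquared

R2_wherry = 1 - (N - 1) / (N - K - 1) * (1 - R2)
R2_stein = 1 - ((N - 1) * (N - 2) * (N + 1)) / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)
print(R2, R2_wherry, R2_stein)
print(res.rsquared_adj)   # statsmodels' adjusted R^2 equals Wherry's R^2
```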
ANOVA table

Regression (OLS) only (n.a. for the chi-squared and Mann-Whitney-Wilcoxon tests). The quantities defined above fit together in the usual ANOVA table:

Source | Sum of squares | df | Mean square | $F$
Model (regression) | $\sum (\hat{y}_j - \bar{y})^2$ | $K$ | SS model / df model | MS model / MS error
Error (residual) | $\sum (y_j - \hat{y}_j)^2$ | $N - K - 1$ | SS error / df error |
Total | $\sum (y_j - \bar{y})^2$ | $N - 1$ | |
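A minimal numpy sketch showing how these table entries decompose, using a fitted model's predictions on synthetic data:

```python
# Minimal sketch: building the ANOVA table entries by hand (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
N, K = 60, 2
df = pd.DataFrame({"x1": rng.normal(size=N), "x2": rng.normal(size=N)})
df["y"] = 0.5 + 0.4 * df["x1"] - 0.3 * df["x2"] + rng.normal(size=N)

res = smf.ols("y ~ x1 + x2", data=df).fit()
y, y_hat = df["y"].to_numpy(), res.fittedvalues.to_numpy()

ss_model = np.sum((y_hat - y.mean()) ** 2)
ss_error = np.sum((y - y_hat) ** 2)
ss_total = np.sum((y - y.mean()) ** 2)   # equals ss_model + ss_error

F = (ss_model / K) / (ss_error / (N - K - 1))
print(F, res.fvalue)   # the two should agree
```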
Equivalent to

Mann-Whitney-Wilcoxon test only: if there are no ties in the data, the two sided Mann-Whitney-Wilcoxon test is equivalent to the Kruskal-Wallis test with an independent variable with 2 levels ($I = 2$)
Example context

Regression (OLS): Can mental health be predicted from physical health, economic class, and gender?
Chi-squared test: Is there an association between economic class and gender? Is the distribution of economic class different between men and women?
Mann-Whitney-Wilcoxon test: Do men tend to score higher on socioeconomic status than women?
SPSS

Regression (OLS):
Analyze > Regression > Linear...
• Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
Chi-squared test:
Analyze > Descriptive Statistics > Crosstabs...
• Put one of your two categorical variables in the box below Row(s), and the other categorical variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK
Mann-Whitney-Wilcoxon test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples...
• Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK
Jamovi

Regression (OLS):
Regression > Linear Regression
• Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
• If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
• Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Chi-squared test:
Frequencies > Independent Samples - $\chi^2$ test of association
• Put one of your two categorical variables in the box below Rows, and the other categorical variable in the box below Columns
Mann-Whitney-Wilcoxon test:
T-Tests > Independent Samples T-Test
• Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Mann-Whitney U
• Under Hypothesis, select your alternative hypothesis
Practice questions