Regression (OLS) - overview

This page gives a structured, side-by-side overview of the following four methods.

Regression (OLS)
Two way ANOVA
Pearson correlation
Two sample $z$ test
Independent variables | Independent/grouping variables | Variable 1 | Independent/grouping variable
  • Regression (OLS): one or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
  • Two way ANOVA: two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
  • Pearson correlation: one quantitative of interval or ratio level
  • Two sample $z$ test: one categorical with 2 independent groups
Dependent variable | Dependent variable | Variable 2 | Dependent variable
  • All four methods: one quantitative of interval or ratio level
Null hypothesis
$F$ test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
    or equivalently
  • H0: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
in the regression equation $ \mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\mu_y$ represents the population mean of the dependent variable $ y$ given the scores on the independent variables.
ANOVA $F$ tests:
  • H0 for main and interaction effects together (model): no main effects and interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
H0: $\rho = \rho_0$

Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure for the strength and direction of the linear relationship between two variables of at least interval measurement level.
H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis
$F$ test for the complete regression model:
  • H1: not all population regression coefficients are 0
    or equivalently
  • H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
$t$ test for individual regression coefficient $\beta_k$:
  • H1 two sided: $\beta_k \neq 0$
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$
ANOVA $F$ tests:
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B
H1 two sided: $\rho \neq \rho_0$
H1 right sided: $\rho > \rho_0$
H1 left sided: $\rho < \rho_0$
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Assumptions | Assumptions | Assumptions of test for correlation | Assumptions
  • In the population, the residuals are normally distributed at each combination of values of the independent variables
  • In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
  • In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
  • In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
  • Within each population, the scores on the dependent variable are normally distributed
  • Population standard deviations $\sigma_1$ and $\sigma_2$ are known
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic
$F$ test for the complete regression model:
  • $ \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned} $
    where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables.
$t$ test for individual $\beta_k$:
  • $t = \dfrac{b_k}{SE_{b_k}}$
    • If only one independent variable:
      $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
      with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ is more complicated.
Note 1: mean square model is also known as mean square regression, and mean square error is also known as mean square residual.
Note 2: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$
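As an illustration (not part of the original overview; the data are made up), the following Python sketch computes the model $F$ and the coefficient $t$ for a simple regression ($K = 1$) directly from the formulas above, and confirms that $F = t^2$ in that case (Note 2):

```python
# Minimal sketch: F test and t test for a simple regression (K = 1),
# computed from the formulas above. Data are illustrative.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.2])
N, K = len(y), 1

# OLS estimates
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ss_model = np.sum((y_hat - y.mean()) ** 2)       # sum of squares model
ss_error = np.sum((y - y_hat) ** 2)              # sum of squares error
F = (ss_model / K) / (ss_error / (N - K - 1))    # mean square model / mean square error

s = np.sqrt(ss_error / (N - K - 1))              # sample SD of the residuals
se_b1 = s / np.sqrt(np.sum((x - x.mean()) ** 2))
t = b1 / se_b1                                   # t for the individual coefficient

print(F, t ** 2)                                 # identical when K = 1
```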
For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
Note: mean square error is also known as mean square residual or mean square within.
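For illustration only (the data frame, factor names, and scores below are invented), these $F$ ratios can be obtained in Python with statsmodels, assuming that library is available:

```python
# Sketch of the two way ANOVA F tests via statsmodels; data are illustrative.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,            # factor A with I = 2 groups
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,      # factor B with J = 2 groups
    "y": [3.1, 4.0, 3.5, 5.2, 6.1, 5.8,
          3.3, 4.2, 3.7, 5.0, 6.4, 5.6],     # dependent variable
})

model = ols("y ~ C(A) * C(B)", data=df).fit()
# anova_lm reports sum of squares, df, F, and p value per term
print(sm.stats.anova_lm(model, typ=2))
```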
Test statistic for testing H0: $\rho = 0$:
  • $t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}} $
    where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values for $\rho$ other than $\rho = 0$:
  • $z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$
    • $r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$, where $r$ is the sample correlation
    • $\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg )$, where $\rho_0$ is the population correlation according to H0
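As a hedged sketch (the data and $\rho_0$ are illustrative), both test statistics can be computed in Python; note that np.arctanh is exactly the Fisher transformation $\frac{1}{2}\log\frac{1+r}{1-r}$ used above:

```python
# Sketch: t test for H0: rho = 0 and Fisher-z test for H0: rho = rho_0.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 2.1, 3.8, 4.0, 4.9, 6.1, 6.0, 7.2])
N = len(x)

r = np.corrcoef(x, y)[0, 1]                    # sample Pearson correlation
t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)   # t statistic for H0: rho = 0
p_t = 2 * stats.t.sf(abs(t), df=N - 2)         # two sided p value

rho0 = 0.5                                     # hypothetical value under H0
z = (np.arctanh(r) - np.arctanh(rho0)) / np.sqrt(1 / (N - 3))
p_z = 2 * stats.norm.sf(abs(z))                # two sided p value
print(r, t, p_t, z, p_z)
```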
$z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
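A minimal sketch with made-up numbers (the known population standard deviations are assumptions of this example, not given by the overview):

```python
# Sketch: two sample z test with known population standard deviations.
import numpy as np
from scipy import stats

y1 = np.array([4.2, 5.1, 3.8, 4.9, 5.5])   # sample scores, group 1
y2 = np.array([5.8, 6.1, 5.2, 6.4, 5.9])   # sample scores, group 2
sigma1, sigma2 = 2.0, 2.5                  # known population SDs (illustrative)

se = np.sqrt(sigma1 ** 2 / len(y1) + sigma2 ** 2 / len(y2))
z = (y1.mean() - y2.mean()) / se
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```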
Sample standard deviation of the residuals $s$ | Pooled standard deviation | n.a. | n.a.
Regression (OLS):
  • $\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$
Two way ANOVA:
  • $\begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$
Sampling distribution of $F$ and of $t$ if H0 were true | Sampling distribution of $F$ if H0 were true | Sampling distribution of $t$ and of $z$ if H0 were true | Sampling distribution of $z$ if H0 were true
Sampling distribution of $F$:
  • $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
  • $t$ distribution with $N - K - 1$ (df error) degrees of freedom
For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
Here $N$ is the total sample size.
Sampling distribution of $t$:
  • $t$ distribution with $N - 2$ degrees of freedom
Sampling distribution of $z$:
  • Approximately the standard normal distribution
Standard normal distribution
Significant?
$F$ test for the complete regression model:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ test for individual regression coefficient, two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
ANOVA $F$ tests:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
Pearson correlation $t$ test (H0: $\rho = 0$) and $z$ test (H0: $\rho = \rho_0$), and two sample $z$ test:
  • Two sided: check if $t$ (or $z$) observed in sample is at least as extreme as critical value $t^*$ (or $z^*$), or find the two sided $p$ value corresponding to the observed statistic and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $t$ (or $z$) observed in sample is equal to or larger than the critical value, or find the right sided $p$ value and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $t$ (or $z$) observed in sample is equal to or smaller than the critical value, or find the left sided $p$ value and check if it is equal to or smaller than $\alpha$
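A short scipy-based sketch of these decision rules (the degrees of freedom and observed statistics are illustrative): compare the observed statistic against the critical value, or compare the $p$ value against $\alpha$:

```python
# Sketch: critical values and p values under the sampling distributions above.
from scipy import stats

alpha = 0.05

# F test, e.g. a regression model with K = 2 and N = 53 (df = 2 and 50)
F_obs = 4.1
F_crit = stats.f.isf(alpha, dfn=2, dfd=50)
p_F = stats.f.sf(F_obs, dfn=2, dfd=50)

# two sided t test, e.g. df = 50
t_obs = 2.3
t_crit = stats.t.isf(alpha / 2, df=50)
p_t = 2 * stats.t.sf(abs(t_obs), df=50)

# two sided z test
z_obs = 1.7
z_crit = stats.norm.isf(alpha / 2)
p_z = 2 * stats.norm.sf(abs(z_obs))

print(F_obs >= F_crit, p_F <= alpha)
print(abs(t_obs) >= t_crit, p_t <= alpha)
print(abs(z_obs) >= z_crit, p_z <= alpha)
```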
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$ | n.a. | Approximate $C$% confidence interval for $\rho$ | $C\%$ confidence interval for $\mu_1 - \mu_2$
Confidence interval for $\beta_k$:
  • $b_k \pm t^* \times SE_{b_k}$
    • If only one independent variable:
      $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
  • $\hat{y} \pm t^* \times SE_{\hat{y}}$
    • If only one independent variable:
      $SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
  • $\hat{y} \pm t^* \times SE_{y_{new}}$
    • If only one independent variable:
      $SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
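The sketch below (simple regression only; the data and $x^*$ are illustrative) computes all three intervals from these formulas:

```python
# Sketch: CI for b1, CI for mu_y at x*, and prediction interval for y_new at x*.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.2])
N, K, C = len(y), 1, 95

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (N - K - 1))  # residual SD
t_star = stats.t.isf((1 - C / 100) / 2, df=N - K - 1)        # critical t value
ssx = np.sum((x - x.mean()) ** 2)

se_b1 = s / np.sqrt(ssx)
ci_b1 = (b1 - t_star * se_b1, b1 + t_star * se_b1)

x_star = 3.5                                                 # illustrative x value
y_hat = b0 + b1 * x_star
se_mean = s * np.sqrt(1 / N + (x_star - x.mean()) ** 2 / ssx)
se_new = s * np.sqrt(1 + 1 / N + (x_star - x.mean()) ** 2 / ssx)
ci_mu_y = (y_hat - t_star * se_mean, y_hat + t_star * se_mean)
pi_y_new = (y_hat - t_star * se_new, y_hat + t_star * se_new)
print(ci_b1, ci_mu_y, pi_y_new)
```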
First compute the approximate $C$% confidence interval for $\rho_{Fisher}$:
  • $lower_{Fisher} = r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}}$
  • $upper_{Fisher} = r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get the approximate $C$% confidence interval for $\rho$:
  • lower bound = $\dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
  • upper bound = $\dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
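For example ($r$ and $N$ below are illustrative), in Python; note that np.tanh is exactly the back-transformation $\frac{e^{2u} - 1}{e^{2u} + 1}$ in the lower and upper bound formulas:

```python
# Sketch: approximate C% confidence interval for rho via Fisher's transformation.
import numpy as np
from scipy import stats

r, N, C = 0.45, 40, 95
z_star = stats.norm.isf((1 - C / 100) / 2)   # 1.96 for C = 95

r_fisher = np.arctanh(r)
half_width = z_star * np.sqrt(1 / (N - 3))
lower = np.tanh(r_fisher - half_width)       # transform bounds back to the r scale
upper = np.tanh(r_fisher + half_width)
print(lower, upper)
```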
$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
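A minimal sketch with made-up summary statistics:

```python
# Sketch: C% confidence interval for mu_1 - mu_2 with known population SDs.
import numpy as np
from scipy import stats

mean1, mean2 = 5.1, 5.9      # sample means (illustrative)
n1, n2 = 30, 35              # sample sizes (illustrative)
sigma1, sigma2 = 2.0, 2.5    # known population SDs (illustrative)
C = 95

z_star = stats.norm.isf((1 - C / 100) / 2)
se = np.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)
diff = mean1 - mean2
print(diff - z_star * se, diff + z_star * se)
```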
Effect size | Effect size | Properties of the Pearson correlation coefficient | n.a.
Complete model:
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
    $$ \begin{align} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{align} $$
    $R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
  • Wherry's $R^2$ / shrunken $R^2$:
    Corrects for the positive bias in $R^2$ and is equal to $$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
    $R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2.$
  • Stein's $R^2$:
    Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to $$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
Per independent variable:
  • Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
  • Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
  • Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
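As a sketch (the sums of squares, $N$, and $K$ below are illustrative), $R^2$ and its bias-corrected variants follow directly from the formulas above:

```python
# Sketch: R squared, Wherry's (shrunken) R squared, and Stein's R squared.
ss_total = 120.0        # sum of squares total (illustrative)
ss_error = 48.0         # sum of squares error (illustrative)
N, K = 60, 3            # sample size and number of independent variables

R2 = 1 - ss_error / ss_total
R2_wherry = 1 - (N - 1) / (N - K - 1) * (1 - R2)
R2_stein = 1 - ((N - 1) * (N - 2) * (N + 1)) / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)
print(R2, R2_wherry, R2_stein)
```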
  • Proportion variance explained $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$ \begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align} $$ $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\eta^2$:
    Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$ \begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align} $$ $\eta^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to:
    $$ \begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align} $$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).

  • Proportion variance explained $\eta^2_{partial}$: $$ \begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align} $$
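A small sketch (illustrative sums of squares from a balanced design) computing the three effect size measures for factor A; B and the interaction follow the same pattern:

```python
# Sketch: eta squared, omega squared, and partial eta squared for factor A.
ss_A, df_A = 30.0, 2           # sum of squares and df for factor A (illustrative)
ss_error, df_error = 80.0, 54  # sum of squares and df error (illustrative)
ss_total = 150.0               # sum of squares total (illustrative)
ms_error = ss_error / df_error

eta2_A = ss_A / ss_total
omega2_A = (ss_A - df_A * ms_error) / (ss_total + ms_error)
partial_eta2_A = ss_A / (ss_A + ss_error)
print(eta2_A, omega2_A, partial_eta2_A)
```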
  • The Pearson correlation coefficient is a measure for the linear relationship between two quantitative variables.
  • The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
  • The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
  • The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable).
    For example:
    • the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
    • the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
  • The Pearson correlation coefficient does not say anything about causality.
  • The Pearson correlation coefficient is sensitive to outliers.
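The linear-transformation property can be checked numerically; the sketch below uses randomly generated, illustrative data:

```python
# Sketch: the Pearson correlation under linear transformations of the variables.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(size=100)

r = np.corrcoef(x, y)[0, 1]
r_pos = np.corrcoef(3 * x + 5, 2 * y - 6)[0, 1]    # equal to r
r_neg = np.corrcoef(-3 * x + 5, 2 * y - 6)[0, 1]   # equal to -r (sign flips)
print(r, r_pos, r_neg)
```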
Visual representation | n.a. | n.a. | Visual representation
  • [Figure: regression equations]
  • [Figure: two sample $z$ test]
ANOVA table | ANOVA table | n.a. | n.a.
  • [Figure: ANOVA table for the regression analysis]
  • [Figure: two way ANOVA table]
n.a. | Equivalent to | Equivalent to | n.a.
Two way ANOVA is equivalent to OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
The Pearson correlation test is equivalent to OLS regression with one independent variable:
  • $b_1 = r \times \frac{s_y}{s_x}$
  • Results significance test ($t$ and $p$ value) testing $H_0$: $\beta_1 = 0$ are equivalent to results significance test testing $H_0$: $\rho = 0$
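A quick numerical check of the first bullet (illustrative data):

```python
# Sketch: the simple-regression slope equals r * s_y / s_x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 2.1, 3.8, 4.0, 4.9, 6.1])

r = np.corrcoef(x, y)[0, 1]
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(b1, r * y.std(ddof=1) / x.std(ddof=1))   # the two values coincide
```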
Example context
  • Regression (OLS): Can mental health be predicted from physical health, economic class, and gender?
  • Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
  • Pearson correlation: Is there a linear relationship between physical health and mental health?
  • Two sample $z$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.
SPSS | SPSS | SPSS | n.a.
Analyze > Regression > Linear...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
Analyze > Correlate > Bivariate...
  • Put your two variables in the box below Variables
Jamovi | Jamovi | Jamovi | n.a.
Regression > Linear Regression
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
Regression > Correlation Matrix
  • Put your two variables in the white box at the right
  • Under Correlation Coefficients, select Pearson (selected by default)
  • Under Hypothesis, select your alternative hypothesis