# Regression (OLS) - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

This overview compares the following methods:

• Regression (OLS)
• Paired sample $t$ test
## Independent variables

**Regression (OLS):** One or more quantitative variables of interval or ratio level and/or one or more categorical variables with independent groups, transformed into code variables

**Paired sample $t$ test:** 2 paired groups
## Dependent variable

**Regression (OLS):** One quantitative variable of interval or ratio level

**Paired sample $t$ test:** One quantitative variable of interval or ratio level
## Null hypothesis

**Regression (OLS)**
$F$ test for the complete regression model:
• $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalently
• The variance explained by all the independent variables together (the complete model) is 0 in the population: $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
• $\beta_k = 0$
in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$
**Paired sample $t$ test**

$\mu = \mu_0$

Here $\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0.
## Alternative hypothesis

**Regression (OLS)**
$F$ test for the complete regression model:
• Not all population regression coefficients are 0
or equivalently
• The variance explained by all the independent variables together (the complete model) is larger than 0 in the population: $\rho^2 > 0$
$t$ test for individual $\beta_k$:
• Two sided: $\beta_k \neq 0$
• Right sided: $\beta_k > 0$
• Left sided: $\beta_k < 0$
**Paired sample $t$ test**

• Two sided: $\mu \neq \mu_0$
• Right sided: $\mu > \mu_0$
• Left sided: $\mu < \mu_0$
## Assumptions

**Regression (OLS)**
• In the population, the residuals are normally distributed at each combination of values of the independent variables
• In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
• In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
• The residuals are independent of one another
Often ignored additional assumption:
• Variables are measured without error
Also pay attention to:
• Multicollinearity
• Outliers
**Paired sample $t$ test**

• Difference scores are normally distributed in the population
• The sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another

The population of difference scores can be conceived of as the difference scores we would find if we applied our study (e.g., applying an intervention and measuring pre-post scores) to all individuals in the population.
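In practice, the OLS residual assumptions listed above can be examined from the sample residuals. Below is a minimal, non-authoritative sketch on simulated data; the data, variable names, and choice of diagnostics are illustrative assumptions, not part of the method description itself.

```python
# Sketch: common checks of the OLS residual assumptions on simulated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(100, 2)))  # intercept + 2 predictors
y = X @ np.array([2.0, 0.5, -0.3]) + rng.normal(size=100)

fit = sm.OLS(y, X).fit()
resid = fit.resid

# Normality of the residuals (Shapiro-Wilk)
print(stats.shapiro(resid))
# Homoscedasticity (Breusch-Pagan): a small p value suggests unequal spread
print(het_breuschpagan(resid, X))
# Independence (Durbin-Watson): values near 2 suggest no autocorrelation
print(durbin_watson(resid))
```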
## Test statistic

**Regression (OLS)**
$F$ test for the complete regression model:
• $$\begin{aligned} F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}$$
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables
$t$ test for individual $\beta_k$:
• $t = \dfrac{b_k}{SE_{b_k}}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
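To make these formulas concrete, here is a minimal sketch on simulated data (`statsmodels` assumed available) that computes $F$ and $t$ from the sums of squares above and checks them against the fitted model; all numbers are illustrative.

```python
# Sketch: F test for the complete model and t test for b_1, by hand.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
N, K = 50, 2
X = sm.add_constant(rng.normal(size=(N, K)))
y = X @ np.array([1.0, 0.8, 0.0]) + rng.normal(size=N)

fit = sm.OLS(y, X).fit()
y_hat = fit.fittedvalues

ss_model = np.sum((y_hat - y.mean()) ** 2)   # sum of squares model
ss_error = np.sum((y - y_hat) ** 2)          # sum of squares error
F = (ss_model / K) / (ss_error / (N - K - 1))
print(F, fit.fvalue)                         # identical values

t_1 = fit.params[1] / fit.bse[1]             # t = b_k / SE_{b_k} for k = 1
print(t_1, fit.tvalues[1])                   # identical values
```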
**Paired sample $t$ test**

$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$

Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to $H_0$, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores).

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
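A minimal sketch of this statistic on simulated pre/post scores (the data are purely illustrative), checked against `scipy.stats.ttest_rel`:

```python
# Sketch: paired t statistic computed from the difference scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = rng.normal(50, 10, size=30)
post = pre + rng.normal(2, 5, size=30)  # simulated intervention effect

d = post - pre                                         # difference scores
t = (d.mean() - 0) / (d.std(ddof=1) / np.sqrt(len(d)))  # mu_0 = 0
print(t)
print(stats.ttest_rel(post, pre))                      # same t statistic
```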
## Sample standard deviation of the residuals $s$

**Regression (OLS)**

$$\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$$

**Paired sample $t$ test:** n.a.

## Sampling distribution of $F$ and of $t$ if $H_0$ were true

**Regression (OLS)**

Sampling distribution of $F$:
• $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom

Sampling distribution of $t$:
• $t$ distribution with $N - K - 1$ (df error) degrees of freedom

**Paired sample $t$ test**

$t$ distribution with $N - 1$ degrees of freedom

## Significant?

**Regression (OLS)**

$F$ test:
• Check if the $F$ observed in the sample is equal to or larger than the critical value $F^*$, or
• Find the $p$ value corresponding to the observed $F$ and check if it is equal to or smaller than $\alpha$

$t$ test two sided:
• Check if the $t$ observed in the sample is at least as extreme as the critical value $t^*$, or
• Find the two sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$

$t$ test right sided:
• Check if the $t$ observed in the sample is equal to or larger than the critical value $t^*$, or
• Find the right sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$

$t$ test left sided:
• Check if the $t$ observed in the sample is equal to or smaller than the critical value $t^*$, or
• Find the left sided $p$ value corresponding to the observed $t$ and check if it is equal to or smaller than $\alpha$

**Paired sample $t$ test**

The same decision rules apply, using the two sided, right sided, or left sided critical value or $p$ value of the $t_{N-1}$ distribution.

## $C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$

**Regression (OLS)**

Confidence interval for $\beta_k$:
• $b_k \pm t^* \times SE_{b_k}$
• If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$

Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
• $\hat{y} \pm t^* \times SE_{\hat{y}}$
• If only one independent variable: $SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$

Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
• $\hat{y} \pm t^* \times SE_{y_{new}}$
• If only one independent variable: $SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$

In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g., $t^* = 2.086$ for a 95% confidence interval when df = 20).

**Paired sample $t$ test** ($C\%$ confidence interval for $\mu$)

$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$

where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g., $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.
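A minimal sketch of the three OLS intervals above on simulated data (`statsmodels` assumed available; the value $x^* = 1.5$ is an arbitrary illustration):

```python
# Sketch: CI for beta_k, CI for mu_y, and prediction interval for y_new.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=40)
y = 1 + 2 * x + rng.normal(size=40)
fit = sm.OLS(y, sm.add_constant(x)).fit()

# 95% confidence intervals b_k ± t* × SE_{b_k}
print(fit.conf_int(alpha=0.05))

# Intervals at a new value x* = 1.5
x_star = sm.add_constant(np.array([[1.5]]), has_constant="add")
pred = fit.get_prediction(x_star)
print(pred.conf_int(alpha=0.05))            # confidence interval for mu_y
print(pred.conf_int(obs=True, alpha=0.05))  # prediction interval for y_new
```

The prediction interval is wider than the confidence interval for $\mu_y$ because of the extra "1 +" term in $SE_{y_{new}}$, which accounts for the residual variability of a single future observation.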
## Effect size

**Regression (OLS)**

Complete model:
• Proportion variance explained $R^2$: proportion of the variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$$\begin{aligned} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{aligned}$$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the squared correlation between the independent variable $x$ and the dependent variable $y$.
• Wherry's $R^2$ / shrunken $R^2$: corrects for the positive bias in $R^2$ and is equal to
$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$.
• Stein's $R^2$: estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$

Per independent variable:
• Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
• Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
• Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$

**Paired sample $t$ test**

Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
It indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$.
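These effect sizes are straightforward to compute from a fitted model; a minimal sketch on simulated data (all numbers illustrative):

```python
# Sketch: R^2, Wherry's (shrunken) R^2, Stein's R^2, and Cohen's d.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
N, K = 60, 3
X = sm.add_constant(rng.normal(size=(N, K)))
y = X @ np.array([1.0, 0.6, 0.3, 0.0]) + rng.normal(size=N)
fit = sm.OLS(y, X).fit()

R2 = fit.rsquared
R2_wherry = 1 - (N - 1) / (N - K - 1) * (1 - R2)  # equals fit.rsquared_adj
R2_stein = 1 - ((N - 1) * (N - 2) * (N + 1)) / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)
print(R2, R2_wherry, R2_stein)

# Cohen's d for simulated difference scores, with mu_0 = 0
d_scores = rng.normal(2, 5, size=30)
cohens_d = (d_scores.mean() - 0) / d_scores.std(ddof=1)
print(cohens_d)
```

Note that Wherry's $R^2_W$ is what most software reports as "adjusted $R^2$".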
## Visual representation

**Regression (OLS):** n.a.

**Paired sample $t$ test:** (figure not reproduced here)

## ANOVA table

**Regression (OLS)**

| Source | Sum of squares | df | Mean square | $F$ |
|---|---|---|---|---|
| Model | $\sum (\hat{y}_j - \bar{y})^2$ | $K$ | SS model / df model | MS model / MS error |
| Error | $\sum (y_j - \hat{y}_j)^2$ | $N - K - 1$ | SS error / df error | |
| Total | $\sum (y_j - \bar{y})^2$ | $N - 1$ | | |

**Paired sample $t$ test:** n.a.

## Equivalent to

**Regression (OLS):** -

**Paired sample $t$ test:**
• One sample $t$ test on the difference scores (see the sketch below)
• Repeated measures ANOVA with one dichotomous within subjects factor

## Example context

**Regression (OLS):** Can mental health be predicted from physical health, economic class, and gender?

**Paired sample $t$ test:** Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?
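The equivalence between the paired sample $t$ test and a one sample $t$ test on the difference scores is easy to verify; a minimal sketch with simulated pre/post scores:

```python
# Sketch: a paired t test equals a one-sample t test on difference scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
pre = rng.normal(50, 10, size=25)
post = pre + rng.normal(1, 4, size=25)

print(stats.ttest_rel(post, pre))                # paired sample t test
print(stats.ttest_1samp(post - pre, popmean=0))  # identical t and p value
```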
## SPSS

**Regression (OLS)**
Analyze > Regression > Linear...
• Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
**Paired sample $t$ test**

Analyze > Compare Means > Paired-Samples T Test...
• Put the two paired variables in the boxes below Variable 1 and Variable 2
## Jamovi

**Regression (OLS)**
Regression > Linear Regression
• Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
• If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
• Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
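For comparison outside a GUI, here is a minimal sketch using `statsmodels`' formula interface, which likewise expands a categorical predictor into code (dummy) variables via `C()`; the data frame and column names below are hypothetical.

```python
# Sketch: OLS with automatic dummy coding of a categorical predictor.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "mental":   [6.2, 5.1, 7.3, 4.8, 6.9, 5.5],
    "physical": [7.0, 5.5, 8.1, 4.9, 7.4, 6.0],
    "gender":   ["f", "m", "f", "m", "f", "m"],
})

# C(gender) is expanded into code (dummy) variables behind the scenes,
# much like Jamovi does for variables placed under Factors.
fit = smf.ols("mental ~ physical + C(gender)", data=df).fit()
print(fit.params)
```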
**Paired sample $t$ test**

T-Tests > Paired Samples T-Test
• Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
• Under Hypothesis, select your alternative hypothesis
## Practice questions