Regression (OLS) - overview
This page offers a structured overview of one or more selected statistical methods, set side by side so that their components (variables, hypotheses, assumptions, test statistics, and so on) can be compared directly.
Methods compared: Regression (OLS), the $z$ test for the difference between two proportions, the paired sample $t$ test, the sign test, the two sample $z$ test, the Marginal Homogeneity test / Stuart-Maxwell test, McNemar's test, the one sample $z$ test for the mean, and two way ANOVA.
Independent / grouping variable(s)
- Regression (OLS): One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
- $z$ test for the difference between two proportions: One categorical with 2 independent groups
- Paired sample $t$ test: 2 paired groups
- Sign test: 2 paired groups
- Two sample $z$ test: One categorical with 2 independent groups
- Marginal Homogeneity test / Stuart-Maxwell test: 2 paired groups
- McNemar's test: 2 paired groups
- One sample $z$ test for the mean: None
- Two way ANOVA: Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
Dependent variable
- Regression (OLS): One quantitative of interval or ratio level
- $z$ test for the difference between two proportions: One categorical with 2 independent groups
- Paired sample $t$ test: One quantitative of interval or ratio level
- Sign test: One of ordinal level
- Two sample $z$ test: One quantitative of interval or ratio level
- Marginal Homogeneity test / Stuart-Maxwell test: One categorical with $J$ independent groups ($J \geqslant 2$)
- McNemar's test: One categorical with 2 independent groups
- One sample $z$ test for the mean: One quantitative of interval or ratio level
- Two way ANOVA: One quantitative of interval or ratio level
Null hypothesis
- Regression (OLS): $F$ test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$, i.e. all population regression coefficients are 0. $t$ test for an individual coefficient: H0: $\beta_k = 0$, i.e. the population regression coefficient of independent variable $k$ is 0.
- $z$ test for the difference between two proportions: H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
- Paired sample $t$ test: H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.
- Sign test: H0: P(a difference score is positive) = P(a difference score is negative) = 0.5; equivalently, the population median of the difference scores is 0.
- Two sample $z$ test: H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
- Marginal Homogeneity test / Stuart-Maxwell test: H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group. Here $\pi_j$ is the population proportion in category $j.$
- McNemar's test: Let's say that the scores on the dependent variable are scored 0 and 1. Then for each pair of scores, the data allow four options: (0, 0), (0, 1), (1, 0), and (1, 1). The null hypothesis is H0: P(first score of a pair is 0 while the second score is 1) = P(first score of a pair is 1 while the second score is 0). Another formulation of the null hypothesis is: H0: the population proportion of 1s for the first member of each pair equals the population proportion of 1s for the second member.
- One sample $z$ test for the mean: H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
- Two way ANOVA: ANOVA $F$ tests: H0 for the main effect of the first factor: there is no main effect of the first factor in the population. H0 for the main effect of the second factor: there is no main effect of the second factor in the population. H0 for the interaction effect: there is no interaction effect between the two factors in the population.
Alternative hypothesis
- Regression (OLS): $F$ test for the complete regression model: H1: not all population regression coefficients are 0. $t$ test for an individual coefficient: H1 two sided: $\beta_k \neq 0$; H1 right sided: $\beta_k > 0$; H1 left sided: $\beta_k < 0$.
- $z$ test for the difference between two proportions: H1 two sided: $\pi_1 \neq \pi_2$; H1 right sided: $\pi_1 > \pi_2$; H1 left sided: $\pi_1 < \pi_2$.
- Paired sample $t$ test: H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$.
- Sign test: H1 two sided: P(a difference score is positive) $\neq$ P(a difference score is negative); H1 right sided: P(positive) > P(negative); H1 left sided: P(positive) < P(negative).
- Two sample $z$ test: H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$.
- Marginal Homogeneity test / Stuart-Maxwell test: H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group.
- McNemar's test: H1: for each pair of scores, P(first score of pair is 0 while second score of pair is 1) $\neq$ P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0. Another formulation of the alternative hypothesis is: H1: the population proportion of 1s for the first member of each pair differs from that for the second member.
- One sample $z$ test for the mean: H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$.
- Two way ANOVA: ANOVA $F$ tests: for each $F$ test, H1 is the negation of the corresponding H0 (there is a main effect of the first factor / a main effect of the second factor / an interaction effect in the population).
Assumptions
- Regression (OLS): In the population, the relationship between the independent variables and the mean of the dependent variable is linear; the residuals are normally distributed with constant variance; observations are independent of one another.
- $z$ test for the difference between two proportions: Observations are independent of one another, within and between groups; both samples are large enough for the normal approximation.
- Paired sample $t$ test: The difference scores are normally distributed in the population, or the sample size is reasonably large; the pairs are a simple random sample from the population.
- Sign test: The pairs are a simple random sample from the population.
- Two sample $z$ test: Observations are independent of one another, within and between groups; the scores are normally distributed in both populations, or both samples are large; the population variances $\sigma^2_1$ and $\sigma^2_2$ are known.
- Marginal Homogeneity test / Stuart-Maxwell test: The pairs are a simple random sample from the population; the sample is large enough for the chi-squared approximation.
- McNemar's test: The pairs are a simple random sample from the population; for the chi-squared approximation, $b + c$ should be sufficiently large.
- One sample $z$ test for the mean: The scores are normally distributed in the population, or the sample size is reasonably large; the population standard deviation $\sigma$ is known; the sample is a simple random sample.
- Two way ANOVA: Within each of the $I \times J$ groups, the scores on the dependent variable are normally distributed in the population; the population variances are equal across the $I \times J$ groups; observations are independent of one another.
Test statistic
- Regression (OLS): $F$ test for the complete regression model: $F = \dfrac{\mbox{sum of squares regression} / K}{\mbox{sum of squares error} / (N - K - 1)} = \dfrac{\mbox{mean square regression}}{\mbox{mean square error}}$, where $K$ is the number of independent variables and $N$ is the sample size. $t$ test for an individual coefficient: $t = \dfrac{b_k}{SE_{b_k}}$, where $b_k$ is the sample regression coefficient for independent variable $k$ and $SE_{b_k}$ is its standard error. Note: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$ (illustrated in the statsmodels sketch below).
- $z$ test for the difference between two proportions: $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$. Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
- Paired sample $t$ test: $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
- Sign test: $W = $ the number of difference scores that are larger than 0.
- Two sample $z$ test: $z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
- Marginal Homogeneity test / Stuart-Maxwell test: Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand.
- McNemar's test: $X^2 = \dfrac{(b - c)^2}{b + c}$. Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0.
- One sample $z$ test for the mean: $z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size. The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
- Two way ANOVA: For the main and interaction effects together (the model), and for each individual effect: $F = \dfrac{\mbox{sum of squares effect} / \mbox{degrees of freedom effect}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}} = \dfrac{\mbox{mean square effect}}{\mbox{mean square error}}$.
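Most of these test statistics are one-liners to compute. The following Python sketch (a minimal illustration; all data values, counts, and known standard deviations are made up) evaluates each closed-form statistic directly from the definitions, using only numpy and scipy:

```python
import numpy as np
from scipy import stats

# z test for the difference between two proportions
x1, n1 = 30, 100                      # successes / sample size, group 1 (made up)
x2, n2 = 45, 120                      # successes / sample size, group 2 (made up)
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)             # total proportion of successes
z_prop = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# paired sample t test: a one sample t test on the difference scores
before = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0])
after = np.array([14.0, 15.5, 13.0, 15.0, 12.5, 18.0])
d = before - after                    # difference scores (first minus second)
mu0 = 0.0
t_paired = (d.mean() - mu0) / (d.std(ddof=1) / np.sqrt(len(d)))
t_check, p_two_sided = stats.ttest_1samp(d, popmean=mu0)  # same t value

# sign test: W = number of difference scores larger than 0
nonzero = d[d != 0]                   # zero differences are dropped
W = int((nonzero > 0).sum())
p_sign = stats.binomtest(W, n=len(nonzero), p=0.5).pvalue  # exact, two sided

# two sample z test with known population variances
y1 = np.array([48.0, 52.0, 50.0, 49.0, 51.0])
y2 = np.array([53.0, 55.0, 51.0, 54.0, 52.0])
sigma1, sigma2 = 2.0, 2.5             # known population standard deviations
z_two = (y1.mean() - y2.mean()) / np.sqrt(sigma1**2 / len(y1) + sigma2**2 / len(y2))

# McNemar's test
b, c = 5, 15                          # pairs switching 0 -> 1 and 1 -> 0
X2 = (b - c) ** 2 / (b + c)
p_mcnemar = stats.chi2.sf(X2, df=1)   # right tail of chi-squared with 1 df

# one sample z test for the mean (sigma known)
y = np.array([49.0, 51.0, 53.0, 50.0, 52.0, 54.0])
mu0, sigma = 50.0, 3.0
z_one = (y.mean() - mu0) / (sigma / np.sqrt(len(y)))
```

The Marginal Homogeneity / Stuart-Maxwell statistic is omitted here, since, as noted above, it involves matrix algebra and is normally left to software.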
Sample standard deviation of the residuals $s$ (Regression (OLS)) / pooled standard deviation (Two way ANOVA); n.a. for the other methods
- Regression (OLS): $\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$
- Two way ANOVA: $\begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$
The residual standard deviation $s$ is computed in the statsmodels sketch below.
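The note on the $K = 1$ case and the residual standard deviation $s$ can both be checked with a short statsmodels sketch; the data here are simulated and purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 + 0.5 * x + rng.normal(size=50)       # made-up data

fit = sm.OLS(y, sm.add_constant(x)).fit()     # one independent variable: K = 1

F = fit.fvalue                                # F test for the complete model
t_b1 = fit.tvalues[1]                         # t test for beta_1
print(np.isclose(F, t_b1 ** 2))               # True: F equals t squared when K = 1

s = np.sqrt(fit.mse_resid)                    # sqrt(SS_error / (N - K - 1))
```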
Sampling distribution of the test statistic if H0 were true
- Regression (OLS): sampling distribution of $F$: the $F$ distribution with $K$ and $N - K - 1$ degrees of freedom. Sampling distribution of $t$: the $t$ distribution with $N - K - 1$ degrees of freedom.
- $z$ test for the difference between two proportions: approximately the standard normal distribution.
- Paired sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom.
- Sign test: the exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true. (A sketch comparing the exact and approximate $p$ values follows this list.)
- Two sample $z$ test: standard normal distribution.
- Marginal Homogeneity test / Stuart-Maxwell test: approximately the chi-squared distribution with $J - 1$ degrees of freedom.
- McNemar's test: if $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom. If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$.
- One sample $z$ test for the mean: standard normal distribution.
- Two way ANOVA: for each $F$ test, the $F$ distribution with the effect's degrees of freedom and the error degrees of freedom: $I \times J - 1$ and $N - I \times J$ for the model, $I - 1$ and $N - I \times J$ for the first factor, $J - 1$ and $N - I \times J$ for the second factor, and $(I - 1)(J - 1)$ and $N - I \times J$ for the interaction.
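The quality of the sign test's normal approximation is easy to inspect numerically; a small sketch with made-up counts:

```python
import numpy as np
from scipy import stats

W, n = 16, 20                            # made up: 16 positive out of 20 nonzero differences
p_exact = stats.binomtest(W, n=n, p=0.5).pvalue
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
p_approx = 2 * stats.norm.sf(abs(z))
print(p_exact, p_approx)                 # the two p values converge as n grows
```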
Significant?
- Regression (OLS): $F$ test: check if the observed $F$ is equal to or larger than the critical value $F^*$, or find the $p$ value corresponding to the observed $F$ and check if it is equal to or smaller than $\alpha$. $t$ test: two sided: check if the observed $t$ is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$; right sided and left sided: analogous, with one sided critical values and $p$ values.
- $z$ test for the difference between two proportions: two sided: check if the observed $z$ is at least as extreme as the critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$; right sided and left sided: analogous.
- Paired sample $t$ test: two sided: check if the observed $t$ is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$; right sided and left sided: analogous.
- Sign test: if $n$ is small, the table for the binomial distribution should be used to find the one or two sided $p$ value; if $n$ is large, the table for standard normal probabilities can be used with the standardized test statistic $z$. In both cases, reject H0 if the $p$ value is equal to or smaller than $\alpha$.
- Two sample $z$ test: two sided, right sided, and left sided: as for the other $z$ tests.
- Marginal Homogeneity test / Stuart-Maxwell test: denoting the test statistic as $X^2$: check if the observed $X^2$ is equal to or larger than the critical value $\chi^{*2}$, or check if the $p$ value is equal to or smaller than $\alpha$.
- McNemar's test: for test statistic $X^2$: check if the observed $X^2$ is equal to or larger than the critical value $\chi^{*2}$, or check if the $p$ value is equal to or smaller than $\alpha$.
- One sample $z$ test for the mean: two sided, right sided, and left sided: as for the other $z$ tests.
- Two way ANOVA: for each $F$ test: check if the observed $F$ is equal to or larger than the critical value $F^*$, or check if the $p$ value is equal to or smaller than $\alpha$.
The critical values $z^*$, $t^*$, $\chi^{*2}$, and $F^*$ can be looked up in tables or computed in software, as in the sketch below.
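A sketch of the critical-value computations with scipy's inverse distribution functions; the significance level and degrees of freedom are chosen purely for illustration:

```python
from scipy import stats

alpha = 0.05
z_star = stats.norm.ppf(1 - alpha / 2)           # two sided z*: 1.96
t_star = stats.t.ppf(1 - alpha / 2, df=20)       # two sided t*, df = 20: 2.086
chi2_star = stats.chi2.ppf(1 - alpha, df=1)      # chi-squared tests reject in the right tail
F_star = stats.f.ppf(1 - alpha, dfn=3, dfd=40)   # F tests reject in the right tail
```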
Confidence intervals
- Regression (OLS): $C\%$ confidence interval for $\beta_k$: $b_k \pm t^* \times SE_{b_k}$, where the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Confidence intervals for $\mu_y$ and prediction intervals for $y_{new}$ are formed in the same way, with the appropriate standard errors. The confidence interval for $\beta_k$ can also be used as significance test.
- $z$ test for the difference between two proportions: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$, regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$.
- Paired sample $t$ test: $C\%$ confidence interval for $\mu$: $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as significance test.
- Sign test: n.a.
- Two sample $z$ test: $C\%$ confidence interval for $\mu_1 - \mu_2$: $(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: n.a.
- One sample $z$ test for the mean: $C\%$ confidence interval for $\mu$: $\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). The confidence interval for $\mu$ can also be used as significance test.
- Two way ANOVA: n.a.
These intervals translate directly into code, as in the sketch after this list.
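A sketch of the interval computations; all summary numbers are made up for illustration:

```python
import numpy as np
from scipy import stats

C = 95
alpha = 1 - C / 100
z_star = stats.norm.ppf(1 - alpha / 2)           # 1.96 for C = 95

# approximate CI for pi_1 - pi_2 (regular, large sample)
p1, n1, p2, n2 = 0.30, 100, 0.375, 120
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci_diff_props = ((p1 - p2) - z_star * se, (p1 - p2) + z_star * se)

# t interval for mu, based on difference scores
d = np.array([2.0, 0.5, 2.0, 1.0, -0.5, 2.0])
t_star = stats.t.ppf(1 - alpha / 2, df=len(d) - 1)
half = t_star * d.std(ddof=1) / np.sqrt(len(d))
ci_mu = (d.mean() - half, d.mean() + half)

# z interval for mu_1 - mu_2 (population variances known)
ybar1, ybar2, sd1, sd2, m1, m2 = 50.0, 53.0, 2.0, 2.5, 40, 45
half = z_star * np.sqrt(sd1**2 / m1 + sd2**2 / m2)
ci_mu_diff = ((ybar1 - ybar2) - half, (ybar1 - ybar2) + half)

# z interval for mu (sigma known)
ybar, sigma, N = 51.0, 3.0, 25
half = z_star * sigma / np.sqrt(N)
ci_mu_z = (ybar - half, ybar + half)
```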
Effect size
- Regression (OLS): complete model: proportion of variance explained $R^2$: the proportion of the variance in the dependent variable that is explained by all the independent variables together.
- $z$ test for the difference between two proportions: n.a.
- Paired sample $t$ test: Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$ (See the sketch after this list.)
- Sign test: n.a.
- Two sample $z$ test: n.a.
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: n.a.
- One sample $z$ test for the mean: Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0.$
- Two way ANOVA: proportion of variance explained per effect, such as $\eta^2 = \dfrac{\mbox{sum of squares effect}}{\mbox{sum of squares total}}$.
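Cohen's $d$ as defined above is a one-line computation; a sketch with made-up difference scores:

```python
import numpy as np

d_scores = np.array([2.0, 0.5, 2.0, 1.0, -0.5, 2.0])  # made-up difference scores
mu0 = 0.0
cohens_d = (d_scores.mean() - mu0) / d_scores.std(ddof=1)
# For OLS regression, statsmodels reports R^2 directly as fit.rsquared.
```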
Visual representation
- Figures are provided for Regression (OLS) (the regression equations), the paired sample $t$ test, the two sample $z$ test, and the one sample $z$ test for the mean; n.a. for the other methods.
ANOVA table
- An ANOVA table is provided for Regression (OLS) and for two way ANOVA; n.a. for the other methods. A sketch that produces such a table follows.
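Both a regression ANOVA table and a two way ANOVA table can be produced with statsmodels; a sketch on simulated (made-up) data with a balanced 3 × 2 design:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "econ": np.repeat(["low", "moderate", "high"], 20),  # factor with I = 3 groups
    "gender": np.tile(["m", "f"], 30),                   # factor with J = 2 groups
})
df["health"] = 50 + rng.normal(scale=3, size=60)         # made-up dependent variable

model = smf.ols("health ~ C(econ) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # sums of squares, df, F and p per effect
```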
Equivalent to
- Regression (OLS): n.a.
- $z$ test for the difference between two proportions: when testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. (This equivalence is checked numerically in the sketch after this list.)
- Paired sample $t$ test: one sample $t$ test on the difference scores.
- Sign test: the two sided sign test is equivalent to the Friedman test with two paired groups.
- Two sample $z$ test: n.a.
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: Marginal Homogeneity test / Stuart-Maxwell test with a dichotomous dependent variable.
- One sample $z$ test for the mean: n.a.
- Two way ANOVA: OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
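The equivalence between the two sided $z$ test and the 2 × 2 chi-squared test ($X^2 = z^2$) can be verified directly; a sketch with made-up counts:

```python
import numpy as np
from scipy import stats

x1, n1 = 30, 100                       # successes / sample size, group 1 (made up)
x2, n2 = 45, 120                       # successes / sample size, group 2

# 2 x 2 table: rows = groups, columns = failures / successes
table = np.array([[n1 - x1, x1],
                  [n2 - x2, x2]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)

p1, p2, p = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
print(np.isclose(chi2, z ** 2))                       # True
print(np.isclose(p_chi2, 2 * stats.norm.sf(abs(z))))  # same two sided p value
```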
Example context
- Regression (OLS): Can mental health be predicted from physical health, economic class, and gender?
- $z$ test for the difference between two proportions: Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
- Paired sample $t$ test: Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?
- Sign test: Do people tend to score higher on mental health after a mindfulness course?
- Two sample $z$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.
- Marginal Homogeneity test / Stuart-Maxwell test: Subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types of mayonnaise they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?
- McNemar's test: Does a tv documentary about spiders change whether people are afraid (yes/no) of spiders?
- One sample $z$ test for the mean: Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3.$
- Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
SPSS
- Regression (OLS): Analyze > Regression > Linear...
- $z$ test for the difference between two proportions: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs...
- Paired sample $t$ test: Analyze > Compare Means > Paired-Samples T Test...
- Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Two sample $z$ test: n.a.
- Marginal Homogeneity test / Stuart-Maxwell test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- McNemar's test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- One sample $z$ test for the mean: n.a.
- Two way ANOVA: Analyze > General Linear Model > Univariate...
Jamovi
- Regression (OLS): Regression > Linear Regression
- $z$ test for the difference between two proportions: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association
- Paired sample $t$ test: T-Tests > Paired Samples T-Test
- Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman
- Two sample $z$ test: n.a.
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: Frequencies > Paired Samples - McNemar test
- One sample $z$ test for the mean: n.a.
- Two way ANOVA: ANOVA > ANOVA