Regression (OLS) - overview
This page offers a structured overview of the following selected methods, with their components presented side by side for comparison:
| Regression (OLS) | Binomial test for a single proportion | Paired sample $t$ test | $z$ test for a single proportion | One sample $t$ test for the mean | $z$ test for a single proportion | Wilcoxon signed-rank test | Two sample $z$ test |
|---|---|---|---|---|---|---|---|
| Independent variables | Independent variable | Independent variable | Independent variable | Independent variable | Independent variable | Independent variable | Independent/grouping variable |
|---|---|---|---|---|---|---|---|
| One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | None | 2 paired groups | None | None | None | 2 paired groups | One categorical with 2 independent groups |
| Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable |
|---|---|---|---|---|---|---|---|
| One quantitative of interval or ratio level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One quantitative of interval or ratio level |
| Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
|---|---|---|---|---|---|---|---|
| $F$ test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$, i.e. all $K$ population regression coefficients are 0. $t$ test for individual regression coefficient $\beta_k$: H0: $\beta_k = 0$ | H0: $\pi = \pi_0$. Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | H0: $\pi = \pi_0$. Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. | H0: $\pi = \pi_0$. Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis. | H0: $m = 0$. Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
|---|---|---|---|---|---|---|---|
| $F$ test for the complete regression model: H1: not all population regression coefficients are 0. $t$ test for individual regression coefficient $\beta_k$: H1 two sided: $\beta_k \neq 0$; H1 right sided: $\beta_k > 0$; H1 left sided: $\beta_k < 0$ | H1 two sided: $\pi \neq \pi_0$; H1 right sided: $\pi > \pi_0$; H1 left sided: $\pi < \pi_0$ | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: $\pi \neq \pi_0$; H1 right sided: $\pi > \pi_0$; H1 left sided: $\pi < \pi_0$ | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: $\pi \neq \pi_0$; H1 right sided: $\pi > \pi_0$; H1 left sided: $\pi < \pi_0$ | H1 two sided: $m \neq 0$; H1 right sided: $m > 0$; H1 left sided: $m < 0$ | H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$ |
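To make the regression hypotheses concrete, here is a minimal Python sketch (not part of the original overview; the data are made up and statsmodels is assumed to be available) that fits an OLS model and reads off the $F$ test for the complete model and the $t$ tests for the individual coefficients:

```python
# Minimal sketch (hypothetical data): F test for the complete model and
# t tests for individual regression coefficients via statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
N, K = 100, 2
X = rng.normal(size=(N, K))                   # two quantitative predictors
y = 1.5 + 0.8 * X[:, 0] + rng.normal(size=N)  # beta_2 is 0 in truth

X_design = sm.add_constant(X)                 # add the intercept column
fit = sm.OLS(y, X_design).fit()

print(fit.fvalue, fit.f_pvalue)   # F test: H0: beta_1 = ... = beta_K = 0
print(fit.tvalues, fit.pvalues)   # t tests: H0: beta_k = 0 (two sided)
```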
| Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions |
|---|---|---|---|---|---|---|---|
| In the population, the residuals are normally distributed at each combination of values of the independent variables; in the population, the standard deviation of the residuals is the same for each combination of values of the independent variables (homoscedasticity); in the population, the relationship between the independent variables and the mean of the dependent variable is linear; the residuals are independent of one another | Sample is a random sample from the population | Difference scores are normally distributed in the population; sample of difference scores is a random sample from the population of difference scores | Sample size $N$ is large enough for $z$ to be approximately normally distributed; sample is a random sample from the population | Scores are normally distributed in the population; sample is a random sample from the population | Sample size $N$ is large enough for $z$ to be approximately normally distributed; sample is a random sample from the population | Distribution of the difference scores is symmetric in the population; sample of difference scores is a random sample from the population of difference scores | Within each population, the scores are normally distributed; the population standard deviations $\sigma_1$ and $\sigma_2$ are known; the two samples are independent random samples |
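The assumptions row can be checked informally in software. Below is a minimal sketch (hypothetical data; scipy and statsmodels assumed available) of two common residual checks for OLS, normality and roughly constant spread:

```python
# Minimal sketch (hypothetical data): informal checks of the OLS residual
# assumptions -- normality (Shapiro-Wilk) and constant spread vs. fitted values.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=80)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=80)

fit = sm.OLS(y, sm.add_constant(x)).fit()
resid, fitted = fit.resid, fit.fittedvalues

print(stats.shapiro(resid))               # normality of residuals
# crude homoscedasticity check: residual spread in low vs. high fitted halves
lo, hi = fitted < np.median(fitted), fitted >= np.median(fitted)
print(resid[lo].std(ddof=1), resid[hi].std(ddof=1))
```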
| Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic |
|---|---|---|---|---|---|---|---|
| $F$ test for the complete regression model: $F = \dfrac{\mbox{sum of squares model} / K}{\mbox{sum of squares error} / (N - K - 1)} = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$, where $K$ is the number of independent variables and $N$ is the sample size. $t$ test for individual regression coefficient $\beta_k$: $t = \dfrac{b_k}{SE_{b_k}}$, where $b_k$ is the sample regression coefficient for independent variable $k$ and $SE_{b_k}$ is its standard error. Note: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$ | $X$ = number of successes in the sample | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$. Here $p$ is the sample proportion of successes: $\dfrac{X}{N}$, $N$ is the sample size, and $\pi_0$ is the population proportion of successes according to the null hypothesis. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $z = \dfrac{p - \pi_0}{\sqrt{\dfrac{\pi_0(1 - \pi_0)}{N}}}$. Here $p$ is the sample proportion of successes: $\dfrac{X}{N}$, $N$ is the sample size, and $\pi_0$ is the population proportion of successes according to the null hypothesis. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute them: remove the difference scores that are zero, and rank the absolute values of the remaining $N_r$ difference scores. $W_1$ is the sum of the ranks belonging to the positive difference scores; $W_2$ is the sum of the signed ranks, i.e. each rank multiplied by the sign of its difference score. | $z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. |
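The test statistic formulas above can be verified by hand. Here is a minimal Python sketch (hypothetical data; numpy and scipy assumed available) computing the one sample $t$ statistic and the $z$ statistic for a single proportion, checking the $t$ value against scipy:

```python
# Minimal sketch (hypothetical data): the one sample t statistic and the
# z statistic for a single proportion, computed from the formulas above
# and checked against scipy.
import numpy as np
from scipy import stats

# One sample t test: t = (ybar - mu0) / (s / sqrt(N))
y = np.array([48., 52., 51., 47., 53., 50., 49., 55.])
mu0 = 50.0
N = len(y)
t = (y.mean() - mu0) / (y.std(ddof=1) / np.sqrt(N))
print(t, stats.ttest_1samp(y, mu0).statistic)   # identical

# z test for a single proportion: z = (p - pi0) / sqrt(pi0(1 - pi0)/N)
X_successes, N2, pi0 = 28, 100, 0.2
p = X_successes / N2
z = (p - pi0) / np.sqrt(pi0 * (1 - pi0) / N2)
print(z, 2 * stats.norm.sf(abs(z)))             # z and two sided p value
```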
| Sample standard deviation of the residuals $s$ | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
|---|---|---|---|---|---|---|---|
| $s = \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}} = \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}} = \sqrt{\mbox{mean square error}}$ | - | - | - | - | - | - | - |
| Sampling distribution of $F$ and of $t$ if H0 were true | Sampling distribution of $X$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $z$ if H0 were true |
|---|---|---|---|---|---|---|---|
| Sampling distribution of $F$: $F$ distribution with $K$ and $N - K - 1$ degrees of freedom. Sampling distribution of $t$: $t$ distribution with $N - K - 1$ degrees of freedom | Binomial($n$, $P$) distribution. Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis) | $t$ distribution with $N - 1$ degrees of freedom | Approximately the standard normal distribution | $t$ distribution with $N - 1$ degrees of freedom | Approximately the standard normal distribution | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$ if the null hypothesis were true, so the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formula for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ is more complicated | Standard normal distribution |
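As an illustration of the large sample approximation for the Wilcoxon signed-rank test, here is a minimal sketch (hypothetical data; the tie correction mentioned above is deliberately not applied):

```python
# Minimal sketch (hypothetical data): large sample normal approximation for
# the Wilcoxon signed-rank statistic W1, following the formulas above.
import numpy as np
from scipy import stats

before = np.array([12., 15., 11., 18., 14., 16., 13., 17., 10., 19.])
after  = np.array([14., 14., 15., 20., 15., 15., 17., 21., 13., 22.])
d = after - before
d = d[d != 0]                       # drop zero difference scores
N_r = len(d)

ranks = stats.rankdata(np.abs(d))   # ranks of |difference scores|
W1 = ranks[d > 0].sum()             # sum of ranks of positive differences

mu_W1 = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z = (W1 - mu_W1) / sigma_W1
print(W1, z, 2 * stats.norm.sf(abs(z)))  # two sided p (no tie correction)
```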
| Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? |
|---|---|---|---|---|---|---|---|
| $F$ test: check if the $F$ value observed in the sample is equal to or larger than the critical value $F^*$, or check if the $p$ value is equal to or smaller than the significance level $\alpha$. $t$ test for individual $\beta_k$: same procedure as for the one sample $t$ test, with $N - K - 1$ degrees of freedom | Two sided: check if the two sided $p$ value, computed from the Binomial($n$, $P$) distribution, is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | Two sided: check if the $t$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | Two sided: check if the $z$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | Two sided: check if the $t$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | Two sided: check if the $z$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | For large samples, the table for standard normal probabilities can be used. Two sided: check if the $z$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives | Two sided: check if the $z$ value observed in the sample is in the rejection region, or check if the two sided $p$ value is equal to or smaller than $\alpha$; analogously for the right sided and left sided alternatives |
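Both decision procedures in this row, the rejection region and the $p$ value, always lead to the same conclusion. A minimal sketch for a two sided $t$ test (the observed $t$ value and the degrees of freedom are hypothetical):

```python
# Minimal sketch: rejection region vs. p value for a two sided t test
# at significance level alpha (values hypothetical).
from scipy import stats

alpha, df = 0.05, 20
t_obs = 2.30                              # observed t value (hypothetical)

t_crit = stats.t.ppf(1 - alpha / 2, df)   # critical value t*
p_two_sided = 2 * stats.t.sf(abs(t_obs), df)

print(abs(t_obs) >= t_crit)               # in rejection region?
print(p_two_sided <= alpha)               # same decision via p value
```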
| $C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$ | n.a. | $C\%$ confidence interval for $\mu$ | Approximate $C\%$ confidence interval for $\pi$ | $C\%$ confidence interval for $\mu$ | Approximate $C\%$ confidence interval for $\pi$ | n.a. | $C\%$ confidence interval for $\mu_1 - \mu_2$ |
|---|---|---|---|---|---|---|---|
| Confidence interval for $\beta_k$: $b_k \pm t^* \times SE_{b_k}$, where the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. The confidence interval for $\mu_y$ and the prediction interval for $y_{new}$ have the same form, $\hat{y} \pm t^* \times SE$, each with its own standard error | - | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test | Regular (large sample): $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval) | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test | Regular (large sample): $p \pm z^* \times \sqrt{\dfrac{p(1 - p)}{N}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval) | - | $(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test |
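Here is a minimal sketch (hypothetical data) of two of these intervals, the $t$ based interval for $\mu$ and the regular large sample interval for $\pi$, with the critical values taken from scipy rather than a table:

```python
# Minimal sketch (hypothetical data): t based confidence interval for a mean
# and the regular (Wald) large sample interval for a proportion, as above.
import numpy as np
from scipy import stats

C = 95  # confidence level in percent

# C% confidence interval for mu: ybar +/- t* s/sqrt(N)
y = np.array([48., 52., 51., 47., 53., 50., 49., 55.])
N = len(y)
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, df=N - 1)
half = t_star * y.std(ddof=1) / np.sqrt(N)
print(y.mean() - half, y.mean() + half)

# Approximate C% confidence interval for pi (regular, large sample)
X_successes, N2 = 28, 100
p = X_successes / N2
z_star = stats.norm.ppf(1 - (1 - C / 100) / 2)   # 1.96 for C = 95
half = z_star * np.sqrt(p * (1 - p) / N2)
print(p - half, p + half)
```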
| Effect size | n.a. | Effect size | n.a. | Effect size | n.a. | n.a. | n.a. |
|---|---|---|---|---|---|---|---|
| Complete model: proportion of variance explained $R^2$ | - | Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $d = \frac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$ | - | Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $d = \frac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$ | - | - | - |
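Cohen's $d$ as defined above is a one-line computation. A minimal sketch (hypothetical data):

```python
# Minimal sketch (hypothetical data): Cohen's d for a one sample comparison,
# d = (ybar - mu0) / s, as defined above.
import numpy as np

y = np.array([48., 52., 51., 47., 53., 50., 49., 55.])
mu0 = 50.0
d = (y.mean() - mu0) / y.std(ddof=1)
print(d)   # number of standard deviations ybar lies from mu0
```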
| Visual representation | n.a. | Visual representation | n.a. | Visual representation | n.a. | n.a. | Visual representation |
|---|---|---|---|---|---|---|---|
| Regression equations plotted as regression lines (figure not reproduced here) | - | Figure not reproduced here | - | Figure not reproduced here | - | - | Figure not reproduced here |
| ANOVA table | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
|---|---|---|---|---|---|---|---|
| ANOVA table for the regression model (not reproduced here) | - | - | - | - | - | - | - |
| n.a. | n.a. | Equivalent to | Equivalent to | n.a. | Equivalent to | n.a. | n.a. |
|---|---|---|---|---|---|---|---|
| - | - | One sample $t$ test for the mean on the difference scores | When the test is performed two sided: chi-squared test for goodness of fit with two categories | - | When the test is performed two sided: chi-squared test for goodness of fit with two categories | - | - |
| Example context | Example context | Example context | Example context | Example context | Example context | Example context | Example context |
|---|---|---|---|---|---|---|---|
| Can mental health be predicted from physical health, economic class, and gender? | Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? Use the normal approximation for the sampling distribution of the test statistic. | Is the average mental health score of office workers different from $\mu_0 = 50$? | Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$? Use the normal approximation for the sampling distribution of the test statistic. | Is the median of the differences between the mental health scores before and after an intervention different from 0? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women. |
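As a worked version of the last example context, here is a minimal sketch of the two sample $z$ test with the given population standard deviations $\sigma_1 = 2$ and $\sigma_2 = 2.5$ (the sample scores themselves are made up):

```python
# Minimal sketch: the two sample z test from the example context, with the
# population standard deviations sigma_1 = 2 and sigma_2 = 2.5 given there
# (the sample data themselves are hypothetical).
import numpy as np
from scipy import stats

men   = np.array([49., 51., 50., 48., 52., 50.])
women = np.array([52., 54., 51., 55., 53., 52.])
s1, s2 = 2.0, 2.5
n1, n2 = len(men), len(women)

z = (men.mean() - women.mean()) / np.sqrt(s1**2 / n1 + s2**2 / n2)
print(z, 2 * stats.norm.sf(abs(z)))   # z and two sided p value
```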
| SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | SPSS | n.a. |
|---|---|---|---|---|---|---|---|
| Analyze > Regression > Linear... | Analyze > Nonparametric Tests > Legacy Dialogs > Binomial... | Analyze > Compare Means > Paired-Samples T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > Binomial... | Analyze > Compare Means > One-Sample T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > Binomial... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | - |
| Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | Jamovi | n.a. |
|---|---|---|---|---|---|---|---|
| Regression > Linear Regression | Frequencies > 2 Outcomes - Binomial test | T-Tests > Paired Samples T-Test | Frequencies > 2 Outcomes - Binomial test | T-Tests > One Sample T-Test | Frequencies > 2 Outcomes - Binomial test | T-Tests > Paired Samples T-Test | - |
| Practice questions | Practice questions | Practice questions | Practice questions | Practice questions | Practice questions | Practice questions | Practice questions |
|---|---|---|---|---|---|---|---|