Regression (OLS) - overview
This page gives a structured overview of regression (OLS), compared side by side with several related methods: for each method it lists the variables involved, the hypotheses, assumptions, test statistic, sampling distribution, significance criteria, confidence intervals, and effect sizes. Illustrative Python sketches for each method follow the table.
| Regression (OLS) | $z$ test for the difference between two proportions | Paired sample $t$ test | Sign test | Two sample $z$ test | One sample $t$ test for the mean | Spearman's rho | Logistic regression |
|---|---|---|---|---|---|---|---|
| Independent variables | Independent/grouping variable | Independent variable | Independent variable | Independent/grouping variable | Independent variable | Variable 1 | Independent variables |
| One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | One categorical with 2 independent groups | 2 paired groups | 2 paired groups | One categorical with 2 independent groups | None | One of ordinal level | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables |
| Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Dependent variable | Variable 2 | Dependent variable |
| One quantitative of interval or ratio level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One of ordinal level | One quantitative of interval or ratio level | One quantitative of interval or ratio level | One of ordinal level | One categorical with 2 independent groups |
| Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
| $F$ test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$, or equivalently, H0: the variance explained by all the independent variables together is 0. $t$ test for individual $\beta_k$: H0: $\beta_k = 0$. Here $\beta_k$ is the population regression coefficient for independent variable $k$, and $K$ is the number of independent variables. | H0: $\pi_1 = \pi_2$. Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair. | H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair). If the dependent variable is measured on a continuous scale, this can also be formulated as: H0: the population median of the difference scores is equal to zero. | H0: $\mu_1 = \mu_2$. Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2. | H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. | H0: $\rho_s = 0$. Here $\rho_s$ is the Spearman correlation in the population. The Spearman correlation is a measure of the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level. In words, the null hypothesis would be: H0: there is no monotonic relationship between the two variables in the population. | Model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald test for individual $\beta_k$: H0: $\beta_k = 0$. Likelihood ratio chi-squared test for individual $\beta_k$: H0: $\beta_k = 0$. Here $\beta_k$ is the population regression coefficient for independent variable $k$, and $K$ is the number of independent variables. |
| Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
| $F$ test for the complete regression model: H1: not all population regression coefficients are 0, or equivalently, H1: the variance explained by all the independent variables together is larger than 0. $t$ test for individual $\beta_k$: H1 two sided: $\beta_k \neq 0$; H1 right sided: $\beta_k > 0$; H1 left sided: $\beta_k < 0$ | H1 two sided: $\pi_1 \neq \pi_2$; H1 right sided: $\pi_1 > \pi_2$; H1 left sided: $\pi_1 < \pi_2$ | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair); H1 right sided: P(first exceeds second) > P(second exceeds first); H1 left sided: P(first exceeds second) < P(second exceeds first). For a continuous dependent variable: the population median of the difference scores is different from, larger than, or smaller than zero | H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$ | H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | H1 two sided: $\rho_s \neq 0$; H1 right sided: $\rho_s > 0$; H1 left sided: $\rho_s < 0$ | Model chi-squared test for the complete regression model: H1: not all population regression coefficients are 0. Wald test for individual $\beta_k$: H1 two sided: $\beta_k \neq 0$; H1 right sided: $\beta_k > 0$; H1 left sided: $\beta_k < 0$. Likelihood ratio chi-squared test for individual $\beta_k$: H1: $\beta_k \neq 0$ |
| Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions | Assumptions |
| In the population, the residuals are normally distributed at each combination of values of the independent variables; in the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity); in the population, the relationship between the independent variables and the mean of the dependent variable is linear; the residuals are independent of one another | Sample sizes are large enough for $z$ to be approximately normally distributed; the two samples are independent simple random samples from their respective populations | The difference scores are normally distributed in the population, or the sample size is large; the sample of difference scores is a simple random sample from the population of difference scores | The sample of pairs is a simple random sample from the population of pairs | Within each population, the scores on the dependent variable are normally distributed, or both sample sizes are large; the population standard deviations $\sigma_1$ and $\sigma_2$ are known; the two samples are independent simple random samples from their respective populations | The scores are normally distributed in the population, or the sample size is large; the sample is a simple random sample from the population | The sample of pairs is a simple random sample from the population of pairs. Note: this assumption is only important for the significance test, not for the Spearman correlation itself | In the population, the relationship between the independent variables and the log odds $\ln\left(\dfrac{\pi}{1 - \pi}\right)$ is linear; the observations are independent of one another; the sample is a simple random sample from the population |
| Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic | Test statistic |
| $F$ test for the complete regression model: $F = \dfrac{\mbox{sum of squares regression} / K}{\mbox{sum of squares error} / (N - K - 1)} = \dfrac{\mbox{mean square regression}}{\mbox{mean square error}}$, where $K$ is the number of independent variables and $N$ is the sample size. $t$ test for individual $\beta_k$: $t = \dfrac{b_k}{SE_{b_k}}$, where $b_k$ is the sample regression coefficient for independent variable $k$ and $SE_{b_k}$ is its standard error. Note: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$ | $z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$. Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$ | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores). The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $W =$ number of difference scores that are larger than 0 | $z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$. Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis. The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0. Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$. | $t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$. | $t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}}$. Here $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores. | Model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, where $D_{null}$ is the deviance of the model without the independent variables and $D_K$ is the deviance of the model with all $K$ independent variables. Wald test for individual $\beta_k$: the Wald statistic can be defined in two ways: as $\left(\dfrac{b_k}{SE_{b_k}}\right)^2$ or as $\dfrac{b_k}{SE_{b_k}}$. Likelihood ratio chi-squared test for individual $\beta_k$: $X^2 = D_{K-1} - D_K$, where $D_{K-1}$ is the deviance of the model without independent variable $k$ and $D_K$ is the deviance of the model including independent variable $k$. |
| Sample standard deviation of the residuals $s$ | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
| $s = \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}} = \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}} = \sqrt{\mbox{mean square error}}$ | - | - | - | - | - | - | - |
| Sampling distribution of $F$ and of $t$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $W$ if H0 were true | Sampling distribution of $z$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $X^2$ and of the Wald statistic if H0 were true |
| Sampling distribution of $F$: $F$ distribution with $K$ numerator degrees of freedom and $N - K - 1$ denominator degrees of freedom. Sampling distribution of $t$: $t$ distribution with $N - K - 1$ degrees of freedom. | Approximately the standard normal distribution | $t$ distribution with $N - 1$ degrees of freedom | The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $z = \dfrac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$ follows approximately the standard normal distribution if the null hypothesis were true. | Standard normal distribution | $t$ distribution with $N - 1$ degrees of freedom | Approximately the $t$ distribution with $N - 2$ degrees of freedom | Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: chi-squared distribution with $K$ degrees of freedom. Sampling distribution of the Wald statistic: if defined as $\left(\dfrac{b_k}{SE_{b_k}}\right)^2$: approximately the chi-squared distribution with 1 degree of freedom; if defined as $\dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution. Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$: chi-squared distribution with 1 degree of freedom. |
| Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? | Significant? |
| $F$ test: check if the $F$ value observed in the sample is equal to or larger than the critical value $F^*$, or check if the $p$ value is equal to or smaller than $\alpha$ (the $F$ test is always right sided). $t$ test: two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$; right sided: check if $t \geq t^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$; left sided: check if $t \leq -t^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | Two sided: check if the $z$ value observed in the sample is at least as extreme as the critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $z \geq z^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $z \leq -z^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geq t^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $t \leq -t^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | If $n$ is small, the table for the binomial distribution should be used: two sided: check if the two sided $p$ value is equal to or smaller than $\alpha$; right sided or left sided: check if the one sided $p$ value is equal to or smaller than $\alpha$. If $n$ is large, the table for standard normal probabilities can be used, applying the same decision rules to the standardized test statistic $z$ | Two sided: check if the $z$ value observed in the sample is at least as extreme as the critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $z \geq z^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $z \leq -z^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geq t^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $t \leq -t^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | Two sided: check if the $t$ value observed in the sample is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided: check if $t \geq t^*$, or if the right sided $p$ value is equal to or smaller than $\alpha$. Left sided: check if $t \leq -t^*$, or if the left sided $p$ value is equal to or smaller than $\alpha$ | For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$: check if the $X^2$ value observed in the sample is equal to or larger than the critical value, or check if the $p$ value is equal to or smaller than $\alpha$ (these tests are always right sided). For the Wald test: if defined as $\left(\dfrac{b_k}{SE_{b_k}}\right)^2$: the same decision rule as for the chi-squared tests; if defined as $\dfrac{b_k}{SE_{b_k}}$: the same decision rules as for a $z$ test |
| $C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$ | Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$ | $C\%$ confidence interval for $\mu$ | n.a. | $C\%$ confidence interval for $\mu_1 - \mu_2$ | $C\%$ confidence interval for $\mu$ | n.a. | Wald-type approximate $C\%$ confidence interval for $\beta_k$ |
| Confidence interval for $\beta_k$: $b_k \pm t^* \times SE_{b_k}$, where the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. The confidence interval for $\beta_k$ can also be used as a significance test. Similar $t$-based intervals exist for the population mean $\mu_y$ given particular values of the independent variables, and for a new observation $y_{new}$; the prediction interval is wider, because it also incorporates the variation of individual scores around the regression line. | Regular (large sample): $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | - | $(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test. | $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$, where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test. | - | $b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval). |
| Effect size | n.a. | Effect size | n.a. | n.a. | Effect size | n.a. | Goodness of fit measure $R^2_L$ |
| Complete model: proportion variance explained $R^2 = \dfrac{\mbox{sum of squares regression}}{\mbox{sum of squares total}}$, the proportion of the variance of the dependent variable that is explained by all the independent variables together. | - | Cohen's $d$: standardized difference between the sample mean of the difference scores and $\mu_0$: $d = \dfrac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$ | - | - | Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $d = \dfrac{\bar{y} - \mu_0}{s}$. Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$ | - | $R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$. There are several other goodness of fit measures in logistic regression; there is no single agreed upon measure of goodness of fit. |
| Visual representation | n.a. | Visual representation | n.a. | Visual representation | Visual representation | n.a. | n.a. |
| (figure: regression equations) | - | (figure) | - | (figure) | (figure) | - | - |
| ANOVA table | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. | n.a. |
| (table) | - | - | - | - | - | - | - |
| n.a. | Equivalent to | Equivalent to | Equivalent to | n.a. | n.a. | n.a. | n.a. |
| - | When testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels. | One sample $t$ test on the difference scores, with $\mu_0 = 0$. | Two sided sign test is equivalent to a two sided binomial test on the number of positive difference scores, with $n$ = number of non-zero difference scores and test proportion $\pi_0 = 0.5$. | - | - | - | - |
| Example context | Example context | Example context | Example context | Example context | Example context | Example context | Example context |
| Can mental health be predicted from physical health, economic class, and gender? | Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic. | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Do people tend to score higher on mental health after a mindfulness course? | Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women. | Is the average mental health score of office workers different from $\mu_0 = 50$? | Is there a monotonic relationship between physical health and mental health? | Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes? |
| SPSS | SPSS | SPSS | SPSS | n.a. | SPSS | SPSS | SPSS |
| Analyze > Regression > Linear... | SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs... | Analyze > Compare Means > Paired-Samples T Test... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | - | Analyze > Compare Means > One-Sample T Test... | Analyze > Correlate > Bivariate... | Analyze > Regression > Binary Logistic... |
| Jamovi | Jamovi | Jamovi | Jamovi | n.a. | Jamovi | Jamovi | Jamovi |
| Regression > Linear Regression | Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association | T-Tests > Paired Samples T-Test | Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman | - | T-Tests > One Sample T-Test | Regression > Correlation Matrix | Regression > 2 Outcomes - Binomial |
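Worked examples in Python

The sketches below show how each method in the table could be run in Python. They assume scipy and statsmodels are installed; all data, variable names, and seeds are simulated placeholders, so the printed numbers are illustrative only. First, regression (OLS): a minimal sketch of the $F$ test for the complete model, the $t$ tests for individual $\beta_k$, confidence intervals, $R^2$, and the residual standard deviation $s$.

```python
import numpy as np
import statsmodels.api as sm

# Simulated placeholder data: N = 50 cases, K = 2 quantitative predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=50)

fit = sm.OLS(y, sm.add_constant(X)).fit()  # add_constant adds the intercept column

print(fit.fvalue, fit.f_pvalue)   # F test for the complete regression model
print(fit.tvalues, fit.pvalues)   # t tests for the individual coefficients b_k
print(fit.conf_int(alpha=0.05))   # 95% confidence intervals for beta_k
print(fit.rsquared)               # proportion variance explained R^2
print(np.sqrt(fit.mse_resid))     # sample standard deviation of the residuals s
```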
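For the $z$ test for the difference between two proportions, statsmodels' `proportions_ztest` follows the pooled-proportion formula from the table; the success counts and sample sizes below are made up. The "regular (large sample)" confidence interval is computed by hand from the table's formula.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

count = np.array([45, 60])   # numbers of successes X_1, X_2 (placeholder data)
nobs = np.array([100, 120])  # sample sizes n_1, n_2

# z test based on the pooled sample proportion p = (X_1 + X_2) / (n_1 + n_2)
z, p_value = proportions_ztest(count, nobs, alternative='two-sided')
print(z, p_value)

# Regular (large sample) 95% confidence interval for pi_1 - pi_2
p1, p2 = count / nobs
se = np.sqrt(p1 * (1 - p1) / nobs[0] + p2 * (1 - p2) / nobs[1])
print((p1 - p2) + np.array([-1, 1]) * stats.norm.ppf(0.975) * se)
```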
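For the paired sample $t$ test, `scipy.stats.ttest_rel` computes $t$ and its $p$ value directly from the paired scores; Cohen's $d$ and the confidence interval for $\mu$ follow the table's formulas. `before` and `after` are simulated placeholder scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
before = rng.normal(50, 10, size=30)         # placeholder pre-intervention scores
after = before + rng.normal(2, 5, size=30)   # placeholder post-intervention scores

t, p_value = stats.ttest_rel(after, before)  # tests H0: mu = 0 for the difference scores
print(t, p_value)

d_scores = after - before
print(d_scores.mean() / d_scores.std(ddof=1))  # Cohen's d for the difference scores

# 95% confidence interval for mu: y_bar +/- t* x s / sqrt(N)
print(stats.t.interval(0.95, df=len(d_scores) - 1,
                       loc=d_scores.mean(), scale=stats.sem(d_scores)))
```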
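The sign test reduces to a binomial test on $W$, the number of positive difference scores, with $P = 0.5$; `scipy.stats.binomtest` gives the exact binomial $p$ value, and the large-sample $z$ from the table is recomputed by hand. Zero differences are dropped, as is conventional; the data are again simulated placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.normal(50, 10, size=30)
after = before + rng.normal(2, 5, size=30)

diff = after - before
diff = diff[diff != 0]        # ties (zero differences) are conventionally dropped
W = int((diff > 0).sum())     # number of positive difference scores
n = len(diff)

# Exact binomial test of "people tend to score higher after" (right sided)
res = stats.binomtest(W, n=n, p=0.5, alternative='greater')
print(W, n, res.pvalue)

# Large-sample normal approximation from the table
z = (W - n * 0.5) / np.sqrt(n * 0.5 * (1 - 0.5))
print(z, stats.norm.sf(z))    # right sided p value
```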
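scipy has no built-in two sample $z$ test with known population variances, so this sketch implements the table's formula directly; $\sigma_1 = 2$ and $\sigma_2 = 2.5$ are taken from the example context, and the scores are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y1 = rng.normal(50, 2.0, size=40)   # placeholder scores for group 1
y2 = rng.normal(49, 2.5, size=45)   # placeholder scores for group 2
sigma1, sigma2 = 2.0, 2.5           # known population standard deviations

se = np.sqrt(sigma1**2 / len(y1) + sigma2**2 / len(y2))
z = (y1.mean() - y2.mean()) / se
print(z, 2 * stats.norm.sf(abs(z)))  # z and two sided p value

# 95% confidence interval for mu_1 - mu_2
print((y1.mean() - y2.mean()) + np.array([-1, 1]) * stats.norm.ppf(0.975) * se)
```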
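For the one sample $t$ test, `scipy.stats.ttest_1samp` tests H0: $\mu = \mu_0$; Cohen's $d$ and the confidence interval follow the table. The office-worker scores are simulated, with $\mu_0 = 50$ taken from the example context.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
scores = rng.normal(52, 8, size=25)   # placeholder mental health scores
mu0 = 50

t, p_value = stats.ttest_1samp(scores, popmean=mu0)
print(t, p_value)

print((scores.mean() - mu0) / scores.std(ddof=1))  # Cohen's d

# 95% confidence interval for mu
print(stats.t.interval(0.95, df=len(scores) - 1,
                       loc=scores.mean(), scale=stats.sem(scores)))
```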
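`scipy.stats.spearmanr` returns $r_s$ and the $p$ value of its significance test; the $t$ statistic from the table is recomputed by hand to show the correspondence. Both variables are simulated so that they are monotonically related.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
physical = rng.normal(size=60)
mental = physical + rng.normal(size=60)   # placeholder, monotonically related

r_s, p_value = stats.spearmanr(physical, mental)
print(r_s, p_value)

# t statistic from the table: t = r_s * sqrt(N - 2) / sqrt(1 - r_s^2)
N = len(physical)
t = r_s * np.sqrt(N - 2) / np.sqrt(1 - r_s**2)
print(t, 2 * stats.t.sf(abs(t), df=N - 2))  # two sided p value
```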
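Finally, for logistic regression, statsmodels' `Logit` provides the model chi-squared test (`llr`, `llr_pvalue`), Wald $z$ statistics, Wald-type confidence intervals, and McFadden's pseudo-$R^2$ (`prsquared`), which equals the table's $R^2_L = (D_{null} - D_K)/D_{null}$. The three predictors and the 0/1 outcome are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))    # e.g. BMI, stress level, gender (placeholders)
true_logit = -0.5 + X @ np.array([0.8, 0.5, -0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))  # 0/1 diagnosis outcome

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

print(fit.llr, fit.llr_pvalue)   # model chi-squared test: X^2 = D_null - D_K, df = K
print(fit.tvalues, fit.pvalues)  # Wald z = b_k / SE_bk and two sided p values
print(fit.conf_int())            # Wald-type 95% confidence intervals for beta_k
print(fit.prsquared)             # McFadden pseudo R^2 = (D_null - D_K) / D_null
```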