Logistic regression - overview
This page offers structured overviews of the selected methods, presented side by side for comparison.
Logistic regression | Marginal Homogeneity test / Stuart-Maxwell test | Wilcoxon signed-rank test |
---|---|---
Independent variables | Independent variable | Independent variable
One or more quantitative variables of interval or ratio level and/or one or more categorical variables with independent groups, transformed into code variables (see the coding sketch below) | 2 paired groups | 2 paired groups
Dependent variable | Dependent variable | Dependent variable
One categorical variable with 2 independent groups | One categorical variable with $J$ independent groups ($J \geqslant 2$) | One quantitative variable of interval or ratio level
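The phrase "transformed into code variables" in the independent variables cell refers to dummy coding of categorical predictors. As a minimal sketch (the data and variable names here are made up and not part of the original overview), such code variables could be created in Python with pandas:

```python
import pandas as pd

# Hypothetical data set: one quantitative and one categorical independent
# variable, plus a binary dependent variable.
df = pd.DataFrame({
    "bmi": [21.4, 30.2, 27.8, 24.1],
    "gender": ["male", "female", "female", "male"],
    "diabetes": [0, 1, 0, 0],
})

# A categorical variable with g groups is replaced by g - 1 indicator
# (dummy) variables; pandas' get_dummies does this directly.
coded = pd.get_dummies(df, columns=["gender"], drop_first=True)
print(coded)
```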
Null hypothesis | Null hypothesis | Null hypothesis
Model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald test and likelihood ratio chi-squared test for an individual $\beta_k$: H0: $\beta_k = 0$ | H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group. Here $\pi_j$ is the population proportion in category $j$. | H0: $m = 0$. Here $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
Model chi-squared test for the complete regression model: H1: not all population regression coefficients are 0, i.e. $\beta_k \neq 0$ for at least one $k$. Wald test and likelihood ratio chi-squared test for an individual $\beta_k$: H1: $\beta_k \neq 0$ | H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group | H1 two sided: $m \neq 0$; H1 right sided: $m > 0$; H1 left sided: $m < 0$
Assumptions | Assumptions | Assumptions
In the population, the relationship between the independent variables and the log odds $\ln \left( \frac{\pi}{1 - \pi} \right)$ is linear; the observations are independent of one another | Sample of pairs is a simple random sample from the population of pairs; that is, pairs are independent of one another | The population distribution of the difference scores is symmetric; the sample of difference scores is a simple random sample from the population of difference scores, so difference scores are independent of one another
Test statistic | Test statistic | Test statistic
Model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, where $D_{null}$ is the deviance of the model without the independent variables and $D_K$ is the deviance of the model with all $K$ independent variables. The Wald statistic can be defined in two ways: as $\left( \dfrac{b_k}{SE_{b_k}} \right)^2$ or as $\dfrac{b_k}{SE_{b_k}}$, where $b_k$ is the estimated regression coefficient and $SE_{b_k}$ its standard error. Likelihood ratio chi-squared test for an individual $\beta_k$: $X^2 = D_{K-1} - D_K$, the decrease in deviance when the independent variable of interest is added to the model. | Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute each test statistic: compute the difference score for each pair and remove pairs with a difference score of zero (the number of remaining pairs is $N_r$); rank the absolute values of the difference scores; then $W_1$ is the sum of the ranks of the positive difference scores, and $W_2$ is the sum of the signed ranks, i.e. the sum of the ranks of the positive difference scores minus the sum of the ranks of the negative difference scores (see the sketch below).
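As a rough sketch of the Wilcoxon steps described above (the paired scores are made up for illustration), $N_r$, $W_1$ and $W_2$ could be computed as follows:

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical paired scores (e.g., before and after an intervention).
before = np.array([5.0, 6.5, 4.0, 7.0, 5.5, 6.0])
after = np.array([6.0, 6.5, 5.5, 6.0, 7.0, 7.5])

# Step 1: difference scores; pairs with a zero difference are dropped.
d = before - after
d = d[d != 0]
n_r = len(d)  # N_r: number of non-zero difference scores

# Step 2: rank the absolute difference scores (ties receive average ranks).
ranks = rankdata(np.abs(d))

# W1: sum of the ranks of the positive difference scores.
w1 = ranks[d > 0].sum()

# W2: sum of the signed ranks.
w2 = np.sum(np.sign(d) * ranks)

print(n_r, w1, w2)
```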
Sampling distribution of $X^2$ and of the Wald statistic if H0 were true | Sampling distribution of the test statistic if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: approximately the chi-squared distribution with $K$ degrees of freedom, where $K$ is the number of independent variables. Sampling distribution of the Wald statistic: if defined as $\left( \dfrac{b_k}{SE_{b_k}} \right)^2$, approximately the chi-squared distribution with 1 degree of freedom; if defined as $\dfrac{b_k}{SE_{b_k}}$, approximately the standard normal distribution. Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for an individual $\beta_k$: approximately the chi-squared distribution with 1 degree of freedom. | Approximately the chi-squared distribution with $J - 1$ degrees of freedom | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated.
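The large-sample normal approximation for $W_1$ given above can be written out directly; this sketch plugs in hypothetical values for $N_r$ and $W_1$ and assumes no ties:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical values: W1 computed as above from N_r non-zero difference scores.
n_r, w1 = 40, 530.0

# Mean and standard deviation of W1 under H0 (formulas from the table above).
mu_w1 = n_r * (n_r + 1) / 4
sigma_w1 = np.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 24)

# Standardized test statistic and its two sided p value from the standard
# normal distribution (valid when N_r is large and there are no ties).
z = (w1 - mu_w1) / sigma_w1
p_two_sided = 2 * norm.sf(abs(z))
print(round(z, 3), round(p_two_sided, 4))
```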
Significant? | Significant? | Significant?
For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for an individual $\beta_k$: check whether the $X^2$ observed in the sample is equal to or larger than the critical value $X^{2*}$, or find the p value corresponding to the observed $X^2$ and check whether it is equal to or smaller than $\alpha$. For the Wald test, use the chi-squared distribution with 1 degree of freedom or the standard normal table, depending on how the Wald statistic is defined (see above). | If we denote the test statistic as $X^2$: check whether the $X^2$ observed in the sample is equal to or larger than the critical value $X^{2*}$, or find the p value corresponding to the observed $X^2$ and check whether it is equal to or smaller than $\alpha$. | For large samples, the table for standard normal probabilities can be used. Two sided: check whether the observed $z$ value falls in the rejection region, or find the two sided p value corresponding to the observed $z$ and check whether it is equal to or smaller than $\alpha$. Right sided and left sided tests work analogously, using the corresponding one sided critical value or p value.
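To make the critical-value and p-value comparisons concrete for the chi-squared based tests, here is a small sketch with made-up numbers; scipy's chi2 distribution supplies both quantities:

```python
from scipy.stats import chi2

# Hypothetical observed model chi-squared with K = 3 independent variables,
# tested at significance level alpha = 0.05.
x2_observed, k, alpha = 11.7, 3, 0.05

# Critical value approach: reject H0 if the observed X^2 is at least X^2*.
x2_critical = chi2.ppf(1 - alpha, df=k)

# p value approach: reject H0 if the p value is at most alpha.
p_value = chi2.sf(x2_observed, df=k)

print(x2_critical, p_value, p_value <= alpha)
```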
Wald-type approximate $C\%$ confidence interval for $\beta_k$ | n.a. | n.a.
$b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). | - | -
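A quick sketch of the confidence interval formula above, with made-up values for $b_k$ and $SE_{b_k}$; scipy's normal quantile function gives the critical value $z^*$:

```python
from scipy.stats import norm

# Hypothetical estimate and standard error for one regression coefficient.
b_k, se_bk = 0.042, 0.015
c = 95  # confidence level in percent

# Critical value z*: the area C/100 lies between -z* and z*.
z_star = norm.ppf(0.5 + c / 200)  # e.g. 1.96 for C = 95

lower, upper = b_k - z_star * se_bk, b_k + z_star * se_bk
print(round(z_star, 3), (round(lower, 4), round(upper, 4)))
```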
Goodness of fit measure $R^2_L$ | n.a. | n.a.
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$, where $D_{null}$ is the deviance of the model without the independent variables and $D_K$ is the deviance of the model with all $K$ independent variables. There are several other goodness of fit measures for logistic regression; no single measure is generally agreed upon. | - | -
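A sketch of how $R^2_L$ could be computed in Python; the data are randomly generated for illustration, and the deviances are obtained from the log-likelihoods reported by statsmodels' Logit results ($D = -2 \ln L$):

```python
import numpy as np
import statsmodels.api as sm

# Randomly generated illustration data: two predictors and a binary outcome.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
logit_true = 0.5 * x[:, 0] - 0.8 * x[:, 1]
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-logit_true))).astype(int)

# Fit the logistic regression model with an intercept.
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

# Deviances of the null (intercept-only) model and the full model.
d_null = -2 * fit.llnull
d_k = -2 * fit.llf

r2_l = (d_null - d_k) / d_null
print(round(r2_l, 3))
```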
Example context | Example context | Example context
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes? | Subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types of mayonnaise they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best? | Is the median of the differences between the mental health scores before and after an intervention different from 0?
SPSS | SPSS | SPSS
Analyze > Regression > Binary Logistic... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples... | Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
Jamovi | n.a. | Jamovi
Regression > 2 Outcomes - Binomial | - | T-Tests > Paired Samples T-Test