Logistic regression - overview
This page offers a structured overview of logistic regression, presented side by side with three other methods for comparison: Spearman's rho, the one sample Wilcoxon signed-rank test, and the Friedman test.
Logistic regression | Spearman's rho | One sample Wilcoxon signed-rank test | Friedman test |
---|---|---|---|
Independent variables | Variable 1 | Independent variable | Independent/grouping variable |
One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables | One of ordinal level | None | One within subject factor ($\geq 2$ related groups) |
Dependent variable | Variable 2 | Dependent variable | Dependent variable |
One categorical with 2 independent groups | One of ordinal level | One of ordinal level | One of ordinal level |
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis |
Model chi-squared test for the complete regression model: H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$. Wald and likelihood ratio test for individual $\beta_k$: H0: $\beta_k = 0$. Here the $\beta$s are the population regression coefficients. | H0: $\rho_s = 0$. Here $\rho_s$ is the Spearman correlation in the population. The Spearman correlation is a measure of the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level. In words, the null hypothesis would be: H0: there is no monotonic relationship between the two variables in the population. | H0: $m = m_0$. Here $m$ is the population median, and $m_0$ is the population median according to the null hypothesis. | H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups. Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher. |
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis |
Model chi-squared test for the complete regression model: H1: not all population regression coefficients are 0. Wald test for individual $\beta_k$: H1 two sided: $\beta_k \neq 0$; H1 right sided: $\beta_k > 0$; H1 left sided: $\beta_k < 0$. Likelihood ratio test for individual $\beta_k$: H1: $\beta_k \neq 0$. | H1 two sided: $\rho_s \neq 0$; H1 right sided: $\rho_s > 0$; H1 left sided: $\rho_s < 0$ | H1 two sided: $m \neq m_0$; H1 right sided: $m > m_0$; H1 left sided: $m < m_0$ | H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups |
Assumptions | Assumptions | Assumptions | Assumptions |
In the population, the relationship between the independent variables and the log odds of the outcome is linear, and the observations are independent of one another. | Sample of pairs is a simple random sample from the population of pairs; that is, pairs are independent of one another. | The population distribution of the scores is symmetric, and the sample is a simple random sample from the population; that is, observations are independent of one another. | Sample of blocks (usually the subjects) is a simple random sample from the population; that is, blocks are independent of one another. |
Test statistic | Test statistic | Test statistic | Test statistic |
Model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$. Here $D_{null}$ is the deviance of the model without the independent variables and $D_K$ is the deviance of the model with all $K$ independent variables. The Wald statistic can be defined in two ways: $W = \dfrac{b_k^2}{SE^2_{b_k}}$ or $W = \dfrac{b_k}{SE_{b_k}}$, where $b_k$ is the sample regression coefficient and $SE_{b_k}$ its standard error. Likelihood ratio chi-squared test for individual $\beta_k$: $X^2 = D_{K-1} - D_K$, the reduction in deviance when the independent variable of interest is added to the model. | $t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}}$ Here $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores. | Two different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. To compute the test statistics: for each subject, compute the difference between the score and $m_0$; remove the difference scores equal to zero, and denote the number of remaining difference scores by $N_r$; rank the absolute values of the difference scores. $W_1$ is the sum of the ranks of the positive difference scores; $W_2$ is the sum of the signed ranks, i.e. each rank multiplied by the sign of its difference score. | $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$ Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$. Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$. Note: if ties are present in the data, the formula for $Q$ is more complicated. |
Sampling distribution of $X^2$ and of the Wald statistic if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $Q$ if H0 were true |
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model: approximately the chi-squared distribution with $K$ degrees of freedom. Sampling distribution of the Wald statistic: if defined as $\dfrac{b_k^2}{SE^2_{b_k}}$, approximately the chi-squared distribution with 1 degree of freedom; if defined as $\dfrac{b_k}{SE_{b_k}}$, approximately the standard normal distribution. Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$: approximately the chi-squared distribution with 1 degree of freedom. | Approximately the $t$ distribution with $N - 2$ degrees of freedom | Sampling distribution of $W_1$: if $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$ and standard deviation $\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$ follows approximately the standard normal distribution if the null hypothesis were true. Sampling distribution of $W_2$: if $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$ if the null hypothesis were true. Hence, if $N_r$ is large, the standardized test statistic $z = \frac{W_2}{\sigma_{W_2}}$ follows approximately the standard normal distribution if the null hypothesis were true. If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used. Note: if ties are present in the data, the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated. | If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom. For small samples, the exact distribution of $Q$ should be used. |
Significant? | Significant? | Significant? | Significant? |
For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$: check if $X^2$ observed in sample is equal to or larger than the critical value $X^{2*}$, or check if the $p$ value is equal to or smaller than $\alpha$. For the Wald test, use the chi-squared or the standard normal distribution, depending on which of the two definitions of the Wald statistic is used. | Two sided: check if $t$ observed in sample is at least as extreme as the critical value $t^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided and left sided: analogous, with the appropriate one sided critical value and $p$ value. | For large samples, the table for standard normal probabilities can be used. Two sided: check if $z$ observed in sample is at least as extreme as the critical value $z^*$, or check if the two sided $p$ value is equal to or smaller than $\alpha$. Right sided and left sided: analogous. | If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$: check if $X^2$ observed in sample is equal to or larger than the critical value $X^{2*}$, or check if the $p$ value is equal to or smaller than $\alpha$. |
Wald-type approximate $C\%$ confidence interval for $\beta_k$ | n.a. | n.a. | n.a. |
$b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). | - | - | - |
Goodness of fit measure $R^2_L$ | n.a. | n.a. | n.a. |
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$ Here $D_{null}$ is the deviance of the model without the independent variables and $D_K$ is the deviance of the model with all $K$ independent variables. There are several other goodness of fit measures in logistic regression; there is no single agreed upon measure. | - | - | - |
Example context | Example context | Example context | Example context |
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes? | Is there a monotonic relationship between physical health and mental health? | Is the median mental health score of office workers different from $m_0 = 50$? | Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)? |
SPSS | SPSS | SPSS | SPSS |
Analyze > Regression > Binary Logistic... | Analyze > Correlate > Bivariate... | Specify the measurement level of your variable on the Variable View tab, in the column named Measure; then go to Analyze > Nonparametric Tests > One Sample... | Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples... |
Jamovi | Jamovi | Jamovi | Jamovi |
Regression > 2 Outcomes - Binomial | Regression > Correlation Matrix | T-Tests > One Sample T-Test | ANOVA > Repeated Measures ANOVA - Friedman |
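To make the logistic regression column concrete, here is a minimal Python sketch for the diabetes example context, using simulated data (the predictors, sample size, and effect sizes are hypothetical) and the `statsmodels` library. It reproduces the model chi-squared test $X^2 = D_{null} - D_K$, the Wald $z$ statistics, the Wald-type 95% confidence intervals, and $R^2_L$, which `statsmodels` reports as McFadden's pseudo R-squared.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data; variable names and effect sizes are hypothetical.
rng = np.random.default_rng(1)
n = 200
bmi = rng.normal(25, 4, n)
stress = rng.normal(50, 10, n)
gender = rng.integers(0, 2, n)                    # categorical, coded 0/1
log_odds = -9 + 0.3 * bmi + 0.02 * stress + 0.5 * gender
diabetes = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([bmi, stress, gender]))
fit = sm.Logit(diabetes, X).fit(disp=0)

# Model chi-squared test: X^2 = D_null - D_K = 2 * (llf - llnull)
print("X^2 =", fit.llr, " df =", fit.df_model, " p =", fit.llr_pvalue)

# Wald statistic in its z form, z = b_k / SE(b_k), per coefficient
print("Wald z:", fit.params / fit.bse)

# Wald-type approximate 95% confidence intervals: b_k +/- 1.96 * SE(b_k)
print("95% CI:\n", fit.conf_int(alpha=0.05))

# R^2_L = (D_null - D_K) / D_null (McFadden's pseudo R-squared)
print("R^2_L =", fit.prsquared)
```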
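The Spearman column can be checked the same way. This sketch (again with simulated, hypothetical data) computes $r_s$ as the Pearson correlation of the rank scores and the $t$ statistic from the table, then compares the result with `scipy.stats.spearmanr`.

```python
import numpy as np
from scipy import stats

# Simulated, hypothetical health scores
rng = np.random.default_rng(2)
physical = rng.normal(size=50)
mental = physical + rng.normal(size=50)

# r_s equals the Pearson correlation applied to the rank scores
r_s, _ = stats.pearsonr(stats.rankdata(physical), stats.rankdata(mental))

# t statistic and two sided p value from the t distribution with N - 2 df
N = len(physical)
t = r_s * np.sqrt(N - 2) / np.sqrt(1 - r_s**2)
p = 2 * stats.t.sf(abs(t), df=N - 2)
print(r_s, t, p)

# scipy's built-in version should agree closely
print(stats.spearmanr(physical, mental))
```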
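For the one sample Wilcoxon signed-rank test, the sketch below computes $W_1$ and $W_2$ by hand, following the steps in the test statistic row, and shows that their standardized versions coincide (simulated scores; $m_0 = 50$ is taken from the example context). `scipy.stats.wilcoxon` implements the same test when given the difference scores.

```python
import numpy as np
from scipy import stats

# Simulated, hypothetical mental health scores; m0 from the example context
rng = np.random.default_rng(3)
m0 = 50
scores = rng.normal(52, 10, size=40)

d = scores - m0
d = d[d != 0]                         # remove difference scores of zero
N_r = len(d)
ranks = stats.rankdata(np.abs(d))     # rank the absolute difference scores

W1 = ranks[d > 0].sum()               # sum of ranks of positive differences
W2 = (np.sign(d) * ranks).sum()       # sum of signed ranks

# Large-sample normal approximations from the sampling distribution row
z1 = (W1 - N_r * (N_r + 1) / 4) / np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z2 = W2 / np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 6)
print(z1, z2)                         # identical: both statistics agree

print(2 * stats.norm.sf(abs(z1)))     # two sided p value
print(stats.wilcoxon(scores - m0))    # scipy's implementation of the test
```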
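Finally, the Friedman $Q$ statistic can be computed directly from the formula in the table and compared with `scipy.stats.friedmanchisquare` (simulated scores with $N = 60$ subjects and $k = 3$ measurement points, echoing the example in the table).

```python
import numpy as np
from scipy import stats

# Simulated, hypothetical depression scores: N subjects by k time points
rng = np.random.default_rng(4)
N, k = 60, 3
depression = rng.normal(size=(N, k))

# Rank the k scores within each subject ('block')
ranks = np.apply_along_axis(stats.rankdata, 1, depression)
R = ranks.sum(axis=0)                 # R_i: sum of ranks in group i

# Q from the formula in the test statistic row
Q = 12 / (N * k * (k + 1)) * np.sum(R**2) - 3 * N * (k + 1)
p = stats.chi2.sf(Q, df=k - 1)
print(Q, p)

# scipy applies a tie correction; with untied data the results agree
print(stats.friedmanchisquare(*depression.T))
```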