Logistic regression - overview

This page offers a structured overview and comparison of the selected methods.

Methods compared:
  • Logistic regression
  • Spearman's rho
  • Mann-Whitney-Wilcoxon test
Independent variable(s)

Logistic regression: One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Spearman's rho: One of ordinal level
Mann-Whitney-Wilcoxon test: One categorical with 2 independent groups

Dependent variable

Logistic regression: One categorical with 2 independent groups
Spearman's rho: One of ordinal level
Mann-Whitney-Wilcoxon test: One of ordinal level

Null hypothesis

Logistic regression:
Model chi-squared test for the complete regression model:
  • $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $
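
For illustration, here is a minimal sketch of such a model fit in Python with statsmodels, on simulated data; the variable names (bmi, stress, gender, diabetes) are hypothetical and simply mirror the example context further down this page.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data; all variable names are made up for illustration
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "bmi": rng.normal(25, 4, n),
    "stress": rng.normal(50, 10, n),
    "gender": rng.integers(0, 2, n),  # code (dummy) variable
})
log_odds = -10 + 0.3 * df["bmi"] + 0.05 * df["stress"] + 0.5 * df["gender"]
df["diabetes"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Fit ln(pi / (1 - pi)) = beta_0 + beta_1*bmi + beta_2*stress + beta_3*gender
X = sm.add_constant(df[["bmi", "stress", "gender"]])
model = sm.Logit(df["diabetes"], X).fit()

print(model.summary())       # b_k, SE_{b_k}, Wald z and p values
print(np.exp(model.params))  # odds ratios e^{b_k}
```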

Spearman's rho:
$\rho_s = 0$
$\rho_s$ is the unknown Spearman correlation in the population.

In words:
there is no monotonic relationship between the two variables in the population

Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
  • The median in population 1 is equal to the median in population 2
Else:
Formulation 1:
  • The scores in population 1 are not systematically higher or lower than the scores in population 2
Formulation 2:
  • P(an observation from population 1 exceeds an observation from population 2) = P(an observation from population 2 exceeds an observation from population 1)
Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one given in your textbook or by your teacher.

Alternative hypothesis

Logistic regression:
Model chi-squared test for the complete regression model:
  • not all population regression coefficients are 0
Wald test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$
    If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • right sided: $\beta_k > 0$
  • left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$

Spearman's rho:
Two sided: $\rho_s \neq 0$
Right sided: $\rho_s > 0$
Left sided: $\rho_s < 0$

Mann-Whitney-Wilcoxon test:
If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in both populations:
  • Two sided: the median in population 1 is not equal to the median in population 2
  • Right sided: the median in population 1 is larger than the median in population 2
  • Left sided: the median in population 1 is smaller than the median in population 2
Else:
Formulation 1:
  • Two sided: The scores in population 1 are systematically higher or lower than the scores in population 2
  • Right sided: The scores in population 1 are systematically higher than the scores in population 2
  • Left sided: The scores in population 1 are systematically lower than the scores in population 2
Formulation 2:
  • Two sided: P(an observation from population 1 exceeds an observation from population 2) $\neq$ P(an observation from population 2 exceeds an observation from population 1)
  • Right sided: P(an observation from population 1 exceeds an observation from population 2) > P(an observation from population 2 exceeds an observation from population 1)
  • Left sided: P(an observation from population 1 exceeds an observation from population 2) < P(an observation from population 2 exceeds an observation from population 1)

Assumptions

Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers

Spearman's rho:
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.

Mann-Whitney-Wilcoxon test:
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Test statistic

Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
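
As a sketch of how these three test statistics can be computed, the following Python fragment continues the hypothetical statsmodels fit (`model`, `df`) from the sketch further up the page; it relies on deviance being $-2$ times the log-likelihood, which statsmodels stores.

```python
import scipy.stats as stats
import statsmodels.api as sm

# Deviance = -2 * log-likelihood
D_null = -2 * model.llnull  # null deviance
D_K = -2 * model.llf        # model deviance

# Model chi-squared test for the complete regression model
X2_model = D_null - D_K
p_model = stats.chi2.sf(X2_model, df=model.df_model)  # df = K

# Wald test for an individual coefficient, in both forms
b = model.params["bmi"]
se = model.bse["bmi"]
wald_z = b / se              # compare to the standard normal distribution
wald_chi2 = wald_z ** 2      # compare to chi-squared with 1 df (SPSS's form)

# Likelihood ratio chi-squared test for the same coefficient:
# refit without 'bmi' and compare deviances, X^2 = D_{K-1} - D_K
restricted = sm.Logit(df["diabetes"],
                      sm.add_constant(df[["stress", "gender"]])).fit()
X2_lr = -2 * restricted.llf - D_K
p_lr = stats.chi2.sf(X2_lr, df=1)

print(X2_model, p_model, wald_z, wald_chi2, X2_lr, p_lr)
```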

Spearman's rho:
$t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}}$
where $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores.
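
A small Python sketch, on made-up data, illustrating that this $t$ statistic can be reproduced by hand from the rank scores; scipy's spearmanr should give the same result.

```python
import numpy as np
import scipy.stats as stats

# Made-up paired scores for illustration
rng = np.random.default_rng(2)
x = rng.normal(size=30)
y = x + rng.normal(size=30)
N = len(x)

# r_s is the Pearson correlation applied to the rank scores
r_s, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))
t_stat = r_s * np.sqrt(N - 2) / np.sqrt(1 - r_s ** 2)
p_two_sided = 2 * stats.t.sf(abs(t_stat), df=N - 2)

print(r_s, t_stat, p_two_sided)
print(stats.spearmanr(x, y))  # should reproduce r_s and the p value
```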

Mann-Whitney-Wilcoxon test:
Two different types of test statistics can be used; both will result in the same test outcome. The first is the Wilcoxon rank sum statistic $W$:
  • $W$ = the sum of the ranks of the observations in group 1, after ranking all $n_1 + n_2$ observations together
The second type of test statistic is the Mann-Whitney $U$ statistic:
  • $U = W - \dfrac{n_1(n_1 + 1)}{2}$
where $n_1$ is the sample size of group 1

Note: we could just as well base $W$ and $U$ on group 2. This would only 'flip' the right and left sided alternative hypotheses. Also, tables with critical values for $U$ are often based on the smaller of $U$ for group 1 and for group 2.
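
A short Python sketch of $W$ and $U$ on made-up scores; scipy's mannwhitneyu, which reports $U$ based on the first group, is used as a check.

```python
import numpy as np
import scipy.stats as stats

# Made-up scores for two independent groups
group1 = np.array([12, 15, 9, 20, 17.5])
group2 = np.array([8, 11, 13, 7])
n1, n2 = len(group1), len(group2)

# Rank all n1 + n2 observations together; W is the group 1 rank sum
ranks = stats.rankdata(np.concatenate([group1, group2]))
W = ranks[:n1].sum()
U = W - n1 * (n1 + 1) / 2

print(W, U)
print(stats.mannwhitneyu(group1, group2, alternative="two-sided"))  # reports U
```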

Sampling distribution if $H_0$ were true

Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately a chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately a standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom

Spearman's rho:
Approximately a $t$ distribution with $N - 2$ degrees of freedom

Mann-Whitney-Wilcoxon test:
Sampling distribution of $W$:
For large samples, $W$ is approximately normally distributed with mean $\mu_W$ and standard deviation $\sigma_W$ if the null hypothesis were true. Here $$ \begin{aligned} \mu_W &= \dfrac{n_1(n_1 + n_2 + 1)}{2}\\ \sigma_W &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} $$ Hence, for large samples, the standardized test statistic $$ z_W = \dfrac{W - \mu_W}{\sigma_W} $$ follows approximately a standard normal distribution if the null hypothesis were true. Note that if your $W$ value is based on group 2, $\mu_W$ becomes $\frac{n_2(n_1 + n_2 + 1)}{2}$.

Sampling distribution of $U$:
For large samples, $U$ is approximately normally distributed with mean $\mu_U$ and standard deviation $\sigma_U$ if the null hypothesis were true. Here $$ \begin{aligned} \mu_U &= \dfrac{n_1 n_2}{2}\\ \sigma_U &= \sqrt{\dfrac{n_1 n_2(n_1 + n_2 + 1)}{12}} \end{aligned} $$ Hence, for large samples, the standardized test statistic $$ z_U = \dfrac{U - \mu_U}{\sigma_U} $$ follows approximately a standard normal distribution if the null hypothesis were true.

For small samples, the exact distribution of $W$ or $U$ should be used.

Note: the formulas for the standard deviations $\sigma_W$ and $\sigma_U$ are more complicated if ties are present in the data.
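
A sketch of the normal approximation for $W$, reusing `W`, `n1` and `n2` from the earlier sketch; note these made-up samples are far too small for the approximation to be trusted, so this only mirrors the formulas above.

```python
import numpy as np
import scipy.stats as stats

# Normal approximation for W (no ties assumed); W, n1, n2 come from the
# earlier Mann-Whitney-Wilcoxon sketch
mu_W = n1 * (n1 + n2 + 1) / 2
sigma_W = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z_W = (W - mu_W) / sigma_W

p_two_sided = 2 * stats.norm.sf(abs(z_W))
print(z_W, p_two_sided)
```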

Significant?

Logistic regression:
For the model chi-squared test for the complete regression model and likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.

Spearman's rho:
  • Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^*$, or find the two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^*$, or find the right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^*$, or find the left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

Mann-Whitney-Wilcoxon test:
For large samples, the table for standard normal probabilities can be used with the standardized test statistic $z$:
  • Two sided: check if $z$ observed in sample is at least as extreme as critical value $z^*$, or find the two sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Right sided: check if $z$ observed in sample is equal to or larger than critical value $z^*$, or find the right sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
  • Left sided: check if $z$ observed in sample is equal to or smaller than critical value $z^*$, or find the left sided $p$ value corresponding to observed $z$ and check if it is equal to or smaller than $\alpha$
For small samples, the exact distribution of $W$ or $U$ should be used.

Wald-type approximate $C\%$ confidence interval for $\beta_k$

Logistic regression:
$b_k \pm z^* \times SE_{b_k}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
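
A short sketch of this interval in Python, reusing the hypothetical `b` and `se` for the 'bmi' coefficient from the test statistic sketch above; exponentiating the endpoints gives an interval for the odds ratio.

```python
import numpy as np
import scipy.stats as stats

# Reusing b and se from the earlier logistic regression sketch
C = 95
z_star = stats.norm.ppf(0.5 + C / 200)  # 1.96 for a 95% interval
ci_beta = (b - z_star * se, b + z_star * se)
ci_odds = tuple(np.exp(ci_beta))        # interval for the odds ratio e^{beta_k}
print(ci_beta, ci_odds)
```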
Spearman's rho: n.a.
Mann-Whitney-Wilcoxon test: n.a.

Goodness of fit measure $R^2_L$

Logistic regression:
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure is generally agreed upon.
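
A sketch of $R^2_L$ computed from the deviances of the hypothetical model above; this quantity coincides with McFadden's pseudo R-squared, which statsmodels reports as `prsquared`.

```python
# Reusing D_null, D_K and model from the earlier logistic regression sketch
R2_L = (D_null - D_K) / D_null
print(R2_L, model.prsquared)  # the two should agree
```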
Spearman's rho: n.a.
Mann-Whitney-Wilcoxon test: n.a.

Equivalent to

Logistic regression: n.a.
Spearman's rho: n.a.
Mann-Whitney-Wilcoxon test: if there are no ties in the data, the two sided Mann-Whitney-Wilcoxon test is equivalent to the Kruskal-Wallis test with an independent variable with 2 levels ($I = 2$)
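
A quick Python check of this equivalence on made-up data without ties, assuming a recent scipy: the asymptotic, uncorrected Mann-Whitney-Wilcoxon p value should equal the Kruskal-Wallis p value, because $z^2$ follows a chi-squared distribution with 1 degree of freedom.

```python
import numpy as np
import scipy.stats as stats

# Made-up scores without ties
g1 = np.array([12, 15, 9, 20, 17.5])
g2 = np.array([8, 11, 13, 7])

mww = stats.mannwhitneyu(g1, g2, alternative="two-sided",
                         method="asymptotic", use_continuity=False)
kw = stats.kruskal(g1, g2)
print(mww.pvalue, kw.pvalue)  # should match: z^2 is chi-squared with 1 df
```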

Example context

Logistic regression: Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
Spearman's rho: Is there a monotonic relationship between physical health and mental health?
Mann-Whitney-Wilcoxon test: Do men tend to score higher on socioeconomic status than women?

SPSS

Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)

Spearman's rho:
Analyze > Correlate > Bivariate...
  • Put your two variables in the box below Variables
  • Under Correlation Coefficients, select Spearman

Mann-Whitney-Wilcoxon test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples...
  • Put your dependent variable in the box below Test Variable List and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK

Jamovi

Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'

Spearman's rho:
Regression > Correlation Matrix
  • Put your two variables in the white box at the right
  • Under Correlation Coefficients, select Spearman
  • Under Hypothesis, select your alternative hypothesis

Mann-Whitney-Wilcoxon test:
T-Tests > Independent Samples T-Test
  • Put your dependent variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Mann-Whitney U
  • Under Hypothesis, select your alternative hypothesis