Logistic regression - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

Methods compared:
  • Logistic regression
  • Two sample $t$ test (equal variances not assumed)
  • One sample Wilcoxon signed-rank test
Independent variable(s)
  • Logistic regression: One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
  • Two sample $t$ test: One categorical with 2 independent groups
  • One sample Wilcoxon signed-rank test: None
Dependent variable
  • Logistic regression: One categorical with 2 independent groups
  • Two sample $t$ test: One quantitative of interval or ratio level
  • One sample Wilcoxon signed-rank test: One of ordinal level
Null hypothesis
Logistic regression:
Model chi-squared test for the complete regression model:
  • $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $
Two sample $t$ test:
$\mu_1 = \mu_2$
Here, $\mu_1$ is the unknown mean in population 1 and $\mu_2$ is the unknown mean in population 2.
One sample Wilcoxon signed-rank test:
$m = m_0$
Here, $m$ is the unknown population median and $m_0$ is the population median according to the null hypothesis.
Alternative hypothesis
Logistic regression:
Model chi-squared test for the complete regression model:
  • not all population regression coefficients are 0
Wald test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$
    If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • right sided: $\beta_k > 0$
  • left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$
Two sample $t$ test:
  • Two sided: $\mu_1 \neq \mu_2$
  • Right sided: $\mu_1 > \mu_2$
  • Left sided: $\mu_1 < \mu_2$
One sample Wilcoxon signed-rank test:
  • Two sided: $m \neq m_0$
  • Right sided: $m > m_0$
  • Left sided: $m < m_0$
Assumptions
Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, and group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
One sample Wilcoxon signed-rank test:
  • The population distribution of the scores is symmetric
  • Sample is a simple random sample from the population. That is, observations are independent of one another
Test statistic
Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the deviance of the model with independent variable $k$ excluded; $D_{K}$ is the deviance of the model with independent variable $k$ included.
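As an illustration, here is a minimal Python sketch of these three tests on simulated data, using statsmodels and scipy. The variable names (x1, x2, y) and the data are hypothetical, not part of the overview:

```python
# Hypothetical sketch: model chi-squared, Wald, and likelihood ratio tests
# for a logistic regression, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
data = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
prob = 1 / (1 + np.exp(-(0.5 + 1.0 * data["x1"])))   # true model uses only x1
data["y"] = rng.binomial(1, prob)

X = sm.add_constant(data[["x1", "x2"]])
fit = sm.Logit(data["y"], X).fit(disp=0)

# Model chi-squared test: X^2 = D_null - D_K = 2 * (llf - llnull)
X2_model = 2 * (fit.llf - fit.llnull)
p_model = stats.chi2.sf(X2_model, df=2)              # K = 2 independent variables

# Wald test for beta_1, in both definitions
wald_z = fit.params["x1"] / fit.bse["x1"]            # compare to standard normal
wald_chi2 = wald_z**2                                # compare to chi-squared(1)

# Likelihood ratio test for beta_2: refit with x2 excluded
fit_red = sm.Logit(data["y"], sm.add_constant(data[["x1"]])).fit(disp=0)
X2_lr = 2 * (fit.llf - fit_red.llf)                  # = D_{K-1} - D_K
p_lr = stats.chi2.sf(X2_lr, df=1)
```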
Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
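A minimal sketch of this statistic on made-up data, checked against scipy's implementation of Welch's test:

```python
# Minimal sketch: Welch's t statistic by hand vs. scipy (made-up data).
import numpy as np
from scipy import stats

group1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
group2 = np.array([3.2, 4.0, 3.8, 4.5, 3.5, 4.1])

# Standard error of the sampling distribution of the difference in means
se = np.sqrt(group1.var(ddof=1) / len(group1) + group2.var(ddof=1) / len(group2))
t = (group1.mean() - group2.mean()) / se

res = stats.ttest_ind(group1, group2, equal_var=False)  # Welch's test
assert np.isclose(t, res.statistic)
```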
One sample Wilcoxon signed-rank test:
Two different types of test statistics can be used; both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic) and the second option the $W_2$ statistic. To compute each test statistic, follow the steps below (a code sketch follows the list):
  1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score} - m_0)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
  2. For each subject, compute the absolute value of the difference score $|\mbox{score} - m_0|$.
  3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
  4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

  • $W_1 = \sum\, R_d^{+}$
    or
    $W_1 = \sum\, R_d^{-}$
    That is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
    • tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
    • If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
  • $W_2 = \sum\, \mbox{sign}_d \times R_d$
    That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
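A minimal sketch of these steps on made-up scores, assuming $m_0 = 50$, checked against scipy:

```python
# Minimal sketch: W1 and W2 by hand (made-up scores, m0 = 50 assumed).
import numpy as np
from scipy import stats

scores = np.array([52.0, 47.5, 55.0, 50.0, 49.0, 58.5, 53.0])
m0 = 50.0

d = scores - m0
d = d[d != 0]                       # step 3: exclude zero differences (N_r = len(d))
sign_d = np.sign(d)                 # step 1: signs of the differences
R_d = stats.rankdata(np.abs(d))     # steps 2 and 4: rank the absolute differences

W1_pos = R_d[sign_d > 0].sum()      # sum of ranks of positive differences
W1_neg = R_d[sign_d < 0].sum()      # sum of ranks of negative differences
W2 = (sign_d * R_d).sum()

# scipy's two sided test reports the smaller of the two rank sums
res = stats.wilcoxon(scores - m0)
assert np.isclose(res.statistic, min(W1_pos, W1_neg))
```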
Sampling distribution if H0 were true
Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately a chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately a standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
Two sample $t$ test:
Approximately a $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
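A minimal sketch of both definitions of $k$, on the same made-up data as the $t$ statistic sketch above:

```python
# Minimal sketch: the two definitions of k (made-up data).
import numpy as np

group1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
group2 = np.array([3.2, 4.0, 3.8, 4.5, 3.5, 4.1])

v1 = group1.var(ddof=1) / len(group1)
v2 = group2.var(ddof=1) / len(group2)

# Welch-Satterthwaite formula, as used by software
k_software = (v1 + v2) ** 2 / (v1**2 / (len(group1) - 1) + v2**2 / (len(group2) - 1))
# Conservative rule for hand calculations
k_hand = min(len(group1) - 1, len(group2) - 1)
```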
One sample Wilcoxon signed-rank test:
Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately a standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately a standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated if ties are present in the data.
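A minimal sketch of the normal approximation, reusing the hypothetical $W_1$, $W_2$, and $N_r$ from the sketch above; note that $N_r = 6$ is really too small for the approximation and serves only to show the arithmetic:

```python
# Minimal sketch: normal approximation for W1 and W2 (no ties assumed).
import numpy as np
from scipy import stats

W1, W2, Nr = 17.0, 13.0, 6          # hypothetical values from the sketch above

mu_W1 = Nr * (Nr + 1) / 4
sigma_W1 = np.sqrt(Nr * (Nr + 1) * (2 * Nr + 1) / 24)
z1 = (W1 - mu_W1) / sigma_W1

sigma_W2 = np.sqrt(Nr * (Nr + 1) * (2 * Nr + 1) / 6)
z2 = W2 / sigma_W2                  # equals z1, since W2 = 2 * (W1 - mu_W1)

p_two_sided = 2 * stats.norm.sf(abs(z1))
```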
Significant?
Logistic regression:
For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$, or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
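A minimal sketch of these $p$ value computations, with assumed statistic values:

```python
# Minimal sketch: p values for the chi-squared and Wald tests (assumed values).
from scipy import stats

X2 = 11.3                                # hypothetical model chi-squared, K = 2
p_model = stats.chi2.sf(X2, df=2)

wald_chi2 = 4.6                          # b_k^2 / SE^2: chi-squared with 1 df
p_wald = stats.chi2.sf(wald_chi2, df=1)

wald_z = 2.14                            # b_k / SE: standard normal
p_two_sided = 2 * stats.norm.sf(abs(wald_z))
p_right = stats.norm.sf(wald_z)          # right sided alternative: beta_k > 0
```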
Two sample $t$ test:
  • Check if $t$ observed in sample is at least as extreme as the critical value $t^*$ (two sided, right sided, or left sided), or
  • Find the $p$ value corresponding to observed $t$ (two sided, right sided, or left sided) and check if it is equal to or smaller than $\alpha$
One sample Wilcoxon signed-rank test:
For large samples, the table for standard normal probabilities can be used:
  • Check if the standardized $z$ observed in sample is at least as extreme as the critical value $z^*$ (two sided, right sided, or left sided), or
  • Find the $p$ value corresponding to observed $z$ (two sided, right sided, or left sided) and check if it is equal to or smaller than $\alpha$
Approximate $C\%$ confidence interval
Logistic regression (Wald-type approximate $C\%$ confidence interval for $\beta_k$):
$b_k \pm z^* \times SE_{b_k}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
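A minimal sketch of this interval, with assumed values for $b_k$ and $SE_{b_k}$:

```python
# Minimal sketch: Wald-type 95% CI for beta_k and for the odds ratio
# (b_k and its standard error are assumed values).
import numpy as np
from scipy import stats

b_k, se_k = 0.85, 0.32
z_star = stats.norm.ppf(0.975)           # 1.96 for a 95% interval
ci_beta = np.array([b_k - z_star * se_k, b_k + z_star * se_k])
ci_odds_ratio = np.exp(ci_beta)          # corresponding interval for e^{beta_k}
```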
Two sample $t$ test (approximate $C\%$ confidence interval for $\mu_1 - \mu_2$):
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
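A minimal sketch of this interval on the same made-up data as above, with $k$ from the Welch-Satterthwaite formula:

```python
# Minimal sketch: approximate 95% CI for mu_1 - mu_2 (made-up data).
import numpy as np
from scipy import stats

group1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
group2 = np.array([3.2, 4.0, 3.8, 4.5, 3.5, 4.1])

v1 = group1.var(ddof=1) / len(group1)
v2 = group2.var(ddof=1) / len(group2)
k = (v1 + v2) ** 2 / (v1**2 / (len(group1) - 1) + v2**2 / (len(group2) - 1))

t_star = stats.t.ppf(0.975, df=k)        # critical value for C = 95
diff = group1.mean() - group2.mean()
ci = (diff - t_star * np.sqrt(v1 + v2), diff + t_star * np.sqrt(v1 + v2))
```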
One sample Wilcoxon signed-rank test: n.a.
Goodness of fit measure $R^2_L$ (logistic regression only; n.a. for the other two methods)
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; there is no single agreed-upon measure of goodness of fit.
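A minimal sketch of $R^2_L$ via statsmodels on simulated data, using the fact that deviance equals $-2$ times the log-likelihood:

```python
# Minimal sketch: R^2_L from null and model deviances (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=150)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.4 + 0.9 * x))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
D_null = -2 * fit.llnull    # null deviance
D_K = -2 * fit.llf          # model deviance
R2_L = (D_null - D_K) / D_null
```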
Visual representation
  • Logistic regression: n.a.
  • Two sample $t$ test: [figure: two sample $t$ test, equal variances not assumed]
  • One sample Wilcoxon signed-rank test: n.a.
Example context
  • Logistic regression: Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
  • Two sample $t$ test: Is the average mental health score different between men and women?
  • One sample Wilcoxon signed-rank test: Is the median mental health score different from 50?
SPSS
Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
One sample Wilcoxon signed-rank test:
Specify the measurement level of your variable on the Variable View tab, in the column named Measure. Then go to:

Analyze > Nonparametric Tests > One Sample...
  • On the Objective tab, choose Customize Analysis
  • On the Fields tab, specify the variable for which you want to compute the Wilcoxon signed-rank test
  • On the Settings tab, choose Customize tests and check the box for 'Compare median to hypothesized (Wilcoxon signed-rank test)'. Fill in your $m_0$ in the box next to Hypothesized median
  • Click Run
  • Double click on the output table to see the full results
Jamovi
Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Two sample $t$ test:
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
One sample Wilcoxon signed-rank test:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Tests, select Wilcoxon rank
  • Under Hypothesis, fill in the value for $m_0$ in the box next to Test Value, and select your alternative hypothesis
Practice questions