Logistic regression - overview

This page offers structured overviews of three statistical methods, presented side by side for comparison:

  • Logistic regression
  • Two sample $t$ test - equal variances not assumed
  • One way ANOVA
Independent variable(s)

Logistic regression:
  • One or more quantitative variables of interval or ratio level and/or one or more categorical variables with independent groups, transformed into code variables
Two sample $t$ test:
  • One categorical variable with 2 independent groups
One way ANOVA:
  • One categorical variable with $I$ independent groups ($I \geqslant 2$)
Dependent variable

Logistic regression:
  • One categorical variable with 2 independent groups
Two sample $t$ test:
  • One quantitative variable of interval or ratio level
One way ANOVA:
  • One quantitative variable of interval or ratio level
Null hypothesis

Logistic regression:
Model chi-squared test for the complete regression model:
  • $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • $\beta_k = 0$
    or in terms of odds ratio:
  • $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $
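To make the regression equation concrete, here is a minimal Python sketch that turns the linear predictor (the log odds) into the probability $\pi_{y=1}$ and into an odds ratio. The coefficient values and predictors are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical coefficients, chosen only for illustration
b0, b1, b2 = -4.2, 0.11, 0.35   # intercept, coefficient for x1, coefficient for x2

def predicted_probability(x1, x2):
    """Turn the linear predictor (the log odds) into pi_{y=1}."""
    log_odds = b0 + b1 * x1 + b2 * x2
    return 1 / (1 + np.exp(-log_odds))

print(predicted_probability(27.0, 3.0))   # estimated P(y = 1) at x1 = 27, x2 = 3
print(np.exp(b1))                         # odds ratio for a one-unit increase in x1
```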
Two sample $t$ test:
  • $\mu_1 = \mu_2$
    Here $\mu_1$ is the unknown mean in population 1 and $\mu_2$ is the unknown mean in population 2.

One way ANOVA:
ANOVA $F$ test:
  • $\mu_1 = \mu_2 = \ldots = \mu_I$
    $\mu_1$ is the unknown mean in population 1; $\mu_2$ is the unknown mean in population 2; $\mu_I$ is the unknown mean in population $I$
$t$ Test for contrast:
  • $\Psi = 0$
    $\Psi$ is a contrast in the population, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the unknown mean in population $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
  • $\mu_g = \mu_h$
    $\mu_g$ is the unknown mean in population $g$; $\mu_h$ is the unknown mean in population $h$
Alternative hypothesis

Logistic regression:
Model chi-squared test for the complete regression model:
  • not all population regression coefficients are 0
Wald test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$
    If the Wald statistic is defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • right sided: $\beta_k > 0$
  • left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual $\beta_k$:
  • $\beta_k \neq 0$
    or in terms of odds ratio:
  • $e^{\beta_k} \neq 1$
Two sample $t$ test:
  • Two sided: $\mu_1 \neq \mu_2$
  • Right sided: $\mu_1 > \mu_2$
  • Left sided: $\mu_1 < \mu_2$

One way ANOVA:
ANOVA $F$ test:
  • Not all population means are equal
$t$ Test for contrast:
  • Two sided: $\Psi \neq 0$
  • Right sided: $\Psi > 0$
  • Left sided: $\Psi < 0$
$t$ Test multiple comparisons:
  • Usually two sided: $\mu_g \neq \mu_h$
Assumptions

Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
One way ANOVA:
  • Within each population, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic

Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
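As a concrete illustration of all three logistic regression tests, here is a minimal Python sketch using statsmodels and scipy on simulated data; the variable names x1, x2, y and the data are made up for the example, and the code is a sketch rather than a definitive implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Simulated data for illustration: y depends on x1 but not on x2
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * x1))))
df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

X = sm.add_constant(df[["x1", "x2"]])
fit_full = sm.Logit(df["y"], X).fit(disp=False)

# Model chi-squared test: X^2 = null deviance - model deviance
# (deviance = -2 * log-likelihood)
X2_model = 2 * (fit_full.llf - fit_full.llnull)
p_model = stats.chi2.sf(X2_model, df=2)               # K = 2 independent variables

# Wald test for beta_2 in the z form: b_k / SE_{b_k}
wald_z = fit_full.params["x2"] / fit_full.bse["x2"]
p_wald = 2 * stats.norm.sf(abs(wald_z))

# Likelihood ratio test for beta_2: X^2 = D_{K-1} - D_K
fit_reduced = sm.Logit(df["y"], sm.add_constant(df[["x1"]])).fit(disp=False)
X2_lr = 2 * (fit_full.llf - fit_reduced.llf)
p_lr = stats.chi2.sf(X2_lr, df=1)
```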
Two sample $t$ test:

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$ and the right sided alternative becomes $\mu_2 > \mu_1$.
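A minimal Python sketch of this statistic, computed by hand and checked against scipy's Welch test; the two samples are made up for the example.

```python
import numpy as np
from scipy import stats

# Two small made-up samples
y1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
y2 = np.array([3.2, 4.0, 3.8, 4.5, 3.1, 3.9])

# Standard error of the sampling distribution of y1_bar - y2_bar
se = np.sqrt(y1.var(ddof=1) / len(y1) + y2.var(ddof=1) / len(y2))
t = (y1.mean() - y2.mean()) / se

# scipy's Welch test reproduces this t (and adds the two sided p value)
t_check, p_two_sided = stats.ttest_ind(y1, y2, equal_var=False)
```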
One way ANOVA:

ANOVA $F$ test:
  • $\begin{aligned}[t] F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned} $
    where $N$ is the total sample size, and $I$ is the number of groups.
    Note: mean square between is also known as mean square model; mean square error is also known as mean square residual or mean square within
$t$ Test for contrast:
  • $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
    Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
    Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$t$ Test multiple comparisons:
  • $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
    $\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
    Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
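The following Python sketch computes the $F$ statistic, a contrast $t$, and a pairwise comparison $t$ from three made-up groups; the contrast coefficients $a = (1, -\tfrac{1}{2}, -\tfrac{1}{2})$ are an arbitrary choice for the example. It also forms the pooled standard deviation $s_p$ defined in the next row.

```python
import numpy as np
from scipy import stats

# Three illustrative groups (made-up scores)
groups = [np.array([5.1, 4.8, 5.6, 5.0]),
          np.array([4.2, 3.9, 4.5, 4.1]),
          np.array([6.0, 5.7, 6.3, 5.9])]
I = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_between = ss_between / (I - 1)
ms_error = ss_error / (N - I)

F = ms_between / ms_error            # matches stats.f_oneway(*groups)
s_p = np.sqrt(ms_error)              # pooled standard deviation

# t test for the contrast psi = mu_1 - (mu_2 + mu_3)/2, a = (1, -1/2, -1/2)
a = np.array([1.0, -0.5, -0.5])
c = sum(ai * g.mean() for ai, g in zip(a, groups))
t_contrast = c / (s_p * np.sqrt(sum(ai**2 / len(g) for ai, g in zip(a, groups))))

# Multiple comparison t for groups 1 and 2 (df = N - I)
g1, g2 = groups[0], groups[1]
t_pair = (g1.mean() - g2.mean()) / (s_p * np.sqrt(1 / len(g1) + 1 / len(g2)))
p_pair = 2 * stats.t.sf(abs(t_pair), df=N - I)
p_bonferroni = min(1.0, p_pair * (I * (I - 1) // 2))   # Bonferroni adjustment
```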
Pooled standard deviation (one way ANOVA)

$ \begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $

where $s^2_i$ is the variance in group $i$.
Sampling distribution if H0 were true

Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately a chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately a standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
Two sample $t$ test:

Approximately a $t$ distribution with $k$ degrees of freedom, with $k$ equal to

$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$

or

$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$.

The first definition of $k$ is used by statistical software; the second is often used for hand calculations.
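A small Python sketch of both definitions of $k$; the variances and sample sizes passed in at the end are illustrative.

```python
def welch_df(s2_1, n1, s2_2, n2):
    """Welch-Satterthwaite approximation used by statistical software."""
    num = (s2_1 / n1 + s2_2 / n2) ** 2
    den = (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
    return num / den

def conservative_df(n1, n2):
    """Simpler bound often used for hand calculations."""
    return min(n1 - 1, n2 - 1)

print(welch_df(0.55, 20, 0.40, 25))   # illustrative variances and sample sizes
print(conservative_df(20, 25))        # 19
```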
One way ANOVA:

Sampling distribution of $F$:
  • $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
  • $t$ distribution with $N - I$ degrees of freedom
Significant?

Logistic regression:
For the model chi-squared test for the complete regression model and likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Two sample $t$ test:

Two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

One way ANOVA:

$F$ test:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)

$t$ Test for contrast two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test multiple comparisons two sided:
  • Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
  • Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
  • Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
  • Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
Confidence interval

Logistic regression, Wald-type approximate $C\%$ confidence interval for $\beta_k$:
$b_k \pm z^* \times SE_{b_k}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
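A minimal Python sketch of this interval; the estimate and standard error are hypothetical values used only for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical estimate and standard error, for illustration only
b_k, se_bk = 0.42, 0.15

z_star = stats.norm.ppf(0.975)                    # 1.96 for a 95% interval
ci_beta = (b_k - z_star * se_bk, b_k + z_star * se_bk)

# The same interval on the odds ratio scale: exponentiate both endpoints
ci_odds_ratio = (np.exp(ci_beta[0]), np.exp(ci_beta[1]))
```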
Two sample $t$ test, approximate $C\%$ confidence interval for $\mu_1 - \mu_2$:

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
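A Python sketch of this interval from summary statistics; the means, variances, and sample sizes are made up for the example.

```python
import numpy as np
from scipy import stats

# Made-up summary statistics: means, variances, sample sizes
m1, s2_1, n1 = 5.2, 0.55, 20
m2, s2_2, n2 = 4.6, 0.40, 25

se = np.sqrt(s2_1 / n1 + s2_2 / n2)
k = (s2_1/n1 + s2_2/n2)**2 / ((s2_1/n1)**2/(n1-1) + (s2_2/n2)**2/(n2-1))
t_star = stats.t.ppf(0.975, df=k)                 # 95% confidence
ci = ((m1 - m2) - t_star * se, (m1 - m2) + t_star * se)

# If 0 lies outside this interval, H0: mu_1 = mu_2 is rejected at alpha = .05
```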
One way ANOVA, $C\%$ confidence intervals:

Confidence interval for $\Psi$ (contrast):
  • $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
    where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
  • $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
    where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* = $ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
  • $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
    where $\bar{y}_i$ is the sample mean for group $i$, $n_i$ is the sample size of group $i$, $N$ is the total sample size (based on all the $I$ groups), and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
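A Python sketch of all three ANOVA intervals; the group means, sample sizes, pooled standard deviation, and contrast coefficients are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative summary values for I = 3 equally sized groups
means = np.array([5.125, 4.175, 5.975])
ns = np.array([4, 4, 4])
s_p, I = 0.32, 3                       # pooled standard deviation (made up)
N = ns.sum()
t_star = stats.t.ppf(0.975, df=N - I)  # 95% confidence, df = N - I

# Interval for the contrast psi; coefficients a are an arbitrary example
a = np.array([1.0, -0.5, -0.5])
c = (a * means).sum()
half = t_star * s_p * np.sqrt((a**2 / ns).sum())
ci_contrast = (c - half, c + half)

# Interval for mu_g - mu_h without a multiple comparison adjustment (t** = t*)
half = t_star * s_p * np.sqrt(1 / ns[0] + 1 / ns[1])
ci_pair = (means[0] - means[1] - half, means[0] - means[1] + half)

# Interval for a single population mean mu_i
half = t_star * s_p / np.sqrt(ns[0])
ci_mean = (means[0] - half, means[0] + half)
```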
Goodness of fit and effect size

Logistic regression, goodness of fit measure $R^2_L$:
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure is generally agreed upon.
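A short Python sketch of $R^2_L$; the deviance values are illustrative numbers, not results from real data.

```python
# R^2_L from the null and model deviances; the numbers are illustrative
D_null, D_K = 276.9, 241.3
R2_L = (D_null - D_K) / D_null        # proportional reduction in deviance

# With a fitted statsmodels Logit result, the same quantity can be computed
# from the log-likelihoods, since deviance = -2 * log-likelihood:
#   R2_L = (fit.llnull - fit.llf) / fit.llnull
```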
One way ANOVA, effect size:
  • Proportion variance explained $\eta^2$ and $R^2$:
    Proportion variance of the dependent variable $y$ explained by the independent variable: $$ \begin{align} \eta^2 = R^2 &= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}} \end{align} $$ Only in one way ANOVA $\eta^2 = R^2$. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.

  • Proportion variance explained $\omega^2$:
    Corrects for the positive bias in $\eta^2$ and is equal to: $$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.

  • Cohen's $d$:
    Standardized difference between the mean in group $g$ and in group $h$: $$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$ Indicates how many standard deviations $s_p$ the two sample means are removed from each other.
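A Python sketch of these three effect size measures, computed from ANOVA table quantities; all numbers are made up for the example.

```python
import numpy as np

# Quantities taken from a one way ANOVA table; the numbers are made up
ss_between, ss_error = 7.4, 18.6
df_between, df_error = 2, 27
ss_total = ss_between + ss_error
ms_error = ss_error / df_error

eta2 = ss_between / ss_total                                   # biased upward
omega2 = (ss_between - df_between * ms_error) / (ss_total + ms_error)

# Cohen's d for two group means, using the pooled standard deviation
mean_g, mean_h = 5.1, 4.2
s_p = np.sqrt(ms_error)
d_gh = (mean_g - mean_h) / s_p
```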
Visual representation (two sample $t$ test):
  • [Figure: two sample $t$ test - equal variances not assumed]
ANOVA table (one way ANOVA):
  • [Figure: ANOVA table]
Equivalent to (one way ANOVA)

OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
  • $F$ test ANOVA equivalent to $F$ test regression model
  • $t$ test for contrast $i$ equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)
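A Python sketch of this equivalence on simulated data: the ANOVA $F$ test and the $F$ test of an OLS model with dummy-coded groups give the same result. The group labels and means are made up for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulated data with I = 3 groups, for illustration only
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "g": np.repeat(["a", "b", "c"], 10),
    "y": rng.normal(loc=np.repeat([5.0, 4.5, 6.0], 10), scale=1.0),
})

# One way ANOVA F test
F, p = stats.f_oneway(*[grp["y"].to_numpy() for _, grp in df.groupby("g")])

# OLS regression; C(g) expands the factor into I - 1 code (dummy) variables
fit = smf.ols("y ~ C(g)", data=df).fit()
print(F, fit.fvalue)          # the two F statistics agree
print(p, fit.f_pvalue)        # as do the p values
```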
Example context

Logistic regression:
  • Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
Two sample $t$ test:
  • Is the average mental health score different between men and women?
One way ANOVA:
  • Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS

Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Two sample $t$ test:

Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
One way ANOVA:

Analyze > Compare Means > One-Way ANOVA...
  • Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi

Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Two sample $t$ test:

T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
One way ANOVA:

ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors