Logistic regression
This page offers all the basic information you need about logistic regression analysis. It is part of Statkat’s wiki module, containing similarly structured info pages for many different statistical methods. The info pages give information about null and alternative hypotheses, assumptions, test statistics and confidence intervals, how to find p values, SPSS how-to’s and more.
To compare logistic regression analysis with other statistical methods, or to practice with logistic regression analysis, visit Statkat's website.
Contents
- 1. When to use
- 2. Null hypothesis
- 3. Alternative hypothesis
- 4. Assumptions
- 5. Test statistic
- 6. Sampling distribution
- 7. Significant?
- 8. Wald-type approximate $C\%$ confidence interval for $\beta_k$
- 9. Goodness of fit measure $R^2_L$
- 10. Example context
- 11. SPSS
- 12. Jamovi
When to use?
Deciding which statistical method to use to analyze your data can be a challenging task. Whether a statistical method is appropriate for your data is partly determined by the measurement level of your variables.
Logistic regression analysis requires the following variable types:
- Independent variables: one or more quantitative variables of interval or ratio level, and/or one or more categorical variables with independent groups, transformed into code variables
- Dependent variable: one categorical variable with 2 independent groups
Note that theoretically, it is always possible to 'downgrade' the measurement level of a variable. For instance, a test that can be performed on a variable of ordinal measurement level can also be performed on a variable of interval measurement level, in which case the interval variable is downgraded to an ordinal variable. However, downgrading the measurement level of variables is generally a bad idea since it means you are throwing away important information in your data (an exception is the downgrade from ratio to interval level, which is generally irrelevant in data analysis).
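For example, a categorical independent variable can be transformed into code (dummy) variables before fitting the model. A minimal sketch in Python with pandas; the variable `education` and its levels are hypothetical:

```python
import pandas as pd

# Hypothetical data: a categorical predictor with three levels
df = pd.DataFrame({"education": ["low", "medium", "high", "low", "high"]})

# Code (dummy) variables: one level is dropped as the reference category
dummies = pd.get_dummies(df["education"], prefix="education", drop_first=True)
print(dummies)
```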
If you are not sure which method you should use, you might like the assistance of our method selection tool or our method selection table.
Null hypothesis
Logistic regression analysis tests the following null hypotheses (H0):
Model chi-squared test for the complete regression model:
- H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$

Wald test for individual $\beta_k$:
- H0: $\beta_k = 0$
- or, in terms of the odds ratio: H0: $e^{\beta_k} = 1$

Likelihood ratio chi-squared test for individual $\beta_k$:
- H0: $\beta_k = 0$
- or, in terms of the odds ratio: H0: $e^{\beta_k} = 1$
Alternative hypothesis
Logistic regression analysis tests the above null hypotheses against the following alternative hypotheses (H1 or Ha):
Model chi-squared test for the complete regression model:
- H1: not all population regression coefficients are 0

Wald test for individual $\beta_k$:
- H1: $\beta_k \neq 0$
- or, in terms of the odds ratio: H1: $e^{\beta_k} \neq 1$
- If the Wald statistic is defined as Wald $= \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  - H1 right sided: $\beta_k > 0$
  - H1 left sided: $\beta_k < 0$

Likelihood ratio chi-squared test for individual $\beta_k$:
- H1: $\beta_k \neq 0$
- or, in terms of the odds ratio: H1: $e^{\beta_k} \neq 1$
Assumptions
Statistical tests always make assumptions about the sampling procedure that was used to obtain the sample data. So-called parametric tests additionally make assumptions about how data are distributed in the population. Non-parametric tests are more 'robust': they make no or less strict assumptions about population distributions, but they are generally less powerful. Violating the assumptions may render the outcome of a statistical test useless, although violating some assumptions (e.g. independence assumptions) is generally more problematic than violating others (e.g. normality assumptions in combination with large samples).
Logistic regression analysis makes the following assumptions:
- In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
- The residuals are independent of one another
- Variables are measured without error
Also pay attention to:
- Multicollinearity (see the sketch after this list)
- Outliers
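Multicollinearity among the independent variables is commonly screened for with variance inflation factors (VIFs). A minimal sketch in Python with statsmodels; the predictors `x1` and `x2` are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=100)  # deliberately correlated with x1

X = sm.add_constant(np.column_stack([x1, x2]))   # design matrix with intercept
for i, name in enumerate(["const", "x1", "x2"]):
    # VIFs well above ~10 are often taken as a sign of problematic multicollinearity
    print(name, variance_inflation_factor(X, i))
```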
Test statistic
Logistic regression analysis is based on the following test statistic:
Model chi-squared test for the complete regression model:
- $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance}$
$D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
- Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
- Wald $ = \dfrac{b_k}{SE_{b_k}}$
Likelihood ratio chi-squared test for individual $\beta_k$:
- $X^2 = D_{K-1} - D_K$
$D_{K-1}$ is the deviance of the model from which independent variable $k$ is excluded; $D_K$ is the deviance of the model in which independent variable $k$ is included.
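As an illustration, all three statistics can be computed from fitted models. A minimal sketch in Python with statsmodels, on simulated data (all names and values are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
X = sm.add_constant(rng.normal(size=(n, 2)))        # intercept + 2 predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.0, 0.4])))))

fit = sm.Logit(y, X).fit(disp=0)
D_null = -2 * fit.llnull                  # null deviance
D_K = -2 * fit.llf                        # model deviance
print("model chi-squared:", D_null - D_K)

print("Wald z:", fit.params / fit.bse)              # Wald = b_k / SE_bk
print("Wald chi-squared:", (fit.params / fit.bse) ** 2)

# Likelihood ratio test for predictor 2: refit without its column
fit_reduced = sm.Logit(y, X[:, [0, 1]]).fit(disp=0)
print("LR chi-squared:", -2 * fit_reduced.llf - D_K)  # D_{K-1} - D_K
```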
Sampling distribution
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model, if H0 were true:
- chi-squared distribution with $K$ (number of independent variables) degrees of freedom

Sampling distribution of the Wald statistic, if H0 were true:
- If defined as Wald $= \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
- If defined as Wald $= \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution

Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$, if H0 were true:
- chi-squared distribution with 1 degree of freedom
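These null distributions can be checked by simulation. A minimal sketch, assuming a single hypothetical predictor with true $\beta_1 = 0$; the Wald $z$ statistics should then behave like draws from a standard normal distribution:

```python
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps = 200, 500
z_stats = []
for _ in range(reps):
    x = rng.normal(size=n)
    y = rng.binomial(1, 0.5, size=n)          # H0 true: x has no effect on y
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    z_stats.append(fit.params[1] / fit.bse[1])

# Rejection rate at alpha = .05 should be close to 0.05 under H0
print(np.mean(np.abs(z_stats) > st.norm.ppf(0.975)))
```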
Significant?
This is how you find out if your test result is significant:
For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$:
- Check if the $X^2$ observed in the sample is equal to or larger than the critical value $X^{2*}$, or
- Find the $p$ value corresponding to the observed $X^2$ and check if it is equal to or smaller than $\alpha$

For the Wald test for individual $\beta_k$:
- If defined as Wald $= \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests above; the Wald statistic can be interpreted as an $X^2$
- If defined as Wald $= \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test; the Wald statistic can be interpreted as a $z$ value
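For example, the critical values and $p$ values can be computed directly with scipy; the statistic values below are hypothetical:

```python
import scipy.stats as st

alpha, K = 0.05, 3
X2_model = 12.7        # hypothetical model chi-squared with K predictors
wald_z = 2.1           # hypothetical Wald statistic b_k / SE_bk

print(st.chi2.ppf(1 - alpha, df=K))     # critical value X2* for the model test
print(st.chi2.sf(X2_model, df=K))       # p value for the model test
print(2 * st.norm.sf(abs(wald_z)))      # two-sided p value for the Wald z test
```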
Wald-type approximate $C\%$ confidence interval for $\beta_k$
$b_k \pm z^* \times SE_{b_k}$

where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
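A minimal sketch of this interval in Python, with hypothetical values for $b_k$ and its standard error:

```python
import numpy as np
import scipy.stats as st

b_k, se_bk, C = 0.74, 0.21, 95               # hypothetical estimate and SE
z_star = st.norm.ppf(0.5 + C / 200)          # critical z*, e.g. 1.96 for C = 95
lo, hi = b_k - z_star * se_bk, b_k + z_star * se_bk
print(lo, hi)                  # Wald-type CI for beta_k
print(np.exp(lo), np.exp(hi))  # corresponding CI for the odds ratio e^{beta_k}
```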
Goodness of fit measure $R^2_L$
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$

There are several other goodness of fit measures for logistic regression; no single measure is universally agreed upon.
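A minimal sketch computing $R^2_L$ from the two deviances with statsmodels, on simulated data (names and values are hypothetical). Defined this way, $R^2_L$ coincides with McFadden's pseudo $R^2$, which statsmodels reports as `prsquared`:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=300)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.9 * x))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
D_null, D_K = -2 * fit.llnull, -2 * fit.llf
print((D_null - D_K) / D_null)   # R^2_L
print(fit.prsquared)             # statsmodels' McFadden pseudo R^2, same quantity
```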
Example context
Logistic regression analysis could for instance be used to answer the question:
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
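A minimal end-to-end sketch of such an analysis in Python with statsmodels, on simulated data (all variable names and effect sizes are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "bmi": rng.normal(27, 4, n),
    "stress": rng.normal(5, 2, n),
    "gender": rng.choice(["male", "female"], n),
})
logit_p = -10 + 0.3 * df["bmi"] + 0.2 * df["stress"] + 0.5 * (df["gender"] == "male")
df["diabetes"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# C() dummy-codes gender automatically, like jamovi's Factors box
fit = smf.logit("diabetes ~ bmi + stress + C(gender)", data=df).fit(disp=0)
print(fit.summary())          # Wald z tests and CIs per coefficient
print(np.exp(fit.params))     # odds ratios e^{b_k}
```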
SPSS
How to perform a logistic regression analysis in SPSS:
Analyze > Regression > Binary Logistic...
- Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Jamovi
How to perform a logistic regression analysis in jamovi:
Regression > 2 Outcomes - Binomial
- Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
- If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
- Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'