Marginal Homogeneity test / Stuart-Maxwell test - overview
This page offers a structured, side-by-side overview of three methods: the Marginal Homogeneity test / Stuart-Maxwell test, McNemar's test, and logistic regression.
Independent variable
- Marginal Homogeneity test / Stuart-Maxwell test: 2 paired groups
- McNemar's test: 2 paired groups
- Logistic regression: one or more quantitative variables of interval or ratio level and/or one or more categorical variables with independent groups, transformed into code variables
Dependent variable
- Marginal Homogeneity test / Stuart-Maxwell test: one categorical variable with $J$ independent groups ($J \geqslant 2$)
- McNemar's test: one categorical variable with 2 independent groups
- Logistic regression: one categorical variable with 2 independent groups
Null hypothesis
- Marginal Homogeneity test / Stuart-Maxwell test: H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group. Here $\pi_j$ is the population proportion in category $j$.
- McNemar's test: say the scores on the dependent variable are 0 and 1. Each pair of scores then takes one of four forms: (0, 0), (0, 1), (1, 0), or (1, 1). H0: P(first score of a pair is 0 while the second score is 1) = P(first score of a pair is 1 while the second score is 0). Equivalently: the population proportion scoring 1 is the same in both paired groups.
- Logistic regression, model chi-squared test for the complete regression model: H0: $\beta_1 = \ldots = \beta_K = 0$.
Alternative hypothesis
- Marginal Homogeneity test / Stuart-Maxwell test: H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group.
- McNemar's test: H1: P(first score of a pair is 0 while the second score is 1) $\neq$ P(first score of a pair is 1 while the second score is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0.
- Logistic regression, model chi-squared test for the complete regression model: H1: not all $\beta_k$ are 0 (at least one $\beta_k \neq 0$).
Assumptions
- Marginal Homogeneity test / Stuart-Maxwell test and McNemar's test: the sample of pairs is a simple random sample from the population of pairs, so the pairs are independent of one another.
- Logistic regression: observations are independent, and the logit of the success probability is a linear function of the independent variables.
Test statistic
- Marginal Homogeneity test / Stuart-Maxwell test: computing the test statistic is somewhat involved and requires matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand.
- McNemar's test: $X^2 = \dfrac{(b - c)^2}{b + c}$. Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0.
- Logistic regression, model chi-squared test for the complete regression model: $X^2 = D_{null} - D_K$, the difference in deviance between the model without predictors and the model with all $K$ predictors. The Wald statistic for an individual $\beta_k$ can be defined in two ways: as $z = \dfrac{b_k}{SE_{b_k}}$ or as its square. The likelihood ratio chi-squared test for an individual $\beta_k$ compares the deviance of the model without and with that predictor.
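Although software normally handles it, the Stuart-Maxwell statistic can be written compactly: with $d$ the vector of differences between the first $J - 1$ row and column marginals of the $J \times J$ table of paired scores, and $S$ the estimated covariance matrix of $d$, the statistic is $d^{\top} S^{-1} d$. A minimal pure-Python sketch for $J = 3$, with a made-up table and the $2 \times 2$ inverse written out by hand:

```python
import math

def stuart_maxwell_3x3(n):
    """Stuart-Maxwell statistic for a 3x3 table n[i][j] of paired scores
    (rows: category on first measurement, columns: category on second)."""
    # d: differences between row and column marginals, first J - 1 = 2 categories.
    d = [sum(n[i]) - sum(row[i] for row in n) for i in range(2)]
    # S: estimated covariance matrix of d under marginal homogeneity.
    s00 = sum(n[0]) + sum(row[0] for row in n) - 2 * n[0][0]
    s11 = sum(n[1]) + sum(row[1] for row in n) - 2 * n[1][1]
    s01 = -(n[0][1] + n[1][0])
    det = s00 * s11 - s01 * s01
    # X^2 = d' S^{-1} d, with the 2x2 inverse of S written out explicitly.
    return (s11 * d[0] ** 2 - 2 * s01 * d[0] * d[1] + s00 * d[1] ** 2) / det

x2 = stuart_maxwell_3x3([[10, 5, 3], [2, 10, 4], [1, 2, 10]])  # X^2 ≈ 2.83
p = math.exp(-x2 / 2)  # right-tail chi-squared probability, J - 1 = 2 df
```

A symmetric table (every $n_{ij} = n_{ji}$) has identical marginals, so $d = 0$ and $X^2 = 0$, as expected under the null.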
Sampling distribution of the test statistic if H0 were true
- Marginal Homogeneity test / Stuart-Maxwell test: approximately the chi-squared distribution with $J - 1$ degrees of freedom.
- McNemar's test: if $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom. If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$.
- Logistic regression: $X^2$ for the complete model is approximately chi-squared distributed with $K$ degrees of freedom; the Wald statistic $z$ is approximately standard normally distributed (its square approximately chi-squared with 1 degree of freedom); the likelihood ratio statistic for an individual $\beta_k$ is approximately chi-squared distributed with 1 degree of freedom.
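Both versions of McNemar's test described above fit in a few lines of code. A sketch (the function names and the discordant-pair counts $b$ and $c$ are made up for illustration):

```python
import math

def mcnemar_chi2(b, c):
    """Large-sample version: X^2 = (b - c)^2 / (b + c), approx. chi-squared, 1 df."""
    x2 = (b - c) ** 2 / (b + c)
    # For 1 df, the right-tail chi-squared probability equals erfc(sqrt(x / 2)).
    return x2, math.erfc(math.sqrt(x2 / 2))

def mcnemar_exact(b, c):
    """Small-sample version: under H0, b ~ Binomial(n = b + c, P = 0.5)."""
    n = b + c
    tail = sum(math.comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # two-sided p value

x2, p = mcnemar_chi2(15, 5)    # b + c = 20: X^2 = 5.0, p ≈ 0.025
p_exact = mcnemar_exact(1, 8)  # b + c = 9 is small, so use the exact version
```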
Significant?
- For each test statistic $X^2$ above: significant if the observed $X^2$ is equal to or larger than the critical value, or equivalently if the right-tail $p$ value is equal to or smaller than $\alpha$.
- For the two-sided Wald $z$ test: significant if $|z|$ is equal to or larger than the critical value $z^*$, or if the two-sided $p$ value is equal to or smaller than $\alpha$.
Wald-type approximate $C\%$ confidence interval for $\beta_k$
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: n.a.
- Logistic regression: $b_k \pm z^* \times SE_{b_k}$, where the critical value $z^*$ is the value under the standard normal curve with area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).
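A quick sketch of the interval (the estimate $b_k$ and its standard error below are made up; exponentiating the endpoints, a common follow-up step, gives an interval for the odds ratio $e^{\beta_k}$):

```python
import math

def wald_ci(bk, se, z_star=1.96):
    """Wald-type approximate CI for beta_k: b_k +/- z* x SE(b_k)."""
    return bk - z_star * se, bk + z_star * se

lo, hi = wald_ci(0.50, 0.10)               # ≈ (0.304, 0.696)
or_lo, or_hi = math.exp(lo), math.exp(hi)  # interval for the odds ratio
```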
Goodness of fit measure $R^2_L$
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: n.a.
- Logistic regression: $R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$. There are several other goodness-of-fit measures in logistic regression; no single measure is agreed upon.
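Given the two deviances the formula needs (the values below are made up), the computation is a one-liner:

```python
def r2_l(d_null, d_k):
    """R^2_L: proportional reduction in deviance relative to the null model."""
    return (d_null - d_k) / d_null

r2_l(100.0, 60.0)  # → 0.4
```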
Equivalent to
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: Cochran's Q test applied to two paired groups.
- Logistic regression: n.a.
Example context
- Marginal Homogeneity test / Stuart-Maxwell test: subjects are asked to taste three different types of mayonnaise and to indicate which of the three they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?
- McNemar's test: does a TV documentary about spiders change whether people are afraid of spiders (yes/no)?
- Logistic regression: can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
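To make the logistic-regression example concrete, here is a pure-Python sketch that fits a one-predictor model (diabetes diagnosis from body mass index only) by Newton-Raphson on made-up data; a real analysis would use SPSS, jamovi, or a statistics library, and would include the other predictors:

```python
import math

def fit_logistic(xs, ys, iters=25):
    """Newton-Raphson for logit P(y = 1) = b0 + b1 * x (single-predictor sketch)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0          # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0  # observed information (negative Hessian)
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
            w = p * (1.0 - p)
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step, 2x2 inverse by hand
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Made-up data: body mass index and diabetes diagnosis (1 = diagnosed).
bmi = [21, 23, 25, 26, 28, 30, 31, 34]
dx = [0, 0, 1, 0, 0, 1, 1, 1]
b0, b1 = fit_logistic(bmi, dx)  # b1 > 0: higher BMI, higher estimated odds
```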
SPSS
- Marginal Homogeneity test / Stuart-Maxwell test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- McNemar's test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Logistic regression: Analyze > Regression > Binary Logistic...
Jamovi
- Marginal Homogeneity test / Stuart-Maxwell test: n.a.
- McNemar's test: Frequencies > Paired Samples - McNemar test
- Logistic regression: Regression > 2 Outcomes - Binomial
Practice questions