Two sample t test - equal variances not assumed - overview

This page offers structured overviews of one or more selected methods. Add methods for comparison (max. of 3) by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.

Two sample $t$ test - equal variances not assumed
Spearman's rho
McNemar's test
Paired sample $t$ test
Logistic regression
Independent/grouping variable | Variable 1 | Independent variable | Independent variable | Independent variables
One categorical with 2 independent groups | One of ordinal level | 2 paired groups | 2 paired groups | One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Dependent variable | Variable 2 | Dependent variable | Dependent variable | Dependent variable
One quantitative of interval or ratio level | One of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One categorical with 2 independent groups
Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis | Null hypothesis
H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
H0: $\rho_s = 0$

Here $\rho_s$ is the Spearman correlation in the population. The Spearman correlation is a measure for the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level.

In words, the null hypothesis would be:

H0: there is no monotonic relationship between the two variables in the population.

Let's say that the scores on the dependent variable are coded 0 and 1. Then for each pair of scores, there are four possibilities:

  1. First score of pair is 0, second score of pair is 0
  2. First score of pair is 0, second score of pair is 1 (switched)
  3. First score of pair is 1, second score of pair is 0 (switched)
  4. First score of pair is 1, second score of pair is 1
The null hypothesis H0 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) = P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is the same as the probability that a pair of scores switches from 1 to 0.

Other formulations of the null hypothesis are:

  • H0: $\pi_1 = \pi_2$, where $\pi_1$ is the population proportion of ones for the first paired group and $\pi_2$ is the population proportion of ones for the second paired group
  • H0: for each pair of scores, P(first score of pair is 1) = P(second score of pair is 1)

H0: $\mu = \mu_0$

Here $\mu$ is the population mean of the difference scores, and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.
Model chi-squared test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
in the regression equation $ \ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K $. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $ y = 1$ (or equivalently, the proportion of $ y = 1$ in the population) given the scores on the independent variables.
Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
H1 two sided: $\rho_s \neq 0$
H1 right sided: $\rho_s > 0$
H1 left sided: $\rho_s < 0$

The alternative hypothesis H1 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) $\neq$ P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0.

Other formulations of the alternative hypothesis are:

  • H1: $\pi_1 \neq \pi_2$
  • H1: for each pair of scores, P(first score of pair is 1) $\neq$ P(second score of pair is 1)

H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
Model chi-squared test for the complete regression model:
  • H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
    If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
Assumptions | Assumptions | Assumptions | Assumptions | Assumptions
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
  • Difference scores are normally distributed in the population
  • Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Test statistic | Test statistic | Test statistic | Test statistic | Test statistic
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
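As an illustration, here is a minimal Python sketch (made-up data; the document itself covers SPSS and jamovi, so the numpy/scipy calls are an added assumption) computing the Welch $t$ statistic by hand and checking it against scipy's implementation:

```python
import numpy as np
from scipy import stats

# Made-up scores on a quantitative dependent variable for two independent groups
group1 = np.array([5.1, 4.8, 6.2, 5.5, 5.9, 4.7])
group2 = np.array([4.2, 4.9, 3.8, 4.5, 5.0])

n1, n2 = len(group1), len(group2)
m1, m2 = group1.mean(), group2.mean()
v1, v2 = group1.var(ddof=1), group2.var(ddof=1)  # sample variances s1^2, s2^2

# Standard error of ybar1 - ybar2; variances are NOT pooled,
# since equal population variances are not assumed
se = np.sqrt(v1 / n1 + v2 / n2)
t_manual = (m1 - m2) / se

# scipy's Welch test (equal_var=False) gives the same t value
t_scipy, p_value = stats.ttest_ind(group1, group2, equal_var=False)
```

Note that `equal_var=False` is what distinguishes Welch's test from the classic pooled-variance two sample $t$ test in scipy.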
$t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}} $
Here $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores.
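A small Python sketch (made-up data; scipy assumed) showing that $r_s$ is the Pearson correlation applied to the rank scores, and how the $t$ statistic follows:

```python
import numpy as np
from scipy import stats

# Made-up scores on two ordinal variables
x = np.array([3, 1, 4, 2, 6, 5, 7, 8])
y = np.array([2, 1, 3, 4, 6, 7, 5, 8])
N = len(x)

# r_s is the Pearson correlation applied to the rank scores
r_s, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))

# t statistic from the formula above
t = r_s * np.sqrt(N - 2) / np.sqrt(1 - r_s**2)

# scipy computes r_s directly
r_scipy, _ = stats.spearmanr(x, y)
```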
$X^2 = \dfrac{(b - c)^2}{b + c}$
Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0.
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores).

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
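The paired sample $t$ statistic is a one sample $t$ statistic applied to the difference scores, as this Python sketch illustrates (made-up before/after data; scipy assumed):

```python
import numpy as np
from scipy import stats

# Made-up before/after scores for the same subjects
before = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.5])
after  = np.array([14.0, 15.5, 13.0, 13.5, 15.0, 17.0, 14.5])

d = before - after   # difference scores
N = len(d)
mu_0 = 0             # hypothesized population mean of the difference scores

# t = (ybar - mu_0) / (s / sqrt(N)), computed on the difference scores
t_manual = (d.mean() - mu_0) / (d.std(ddof=1) / np.sqrt(N))

# scipy's paired t test gives the same value
t_scipy, p_value = stats.ttest_rel(before, after)
```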
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
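To make the deviance-based tests concrete, here is a rough Python sketch that fits the logistic model by maximum likelihood with scipy on simulated, made-up data. In practice a statistics package would do the fitting; the optimizer here only illustrates the definitions of $D_{null}$, $D_K$, and the model chi-squared statistic:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Made-up data: binary outcome y, intercept plus two predictors (K = 2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
logits = 0.5 + 1.0 * X[:, 1]                   # true model: x2 has no effect
y = rng.random(n) < 1 / (1 + np.exp(-logits))  # Bernoulli outcomes

def deviance(X, y):
    """Deviance (-2 log-likelihood) of the ML logistic fit on design matrix X."""
    def nll(beta):
        z = X @ beta
        # Negative log-likelihood of the logistic model, numerically stable:
        # ll = sum(y*z - log(1 + e^z))
        return -np.sum(y * z - np.logaddexp(0, z))
    res = optimize.minimize(nll, np.zeros(X.shape[1]), method="BFGS")
    return 2 * res.fun

D_null = deviance(X[:, [0]], y)  # intercept-only (null) model
D_K = deviance(X, y)             # full model with K = 2 predictors

# Model chi-squared test: X^2 = D_null - D_K, df = K
X2 = D_null - D_K
p = stats.chi2.sf(X2, df=2)
```

The likelihood ratio test for an individual $\beta_k$ works the same way, comparing the deviance of the model with and without predictor $k$.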
Sampling distribution of $t$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $X^2$ if H0 were true | Sampling distribution of $t$ if H0 were true | Sampling distribution of $X^2$ and of the Wald statistic if H0 were true
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
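A short Python sketch of both definitions of $k$ (hypothetical summary numbers; the Welch-Satterthwaite value always lies between the conservative hand-calculation value and $n_1 + n_2 - 2$):

```python
def welch_df(s2_1, n1, s2_2, n2):
    """Welch-Satterthwaite degrees of freedom for the two sample t test
    with equal variances not assumed."""
    num = (s2_1 / n1 + s2_2 / n2) ** 2
    den = (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
    return num / den

# Hypothetical sample variances and sample sizes
k = welch_df(4.0, 20, 9.0, 15)

# Conservative alternative for hand calculations: min(n1 - 1, n2 - 1)
k_conservative = min(20 - 1, 15 - 1)
```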
Approximately the $t$ distribution with $N - 2$ degrees of freedom

If $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom.

If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$.
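A Python sketch of both versions of McNemar's test (made-up counts $b$ and $c$; scipy assumed). The exact two sided $p$ value uses the symmetry of the Binomial($b+c$, 0.5) distribution:

```python
from scipy import stats

# Made-up counts of "switched" pairs
b = 15  # pairs with first score 0, second score 1
c = 5   # pairs with first score 1, second score 0

# Chi-squared version (appropriate when b + c is large)
X2 = (b - c) ** 2 / (b + c)
p_approx = stats.chi2.sf(X2, df=1)

# Exact version for small b + c: under H0, b ~ Binomial(b + c, 0.5).
# The two sided p value doubles the smaller tail probability (capped at 1).
n = b + c
p_exact = min(1.0, 2 * min(stats.binom.cdf(b, n, 0.5),
                           stats.binom.sf(b - 1, n, 0.5)))
```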

$t$ distribution with $N - 1$ degrees of freedom | Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
Significant? | Significant? | Significant? | Significant? | Significant?
Two sided / right sided / left sided | Two sided / right sided / left sided | For test statistic $X^2$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
If $b + c$ is small, the table for the binomial distribution should be used, with $b$ as the test statistic:
  • Check if $b$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $b$ and check if it is equal to or smaller than $\alpha$
Two sided / right sided / left sided | For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$ | n.a. | n.a. | $C\%$ confidence interval for $\mu$ | Wald-type approximate $C\%$ confidence interval for $\beta_k$
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.
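A Python sketch of this confidence interval (hypothetical summary statistics; scipy assumed), using the Welch-Satterthwaite $k$ for the critical value $t^*$:

```python
import numpy as np
from scipy import stats

# Hypothetical summary statistics: means, sample variances, sample sizes
m1, s2_1, n1 = 5.4, 1.2, 25
m2, s2_2, n2 = 4.6, 2.1, 30

# Standard error of ybar1 - ybar2
se = np.sqrt(s2_1 / n1 + s2_2 / n2)

# Welch-Satterthwaite degrees of freedom (first definition of k above)
k = (s2_1 / n1 + s2_2 / n2) ** 2 / (
    (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
)

# Critical value t* with area C/100 = 0.95 between -t* and t*
t_star = stats.t.ppf(0.975, df=k)
ci = ((m1 - m2) - t_star * se, (m1 - m2) + t_star * se)
```

If the interval does not contain 0, the two sided test at $\alpha = 1 - C/100$ rejects H0.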
--$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu$ can also be used as significance test.
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
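A minimal Python sketch of the Wald-type interval (the coefficient estimate and standard error below are made-up numbers; scipy assumed). Because $e^x$ is monotone, exponentiating the endpoints gives an interval for the odds ratio $e^{\beta_k}$:

```python
import math
from scipy import stats

# Hypothetical estimates from a fitted logistic regression
b_k = 0.40      # estimated regression coefficient
se_b_k = 0.15   # its standard error

z_star = stats.norm.ppf(0.975)  # approx. 1.96 for a 95% interval
ci_beta = (b_k - z_star * se_b_k, b_k + z_star * se_b_k)

# Interval for the odds ratio e^{beta_k}, by exponentiating the endpoints
ci_or = (math.exp(ci_beta[0]), math.exp(ci_beta[1]))
```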
n.a. | n.a. | n.a. | Effect size | Goodness of fit measure $R^2_L$
- | - | - | Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0.$
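As a one-line Python sketch (made-up difference scores), Cohen's $d$ for the paired design is:

```python
import numpy as np

# Made-up difference scores (first score of pair minus second score of pair)
d_scores = np.array([2.0, 0.5, 2.0, -0.5, 2.0, 1.0, 2.0])
mu_0 = 0

# d = (ybar - mu_0) / s, with s the sample standard deviation of the differences
cohens_d = (d_scores.mean() - mu_0) / d_scores.std(ddof=1)
```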
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure is generally agreed upon.
Visual representation | n.a. | n.a. | Visual representation | n.a.
[Figure: visual representation of the two sample $t$ test, equal variances not assumed] | - | - | [Figure: visual representation of the paired sample $t$ test] | -
n.a. | n.a. | Equivalent to | Equivalent to | n.a.
--
  • One sample $t$ test on the difference scores.
  • Repeated measures ANOVA with one dichotomous within subjects factor.
-
Example context | Example context | Example context | Example context | Example context
Is the average mental health score different between men and women? | Is there a monotonic relationship between physical health and mental health? | Does a TV documentary about spiders change whether people are afraid (yes/no) of spiders? | Is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$? | Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
SPSS | SPSS | SPSS | SPSS | SPSS
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Analyze > Correlate > Bivariate...
  • Put your two variables in the box below Variables
  • Under Correlation Coefficients, select Spearman
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the McNemar test
Analyze > Compare Means > Paired-Samples T Test...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Jamovi | Jamovi | Jamovi | Jamovi | Jamovi
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
Regression > Correlation Matrix
  • Put your two variables in the white box at the right
  • Under Correlation Coefficients, select Spearman
  • Under Hypothesis, select your alternative hypothesis
Frequencies > Paired Samples - McNemar test
  • Put one of the two paired variables in the box below Rows and the other paired variable in the box below Columns
T-Tests > Paired Samples T-Test
  • Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
  • Under Hypothesis, select your alternative hypothesis
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Practice questions | Practice questions | Practice questions | Practice questions | Practice questions