# Two sample t test - equal variances not assumed - overview

This page offers a structured overview of the selected methods, presented side by side so they can be compared:

• Two sample $t$ test - equal variances not assumed
• One way ANOVA
• Friedman test
• Regression (OLS)
Independent/grouping variable(s)
• Two sample $t$ test: one categorical with 2 independent groups
• One way ANOVA: one categorical with $I$ independent groups ($I \geqslant 2$)
• Friedman test: one within subject factor ($\geq 2$ related groups)
• Regression (OLS): one or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
Dependent variable
• Two sample $t$ test: one quantitative of interval or ratio level
• One way ANOVA: one quantitative of interval or ratio level
• Friedman test: one of ordinal level
• Regression (OLS): one quantitative of interval or ratio level
Null hypothesis

Two sample $t$ test:
• H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
ANOVA $F$ test:
• H0: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
• H0: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
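For instance (a hypothetical three group design, not part of the original overview), a contrast comparing group 1 with the average of groups 2 and 3 uses coefficients $a_1 = 1$, $a_2 = a_3 = -\tfrac{1}{2}$:
$$\Psi = 1 \times \mu_1 - \tfrac{1}{2} \times \mu_2 - \tfrac{1}{2} \times \mu_3, \qquad \text{with } 1 - \tfrac{1}{2} - \tfrac{1}{2} = 0$$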
$t$ Test multiple comparisons:
• H0: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
Friedman test:
• H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups

Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
$F$ test for the complete regression model:
• H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalently
• H0: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
• H0: $\beta_k = 0$
in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\mu_y$ represents the population mean of the dependent variable $y$ given the scores on the independent variables.
Alternative hypothesis

Two sample $t$ test:
• H1 two sided: $\mu_1 \neq \mu_2$
• H1 right sided: $\mu_1 > \mu_2$
• H1 left sided: $\mu_1 < \mu_2$
ANOVA $F$ test:
• H1: not all population means are equal
$t$ Test for contrast:
• H1 two sided: $\Psi \neq 0$
• H1 right sided: $\Psi > 0$
• H1 left sided: $\Psi < 0$
$t$ Test multiple comparisons:
• H1 - usually two sided: $\mu_g \neq \mu_h$
Friedman test:
• H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups

$F$ test for the complete regression model:
• H1: not all population regression coefficients are 0
or equivalently
• H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
$t$ test for individual regression coefficient $\beta_k$:
• H1 two sided: $\beta_k \neq 0$
• H1 right sided: $\beta_k > 0$
• H1 left sided: $\beta_k < 0$
Assumptions

Two sample $t$ test:
• Within each population, the scores on the dependent variable are normally distributed
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
One way ANOVA:
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Friedman test:
• Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
Regression (OLS):
• In the population, the residuals are normally distributed at each combination of values of the independent variables
• In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
• In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
• The residuals are independent of one another
• Variables are measured without error
Also pay attention to:
• Multicollinearity
• Outliers
Test statistic

Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
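A minimal numerical sketch of this formula (the data below are made up for illustration) computes $t$ from the sample means and variances and checks it against SciPy's Welch test:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups
group1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])
group2 = np.array([3.2, 4.0, 3.8, 4.5, 3.6, 4.1, 3.9])

n1, n2 = len(group1), len(group2)
m1, m2 = group1.mean(), group2.mean()
v1, v2 = group1.var(ddof=1), group2.var(ddof=1)   # sample variances s1^2, s2^2

se = np.sqrt(v1 / n1 + v2 / n2)                   # standard error of ybar1 - ybar2
t = (m1 - m2) / se                                # Welch t statistic

# Should match SciPy's unequal-variances (Welch) t test
t_scipy, p_scipy = stats.ttest_ind(group1, group2, equal_var=False)
print(t, t_scipy)
```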
ANOVA $F$ test:
• \begin{aligned}[t] F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned}
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model, and mean square error is also known as mean square residual or mean square within.
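As an illustration (hypothetical data, not from the original overview), the $F$ statistic can be computed directly from the between and error sums of squares and compared with `scipy.stats.f_oneway`:

```python
import numpy as np
from scipy import stats

# Hypothetical scores in I = 3 independent groups
groups = [np.array([4.0, 5.1, 5.8, 4.6]),
          np.array([6.2, 7.0, 6.5, 7.3, 6.8]),
          np.array([5.0, 5.5, 4.9, 5.2])]

I = len(groups)
N = sum(len(g) for g in groups)
overall_mean = np.concatenate(groups).mean()

# Sum of squares between: each subject contributes (its group mean - overall mean)^2
ss_between = sum(len(g) * (g.mean() - overall_mean) ** 2 for g in groups)
# Sum of squares error: (subject's score - its group mean)^2
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (I - 1)
ms_error = ss_error / (N - I)
F = ms_between / ms_error

F_scipy, p_scipy = stats.f_oneway(*groups)
print(F, F_scipy)
```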
$t$ Test for contrast:
• $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
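A short sketch of the contrast $t$ statistic, using made-up data and the coefficients $(1, -\tfrac{1}{2}, -\tfrac{1}{2})$ that compare group 1 with the average of groups 2 and 3:

```python
import numpy as np

# Hypothetical data for I = 3 groups and contrast coefficients that sum to 0
groups = [np.array([4.0, 5.1, 5.8, 4.6]),
          np.array([6.2, 7.0, 6.5, 7.3, 6.8]),
          np.array([5.0, 5.5, 4.9, 5.2])]
a = np.array([1.0, -0.5, -0.5])

I = len(groups)
N = sum(len(g) for g in groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

# Pooled standard deviation based on all I groups (square root of mean square error)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p = np.sqrt(ss_error / (N - I))

c = (a * means).sum()                              # sample contrast c = sum of a_i * ybar_i
t = c / (s_p * np.sqrt((a ** 2 / n).sum()))
df = N - I                                         # degrees of freedom for this t
print(c, t, df)
```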
$t$ Test multiple comparisons:
• $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
Friedman test:
• $Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$

Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.

Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.

Note: if ties are present in the data, the formula for $Q$ is more complicated.
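A small sketch (hypothetical scores without ties) that ranks the $k$ related measurements within each block, applies the formula, and checks the result against `scipy.stats.friedmanchisquare`:

```python
import numpy as np
from scipy import stats

# Hypothetical data: N = 5 blocks (subjects), k = 3 related measurements, no ties
scores = np.array([[3.1, 4.2, 5.0],
                   [2.8, 3.9, 4.1],
                   [4.0, 3.5, 4.8],
                   [3.3, 4.6, 4.9],
                   [2.5, 3.0, 3.8]])

N, k = scores.shape
ranks = np.apply_along_axis(stats.rankdata, 1, scores)   # rank scores within each block
R = ranks.sum(axis=0)                                    # rank sum per related group

# First the multiplication, then subtract 3 * N * (k + 1)
Q = 12.0 / (N * k * (k + 1)) * (R ** 2).sum() - 3 * N * (k + 1)

# SciPy gives the same value when there are no ties
Q_scipy, p_scipy = stats.friedmanchisquare(*scores.T)
print(Q, Q_scipy)
```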
$F$ test for the complete regression model:
• \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables.
$t$ test for individual $\beta_k$:
• $t = \dfrac{b_k}{SE_{b_k}}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ is more complicated.
Note 1: mean square model is also known as mean square regression, and mean square error is also known as mean square residual.
Note 2: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$
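A NumPy-only sketch for the single-predictor case (hypothetical data), computing $b_1$, its standard error, the $t$ statistic, and verifying that $F = t^2$ when $K = 1$:

```python
import numpy as np

# Hypothetical data with one independent variable (K = 1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 3.8, 4.2, 5.1, 5.8, 6.9, 7.2])
N, K = len(y), 1

# OLS estimates for y = b0 + b1 * x
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

# Sample standard deviation of the residuals and standard error of b1
s = np.sqrt(((y - y_hat) ** 2).sum() / (N - K - 1))
se_b1 = s / np.sqrt(((x - x.mean()) ** 2).sum())
t = b1 / se_b1

# F test for the complete model; with K = 1 this equals t squared
ms_model = ((y_hat - y.mean()) ** 2).sum() / K
ms_error = ((y - y_hat) ** 2).sum() / (N - K - 1)
F = ms_model / ms_error
print(t, F, t ** 2)
```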
Pooled standard deviation (one way ANOVA) / sample standard deviation of the residuals $s$ (regression)

One way ANOVA - pooled standard deviation:
\begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}

Here $s^2_i$ is the variance in group $i$.

Regression (OLS) - sample standard deviation of the residuals $s$:
\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}

(n.a. for the two sample $t$ test and the Friedman test.)
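A brief numerical check (reusing hypothetical group data) that the two forms of $s_p$ agree - the weighted average of the group variances versus the square root of the mean square error:

```python
import numpy as np

# Hypothetical data for I = 3 groups
groups = [np.array([4.0, 5.1, 5.8, 4.6]),
          np.array([6.2, 7.0, 6.5, 7.3, 6.8]),
          np.array([5.0, 5.5, 4.9, 5.2])]
I = len(groups)
N = sum(len(g) for g in groups)

# Form 1: weighted average of the group variances
num = sum((len(g) - 1) * g.var(ddof=1) for g in groups)
s_p_1 = np.sqrt(num / (N - I))

# Form 2: square root of the mean square error
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p_2 = np.sqrt(ss_error / (N - I))
print(s_p_1, s_p_2)   # identical
```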
Sampling distribution if H0 were true

Two sample $t$ test - sampling distribution of $t$:
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
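A small sketch of the Welch-Satterthwaite degrees of freedom alongside the simpler hand-calculation rule, using the same kind of hypothetical two-group data as above:

```python
import numpy as np

# Hypothetical samples for the two groups
group1 = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])
group2 = np.array([3.2, 4.0, 3.8, 4.5, 3.6, 4.1, 3.9])
n1, n2 = len(group1), len(group2)
v1, v2 = group1.var(ddof=1), group2.var(ddof=1)

# Welch-Satterthwaite approximation (the definition used by software)
k_welch = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

# Conservative hand-calculation rule
k_hand = min(n1 - 1, n2 - 1)
print(k_welch, k_hand)
```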
One way ANOVA:
Sampling distribution of $F$:
• $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - I$ degrees of freedom
Friedman test:
If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.

For small samples, the exact distribution of $Q$ should be used.
Regression (OLS):
Sampling distribution of $F$:
• $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - K - 1$ (df error) degrees of freedom
Significant?

Two sample $t$ test:
Two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
One way ANOVA:
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
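The quoted example can be checked numerically; a sketch using SciPy's $F$ survival function (right tail probability):

```python
from scipy import stats

# Right tail probability of F = 3.91 with df between = 4 and df error = 20
p = stats.f.sf(3.91, dfn=4, dfd=20)
print(p)   # should confirm .01 < p < .025
```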

$t$ Test for contrast two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test for contrast left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$

$t$ Test multiple comparisons two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
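A minimal sketch of one common multiple comparison adaptation (Bonferroni, with hypothetical numbers): either compare each $p$ value with $\alpha$ divided by the number of comparisons, or equivalently multiply each $p$ value by that number.

```python
# Hypothetical unadjusted two sided p values for m = 3 pairwise comparisons
p_values = [0.004, 0.030, 0.200]
alpha = 0.05
m = len(p_values)

# Option 1: test each p against alpha / m
print([p <= alpha / m for p in p_values])

# Option 2 (equivalent): Bonferroni-adjust the p values, capped at 1, and test against alpha
p_adj = [min(p * m, 1.0) for p in p_values]
print([p <= alpha for p in p_adj])
```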
Friedman test:
If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Regression (OLS):
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ Test two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Confidence intervals

Two sample $t$ test - approximate $C\%$ confidence interval for $\mu_1 - \mu_2$:
• $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.

One way ANOVA - $C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$:
Confidence interval for $\Psi$ (contrast):
• $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
• $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^*$ = the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for single population mean $\mu_i$:
• $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean in group $i$, $n_i$ is the sample size of group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $N$ is the total sample size, based on all the $I$ groups.

(n.a. for the Friedman test.)

Regression (OLS) - $C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$:
Confidence interval for $\beta_k$:
• $b_k \pm t^* \times SE_{b_k}$
• If only one independent variable:
$SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
• $\hat{y} \pm t^* \times SE_{\hat{y}}$
• If only one independent variable:
$SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
• $\hat{y} \pm t^* \times SE_{y_{new}}$
• If only one independent variable:
$SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

Effect size

One way ANOVA:
• Proportion variance explained $\eta^2$ and $R^2$: proportion variance of the dependent variable $y$ explained by the independent variable:
$$\eta^2 = R^2 = \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}}$$
Only in one way ANOVA $\eta^2 = R^2$. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
• Proportion variance explained $\omega^2$: corrects for the positive bias in $\eta^2$ and is equal to:
$$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.
• Cohen's $d$: standardized difference between the mean in group $g$ and in group $h$:
$$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$
Cohen's $d$ indicates how many standard deviations $s_p$ two sample means are removed from each other.

Regression (OLS):
Complete model:
• Proportion variance explained $R^2$: proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$$\begin{aligned} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{aligned}$$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
• Wherry's $R^2$ / shrunken $R^2$: corrects for the positive bias in $R^2$ and is equal to
$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$.
• Stein's $R^2$: estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
Per independent variable:
• Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
• Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
• Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$

(n.a. for the two sample $t$ test and the Friedman test.)

Visual representation
(Figures for the two sample $t$ test and for regression - regression equations with the independent variables - are not reproduced here; n.a. for the other methods.)

ANOVA table
(For one way ANOVA and regression, the results can be summarized in an ANOVA table; n.a. for the other methods.)

Equivalent to

One way ANOVA:
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
• $F$ test ANOVA is equivalent to $F$ test regression model
• $t$ test for contrast $i$ is equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)

Example context

• Two sample $t$ test: Is the average mental health score different between men and women?
• One way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class?
• Friedman test: Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
• Regression (OLS): Can mental health be predicted from physical health, economic class, and gender?

SPSS

Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK

One way ANOVA:
Analyze > Compare Means > One-Way ANOVA...
• Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)

Friedman test:
Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
• Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
• Under Test Type, select the Friedman test

Regression (OLS):
Analyze > Regression > Linear...
• Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)

Jamovi

Two sample $t$ test:
T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Welch's
• Under Hypothesis, select your alternative hypothesis

One way ANOVA:
ANOVA > ANOVA
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors

Friedman test:
ANOVA > Repeated Measures ANOVA - Friedman
• Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures

Regression (OLS):
Regression > Linear Regression
• Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
• If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
• Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Practice questions