# One sample t test for the mean: overview

This page offers a structured overview and side-by-side comparison of four methods. Each section below lists, for each of the following methods in turn, the corresponding entry:

• One sample $t$ test for the mean
• Two sample $z$ test
• Two sample $t$ test, equal variances not assumed (Welch's $t$ test)
• Wilcoxon signed-rank test
## Independent variable

• One sample $t$ test: none
• Two sample $z$ test: one categorical with 2 independent groups
• Welch's $t$ test: one categorical with 2 independent groups
• Wilcoxon signed-rank test: 2 paired groups
## Dependent variable

All four methods: one quantitative variable of interval or ratio level.
## Null hypothesis

**One sample $t$ test:** H0: $\mu = \mu_0$

Here $\mu$ is the unknown population mean, and $\mu_0$ is the population mean according to the null hypothesis.

**Two sample $z$ test and Welch's $t$ test:** H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the unknown mean in population 1, and $\mu_2$ is the unknown mean in population 2.

**Wilcoxon signed-rank test:** H0: the median of the difference scores is zero in the population.

Note: several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your textbook or by your teacher.
## Alternative hypothesis

**One sample $t$ test:**
• Two sided: $\mu \neq \mu_0$
• Right sided: $\mu > \mu_0$
• Left sided: $\mu < \mu_0$

**Two sample $z$ test and Welch's $t$ test:**
• Two sided: $\mu_1 \neq \mu_2$
• Right sided: $\mu_1 > \mu_2$
• Left sided: $\mu_1 < \mu_2$

**Wilcoxon signed-rank test:**
• Two sided: the median of the difference scores is different from zero in the population
• Right sided: the median of the difference scores is larger than zero in the population
• Left sided: the median of the difference scores is smaller than zero in the population
## Assumptions

**One sample $t$ test:**
• Scores are normally distributed in the population
• Sample is a simple random sample from the population. That is, observations are independent of one another

**Two sample $z$ test:**
• Within each population, the scores on the dependent variable are normally distributed
• Population standard deviations $\sigma_1$ and $\sigma_2$ are known
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

**Welch's $t$ test:**
• Within each population, the scores on the dependent variable are normally distributed
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

**Wilcoxon signed-rank test:**
• The population distribution of the difference scores is symmetric
• Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another

Note: for the Wilcoxon signed-rank test, it is sometimes considered sufficient for the data to be measured at an ordinal scale, rather than an interval or ratio scale. However, since the test statistic is based on ranked difference scores, we need to know whether a change in scores from, say, 6 to 7 is larger than, smaller than, or equal to a change from 5 to 6. This is impossible to know for ordinal scales, since for these scales the size of the difference between values is meaningless.
## Test statistic

**One sample $t$ test:**
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to H0, $s$ is the sample standard deviation, $N$ is the sample size.

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
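As a concrete illustration, here is a minimal Python sketch of the one sample $t$ test, using a small hypothetical dataset and $\mu_0 = 50$ (mirroring the example context further down this page); it also computes the confidence interval and Cohen's $d$ that appear in later sections.

```python
import numpy as np
from scipy import stats

# Hypothetical mental health scores; mu_0 = 50 as in the example context.
y = np.array([48.2, 52.1, 49.5, 53.0, 47.8, 51.4, 50.9, 46.3])
mu_0 = 50.0
N = len(y)

y_bar = y.mean()
s = y.std(ddof=1)              # sample standard deviation
se = s / np.sqrt(N)            # standard error of y_bar
t = (y_bar - mu_0) / se        # test statistic

df = N - 1
p_two_sided = 2 * stats.t.sf(abs(t), df)

# 95% confidence interval for mu (see the confidence interval section below)
t_star = stats.t.ppf(0.975, df)
ci = (y_bar - t_star * se, y_bar + t_star * se)

# Cohen's d (see the effect size section below)
d = (y_bar - mu_0) / s

# Cross-check against scipy's built-in routine
t_check, p_check = stats.ttest_1samp(y, popmean=mu_0)
assert np.isclose(t, t_check) and np.isclose(p_two_sided, p_check)
```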
**Two sample $z$ test:**

$z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
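The two sample $z$ test is usually computed by hand, since it requires the population variances to be known. A minimal sketch, assuming hypothetical samples and the known standard deviations from the example context below ($\sigma_1 = 2$, $\sigma_2 = 2.5$):

```python
import numpy as np
from scipy import stats

# Hypothetical group scores; the population SDs are assumed known.
y1 = np.array([49.1, 51.3, 50.2, 48.7, 52.0])
y2 = np.array([47.9, 50.5, 49.0, 51.2, 48.4])
sigma1, sigma2 = 2.0, 2.5

# Standard deviation of the sampling distribution of y_bar_1 - y_bar_2
se = np.sqrt(sigma1**2 / len(y1) + sigma2**2 / len(y2))
z = (y1.mean() - y2.mean()) / se

p_two_sided = 2 * stats.norm.sf(abs(z))
p_right = stats.norm.sf(z)      # H1: mu_1 > mu_2
p_left = stats.norm.cdf(z)      # H1: mu_1 < mu_2

# 95% confidence interval for mu_1 - mu_2 (see the confidence interval section below)
z_star = stats.norm.ppf(0.975)  # 1.96
diff = y1.mean() - y2.mean()
ci = (diff - z_star * se, diff + z_star * se)
```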
**Welch's $t$ test:**

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
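Welch's test is available directly in scipy via `equal_var=False`. A minimal sketch with hypothetical data, computing the statistic by hand and cross-checking it; the degrees of freedom $k$ come from the formula given in the sampling distribution section further down:

```python
import numpy as np
from scipy import stats

# Hypothetical group scores.
y1 = np.array([49.1, 51.3, 50.2, 48.7, 52.0, 50.8])
y2 = np.array([47.9, 50.5, 49.0, 51.2, 48.4])

v1 = y1.var(ddof=1) / len(y1)   # s_1^2 / n_1
v2 = y2.var(ddof=1) / len(y2)   # s_2^2 / n_2
se = np.sqrt(v1 + v2)           # standard error of y_bar_1 - y_bar_2
t = (y1.mean() - y2.mean()) / se

# Welch-Satterthwaite degrees of freedom (the k defined below)
k = (v1 + v2)**2 / (v1**2 / (len(y1) - 1) + v2**2 / (len(y2) - 1))
p_two_sided = 2 * stats.t.sf(abs(t), k)

# Cross-check against scipy's built-in Welch test
t_check, p_check = stats.ttest_ind(y1, y2, equal_var=False)
assert np.isclose(t, t_check) and np.isclose(p_two_sided, p_check)
```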
**Wilcoxon signed-rank test:**

Two different types of test statistics can be used; both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below:
1. For each subject, compute the sign of the difference score $\mbox{sign}_d = \mbox{sgn}(\mbox{score}_2 - \mbox{score}_1)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.
2. For each subject, compute the absolute value of the difference score $|\mbox{score}_2 - \mbox{score}_1|$.
3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.
4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. If there are ties, assign them the average of the ranks they occupy.
Then compute the test statistic:

• $W_1 = \sum\, R_d^{+}$ or $W_1 = \sum\, R_d^{-}$

That is, sum all ranks corresponding to a positive difference, or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:
• tables with critical values for $W_1$ are usually based on the smaller of $\sum\, R_d^{+}$ and $\sum\, R_d^{-}$. So if you are using such a table, pick the smaller one.
• If you are using the normal approximation to find the $p$ value, it is most straightforward to use $W_1 = \sum\, R_d^{+}$ (if you use $W_1 = \sum\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').
• $W_2 = \sum\, \mbox{sign}_d \times R_d$
That is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.
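A minimal sketch of steps 1-4 and of both statistics, for hypothetical before/after scores; ranking with midranks for ties is handled by `scipy.stats.rankdata`:

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical paired scores (before and after an intervention).
score_1 = np.array([5, 6, 7, 4, 8, 6, 5])
score_2 = np.array([7, 6, 9, 5, 6, 8, 7])

diff = score_2 - score_1
diff = diff[diff != 0]            # step 3: exclude zero difference scores
N_r = len(diff)

sign_d = np.sign(diff)            # step 1: signs of the difference scores
R_d = rankdata(np.abs(diff))      # steps 2 and 4: rank the absolute differences (ties get midranks)

W1_plus = R_d[sign_d > 0].sum()   # W1 based on positive differences
W1_minus = R_d[sign_d < 0].sum()  # W1 based on negative differences
W2 = (sign_d * R_d).sum()         # equals W1_plus - W1_minus

assert W1_plus + W1_minus == N_r * (N_r + 1) / 2  # the N_r ranks always sum to this
```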
## Sampling distribution of the test statistic if H0 were true

**One sample $t$ test:** $t$ distribution with $N - 1$ degrees of freedom.

**Two sample $z$ test:** standard normal distribution.

**Welch's $t$ test:** approximately a $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
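A small sketch contrasting the two definitions, with hypothetical data; the hand rule is conservative, since the Welch-Satterthwaite value always lies between $\min(n_1, n_2) - 1$ and $n_1 + n_2 - 2$:

```python
import numpy as np

def welch_df(y1, y2):
    """Welch-Satterthwaite degrees of freedom (used by computer programs)."""
    v1 = np.var(y1, ddof=1) / len(y1)
    v2 = np.var(y2, ddof=1) / len(y2)
    return (v1 + v2)**2 / (v1**2 / (len(y1) - 1) + v2**2 / (len(y2) - 1))

def hand_df(y1, y2):
    """Conservative rule for hand calculations."""
    return min(len(y1), len(y2)) - 1

# Hypothetical data to compare the two definitions.
rng = np.random.default_rng(0)
y1 = rng.normal(50.0, 2.0, size=6)
y2 = rng.normal(49.0, 2.5, size=5)
print(welch_df(y1, y2), hand_df(y1, y2))  # welch_df >= hand_df
```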
**Wilcoxon signed-rank test:**

Sampling distribution of $W_1$:
If $N_r$ is large, $W_1$ is approximately normally distributed with mean $\mu_{W_1}$ and standard deviation $\sigma_{W_1}$ if the null hypothesis were true. Here $$\mu_{W_1} = \frac{N_r(N_r + 1)}{4}$$ $$\sigma_{W_1} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_1 - \mu_{W_1}}{\sigma_{W_1}}$$ follows approximately a standard normal distribution if the null hypothesis were true.

Sampling distribution of $W_2$:
If $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\sigma_{W_2}$ if the null hypothesis were true. Here $$\sigma_{W_2} = \sqrt{\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \frac{W_2}{\sigma_{W_2}}$$ follows approximately a standard normal distribution if the null hypothesis were true.

If $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.

Note: the formulas for the standard deviations $\sigma_{W_1}$ and $\sigma_{W_2}$ are more complicated if ties are present in the data.
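Continuing the sketch above (which gave $N_r = 6$, $W_1 = \sum\, R_d^{+} = 17$, and $W_2 = 13$), the normal approximation looks as follows. With an $N_r$ this small the exact distribution should of course be used; the numbers serve only to illustrate the formulas, and the tie correction is ignored:

```python
import numpy as np
from scipy import stats

N_r, W1, W2 = 6, 17.0, 13.0    # values from the hypothetical data above

mu_W1 = N_r * (N_r + 1) / 4
sigma_W1 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 24)
z1 = (W1 - mu_W1) / sigma_W1

sigma_W2 = np.sqrt(N_r * (N_r + 1) * (2 * N_r + 1) / 6)
z2 = W2 / sigma_W2             # identical to z1, since W2 = 2 * (W1 - mu_W1)

p_two_sided = 2 * stats.norm.sf(abs(z1))
```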
## Significant?

For the one sample $t$ test, the two sample $z$ test, and Welch's $t$ test, compare the observed test statistic with the critical value(s) from its sampling distribution under H0, or equivalently compare the $p$ value with the significance level $\alpha$:

• Two sided: significant if the test statistic is at least as extreme as the two sided critical values, i.e. if the two sided $p$ value is smaller than or equal to $\alpha$
• Right sided: significant if the test statistic is larger than or equal to the right sided critical value, i.e. if the right sided $p$ value is smaller than or equal to $\alpha$
• Left sided: significant if the test statistic is smaller than or equal to the left sided critical value, i.e. if the left sided $p$ value is smaller than or equal to $\alpha$

For the Wilcoxon signed-rank test with large $N_r$, the same rules apply to the standardized test statistic $z$, using the table for standard normal probabilities. With small $N_r$, use a table with critical values for the exact distribution of $W_1$ or $W_2$.
## $C\%$ confidence interval

**One sample $t$ test** ($C\%$ confidence interval for $\mu$):

$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$

where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.

**Two sample $z$ test** ($C\%$ confidence interval for $\mu_1 - \mu_2$):

$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$

where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

**Welch's $t$ test** (approximate $C\%$ confidence interval for $\mu_1 - \mu_2$):

$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$

where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

**Wilcoxon signed-rank test:** n.a.

## Effect size

**One sample $t$ test:** Cohen's $d$, the standardized difference between the sample mean and $\mu_0$:

$$d = \frac{\bar{y} - \mu_0}{s}$$

Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$.

**Other methods:** n.a.

## Visual representation

Visual representations exist for the one sample $t$ test, the two sample $z$ test, and Welch's $t$ test (n.a. for the Wilcoxon signed-rank test); the figures are not reproduced here.

## Example context

**One sample $t$ test:** is the average mental health score of office workers different from $\mu_0 = 50$?

**Two sample $z$ test:** is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.

**Welch's $t$ test:** is the average mental health score different between men and women?

**Wilcoxon signed-rank test:** is the median of the differences between the mental health scores before and after an intervention different from 0?

## SPSS

**One sample $t$ test:** Analyze > Compare Means > One-Sample T Test...
• Put your variable in the box below Test Variable(s)
• Fill in the value for $\mu_0$ in the box next to Test Value

**Two sample $z$ test:** n.a.

**Welch's $t$ test:** Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK

**Wilcoxon signed-rank test:** Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
• Put the two paired variables in the boxes below Variable 1 and Variable 2
• Under Test Type, select the Wilcoxon test

## Jamovi

**One sample $t$ test:** T-Tests > One Sample T-Test
• Put your variable in the box below Dependent Variables
• Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis

**Two sample $z$ test:** n.a.
**Welch's $t$ test:** T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Welch's
• Under Hypothesis, select your alternative hypothesis
**Wilcoxon signed-rank test:** T-Tests > Paired Samples T-Test
• Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
• Under Tests, select Wilcoxon rank
• Under Hypothesis, select your alternative hypothesis
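For readers working in Python rather than SPSS or Jamovi, three of the four analyses are available as single scipy calls (the two sample $z$ test is not; see the sketch in the test statistic section). The data arrays are the hypothetical ones used earlier:

```python
import numpy as np
from scipy import stats

y = np.array([48.2, 52.1, 49.5, 53.0, 47.8, 51.4, 50.9, 46.3])
y1 = np.array([49.1, 51.3, 50.2, 48.7, 52.0, 50.8])
y2 = np.array([47.9, 50.5, 49.0, 51.2, 48.4])
before = np.array([5, 6, 7, 4, 8, 6, 5])
after = np.array([7, 6, 9, 5, 6, 8, 7])

print(stats.ttest_1samp(y, popmean=50))          # one sample t test
print(stats.ttest_ind(y1, y2, equal_var=False))  # Welch's t test
print(stats.wilcoxon(after, before))             # Wilcoxon signed-rank test
```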