One sample t test for the mean - overview

This page offers a structured, side-by-side overview of the following methods.

One sample $t$ test for the mean
Two sample $z$ test
Two sample $t$ test - equal variances not assumed
Pearson correlation
Independent variable (Variable 1 for the Pearson correlation)
  • One sample $t$ test: none
  • Two sample $z$ test: one categorical with 2 independent groups
  • Two sample $t$ test: one categorical with 2 independent groups
  • Pearson correlation: one quantitative variable of interval or ratio level

Dependent variable (Variable 2 for the Pearson correlation)
  • One sample $t$ test: one quantitative variable of interval or ratio level
  • Two sample $z$ test: one quantitative variable of interval or ratio level
  • Two sample $t$ test: one quantitative variable of interval or ratio level
  • Pearson correlation: one quantitative variable of interval or ratio level
Null hypothesis
  • One sample $t$ test: $\mu = \mu_0$, where $\mu$ is the unknown population mean and $\mu_0$ is the population mean according to the null hypothesis
  • Two sample $z$ test: $\mu_1 = \mu_2$, where $\mu_1$ is the unknown mean in population 1 and $\mu_2$ is the unknown mean in population 2
  • Two sample $t$ test: $\mu_1 = \mu_2$, where $\mu_1$ is the unknown mean in population 1 and $\mu_2$ is the unknown mean in population 2
  • Pearson correlation: $\rho = \rho_0$, where $\rho$ is the unknown Pearson correlation in the population and $\rho_0$ is the correlation in the population according to the null hypothesis (usually 0)
Alternative hypothesis
  • One sample $t$ test: two sided $\mu \neq \mu_0$; right sided $\mu > \mu_0$; left sided $\mu < \mu_0$
  • Two sample $z$ test: two sided $\mu_1 \neq \mu_2$; right sided $\mu_1 > \mu_2$; left sided $\mu_1 < \mu_2$
  • Two sample $t$ test: two sided $\mu_1 \neq \mu_2$; right sided $\mu_1 > \mu_2$; left sided $\mu_1 < \mu_2$
  • Pearson correlation: two sided $\rho \neq \rho_0$; right sided $\rho > \rho_0$; left sided $\rho < \rho_0$
Assumptions

One sample $t$ test:
  • Scores are normally distributed in the population
  • Sample is a simple random sample from the population. That is, observations are independent of one another

Two sample $z$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Population standard deviations $\sigma_1$ and $\sigma_2$ are known
  • Group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Tests for the Pearson correlation:
  • In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Note: these assumptions matter only for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient simply measures the strength of the linear relationship between two variables.
Test statistic

One sample $t$ test:
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to H0, $s$ is the sample standard deviation, $N$ is the sample size.

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$
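The formula above can be sketched with Python's standard library; the data values here are hypothetical, chosen only to illustrate the computation:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(y, mu0):
    """t = (ybar - mu0) / (s / sqrt(N)), as in the formula above."""
    ybar = mean(y)
    s = stdev(y)              # sample standard deviation (divides by N - 1)
    se = s / sqrt(len(y))     # standard error of the sampling distribution of ybar
    return (ybar - mu0) / se

# Hypothetical sample of mental health scores, tested against mu0 = 50
y = [52, 48, 53, 51, 49, 55, 50, 54]
t = one_sample_t(y, 50)       # compare with a t distribution with N - 1 = 7 df
```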
Two sample $z$ test:
$z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$
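A minimal sketch of the $z$ statistic, assuming the population variances are known (the data and the variances $2^2$ and $2.5^2$ are hypothetical, echoing the example context on this page):

```python
from math import sqrt
from statistics import mean

def two_sample_z(y1, y2, var1, var2):
    """z = (ybar1 - ybar2) / sqrt(sigma1^2/n1 + sigma2^2/n2),
    with known population variances var1 and var2."""
    se = sqrt(var1 / len(y1) + var2 / len(y2))   # sd of the sampling distribution
    return (mean(y1) - mean(y2)) / se

# Hypothetical scores for two independent groups
men   = [49, 51, 50, 52, 48]
women = [47, 50, 49, 48, 46]
z = two_sample_z(men, women, 2.0 ** 2, 2.5 ** 2)   # compare with standard normal
```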
Two sample $t$ test (equal variances not assumed):
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to H0.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$
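The only change from the $z$ statistic is that the sample variances replace the population variances; a sketch with hypothetical data:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(y1, y2):
    """t = (ybar1 - ybar2) / sqrt(s1^2/n1 + s2^2/n2): the sample variances
    replace the known population variances of the z test."""
    se = sqrt(variance(y1) / len(y1) + variance(y2) / len(y2))
    return (mean(y1) - mean(y2)) / se

# Hypothetical scores for two independent groups
t = welch_t([49, 51, 50, 52, 48], [47, 50, 49, 48, 46])
```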
Pearson correlation. Test statistic for testing H0: $\rho = 0$:
  • $t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}} $
    where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing values for $\rho$ other than $\rho = 0$:
  • $z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$
    • $r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$, where $r$ is the sample correlation
    • $\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg )$, where $\rho_0$ is the population correlation according to H0
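Both correlation test statistics can be sketched directly from the formulas above; the paired scores are hypothetical:

```python
from math import sqrt, log
from statistics import mean, stdev

def pearson_r(x, y):
    """r = 1/(N-1) * sum of products of standardized scores."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (len(x) - 1)

def t_for_rho_zero(x, y):
    """t statistic for H0: rho = 0, with N - 2 degrees of freedom."""
    r = pearson_r(x, y)
    return r * sqrt(len(x) - 2) / sqrt(1 - r ** 2)

def fisher(r):
    """Fisher transformation: (1/2) * ln((1 + r) / (1 - r))."""
    return 0.5 * log((1 + r) / (1 - r))

def z_for_rho(x, y, rho0):
    """z statistic for H0: rho = rho0, using Fisher-transformed values."""
    return (fisher(pearson_r(x, y)) - fisher(rho0)) / sqrt(1 / (len(x) - 3))

# Hypothetical paired scores
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)        # 0.8 for this data
t = t_for_rho_zero(x, y)   # compare with a t distribution with N - 2 = 3 df
```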
Sampling distribution if H0 were true
  • One sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom
  • Two sample $z$ test: standard normal
  • Two sample $t$ test: approximately a $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1$ - 1 and $n_2$ - 1

The first definition of $k$ is used by computer programs; the second definition is often used for hand calculations.
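Both definitions of $k$ are easy to compute; a sketch with hypothetical group data:

```python
from statistics import variance

def welch_df(y1, y2):
    """First definition of k (Welch-Satterthwaite), as used by software."""
    a = variance(y1) / len(y1)
    b = variance(y2) / len(y2)
    return (a + b) ** 2 / (a ** 2 / (len(y1) - 1) + b ** 2 / (len(y2) - 1))

def conservative_df(y1, y2):
    """Second definition: the smaller of n1 - 1 and n2 - 1."""
    return min(len(y1) - 1, len(y2) - 1)

# Hypothetical data; the hand-calculation value is never larger than Welch's k
g1, g2 = [49, 51, 50, 52, 48], [47, 50, 49, 48, 46]
k_software = welch_df(g1, g2)        # 8.0 for this data
k_hand = conservative_df(g1, g2)     # 4
```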
Pearson correlation, sampling distribution of $t$:
  • $t$ distribution with $N - 2$ degrees of freedom
Sampling distribution of $z$:
  • Approximately standard normal
Significant?
For each test, compare the observed test statistic with its sampling distribution under H0, in the direction of the alternative hypothesis. Two sided: reject H0 if the test statistic falls in the upper or lower rejection region, or if the two sided $p$ value is smaller than the significance level $\alpha$. Right sided: reject H0 if the test statistic falls in the upper rejection region, or if the right sided $p$ value is smaller than $\alpha$. Left sided: reject H0 if the test statistic falls in the lower rejection region, or if the left sided $p$ value is smaller than $\alpha$. For the Pearson correlation, this applies both to the $t$ test (H0: $\rho = 0$) and to the $z$ test (H0: $\rho = \rho_0$).
Confidence intervals

One sample $t$ test, $C\%$ confidence interval for $\mu$:
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)

The confidence interval for $\mu$ can also be used as a significance test.
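A sketch with the standard library. The critical value $t^*$ is not available in Python's stdlib, so it is passed in from a $t$ table (2.365 below is the usual 95% critical value for df = 7; the data are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def mean_ci(y, t_star):
    """ybar ± t* × s / sqrt(N), with t* taken from a t table."""
    half = t_star * stdev(y) / sqrt(len(y))
    m = mean(y)
    return m - half, m + half

# Hypothetical sample; t* ≈ 2.365 for a 95% CI with df = N - 1 = 7
lower, upper = mean_ci([52, 48, 53, 51, 49, 55, 50, 54], 2.365)
```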
Two sample $z$ test, $C\%$ confidence interval for $\mu_1 - \mu_2$:
$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
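Here $z^*$ can be computed exactly with the stdlib's normal distribution; the summary statistics are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def diff_ci_z(mean1, mean2, var1, var2, n1, n2, c=95):
    """(ybar1 - ybar2) ± z* × sqrt(sigma1^2/n1 + sigma2^2/n2).
    z* leaves area C/100 between -z* and z* under the standard normal."""
    z_star = NormalDist().inv_cdf(0.5 + c / 200)   # ≈ 1.96 for C = 95
    half = z_star * sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) - half, (mean1 - mean2) + half

# Hypothetical summary statistics, with sigma1 = 2 and sigma2 = 2.5
lower, upper = diff_ci_z(50, 48, 2.0 ** 2, 2.5 ** 2, 5, 5)
```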
Two sample $t$ test, approximate $C\%$ confidence interval for $\mu_1 - \mu_2$:
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
Pearson correlation, approximate $C$% confidence interval for $\rho$. First compute the approximate $C$% confidence interval for $\rho_{Fisher}$:
  • $lower_{Fisher} = r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}}$
  • $upper_{Fisher} = r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get approximate $C$% confidence interval for $\rho$:
  • lower bound = $\dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
  • upper bound = $\dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
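The two steps above (interval on the Fisher scale, then back-transformation) can be sketched as follows; the values of $r$ and $N$ are hypothetical:

```python
from math import sqrt, log, exp
from statistics import NormalDist

def rho_ci(r, n, c=95):
    """Approximate C% confidence interval for rho via the Fisher transformation."""
    r_fisher = 0.5 * log((1 + r) / (1 - r))
    z_star = NormalDist().inv_cdf(0.5 + c / 200)    # ≈ 1.96 for C = 95
    half = z_star * sqrt(1 / (n - 3))

    def back(f):
        # (e^{2f} - 1) / (e^{2f} + 1): the inverse Fisher transformation
        return (exp(2 * f) - 1) / (exp(2 * f) + 1)

    return back(r_fisher - half), back(r_fisher + half)

# Hypothetical sample correlation r = 0.8 from N = 30 pairs
lower, upper = rho_ci(0.8, 30)
```

Because the back-transformation is nonlinear, the resulting interval is not symmetric around $r$ and always stays within $(-1, 1)$.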
Effect size

One sample $t$ test, Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$
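Cohen's $d$ is a one-line computation; the sample below is hypothetical:

```python
from statistics import mean, stdev

def cohens_d(y, mu0):
    """d = (ybar - mu0) / s: the sample mean's distance from mu0 in sd units."""
    return (mean(y) - mu0) / stdev(y)

# Hypothetical sample against mu0 = 50
d = cohens_d([52, 48, 53, 51, 49, 55, 50, 54], 50)
```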
Two sample $z$ test and two sample $t$ test: n.a.

Properties of the Pearson correlation coefficient:
  • The Pearson correlation coefficient is a measure for the linear relationship between two quantitative variables.
  • The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
  • The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
  • The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable).
    For example:
    • the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
    • the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
  • The Pearson correlation coefficient does not say anything about causality.
  • The Pearson correlation coefficient is sensitive to outliers.
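The linear-transformation property can be checked numerically; the scores below are hypothetical:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation from standardized scores."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    return sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (len(x) - 1)

x = [1, 2, 3, 4, 5]    # hypothetical scores
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)

# Positive linear transformations leave r unchanged...
r_same = pearson_r([3 * a + 5 for a in x], [2 * b - 6 for b in y])
# ...multiplying one variable by a negative number flips the sign
r_flip = pearson_r([-3 * a + 5 for a in x], [2 * b - 6 for b in y])
```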
Equivalent to

Pearson correlation: OLS regression with one independent variable:
  • $b_1 = r \times \frac{s_y}{s_x}$
  • Results significance test ($t$ and $p$ value) testing $H_0$: $\beta_1 = 0$ are equivalent to results significance test testing $H_0$: $\rho = 0$
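The slope identity $b_1 = r \times s_y / s_x$ can be verified against the usual least-squares formula; the data are hypothetical:

```python
from statistics import mean, stdev

def slope_via_r(x, y):
    """b1 = r × s_y / s_x."""
    mx, my, sx, sy = mean(x), mean(y), stdev(x), stdev(y)
    r = sum((a - mx) / sx * (b - my) / sy for a, b in zip(x, y)) / (len(x) - 1)
    return r * sy / sx

def ols_slope(x, y):
    """b1 from the usual least-squares formula, for comparison."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical data: both routes give the same slope
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
```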
Example context
  • One sample $t$ test: is the average mental health score of office workers different from $\mu_0$ = 50?
  • Two sample $z$ test: is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1$ = 2 amongst men and $\sigma_2$ = 2.5 amongst women.
  • Two sample $t$ test: is the average mental health score different between men and women?
  • Pearson correlation: is there a linear relationship between physical health and mental health?
SPSS

One sample $t$ test: Analyze > Compare Means > One-Sample T Test...
  • Put your variable in the box below Test Variable(s)
  • Fill in the value for $\mu_0$ in the box next to Test Value
Two sample $z$ test: n.a.

Two sample $t$ test: Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Pearson correlation: Analyze > Correlate > Bivariate...
  • Put your two variables in the box below Variables
Jamovi

One sample $t$ test: T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Two sample $z$ test: n.a.

Two sample $t$ test: T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
Pearson correlation: Regression > Correlation Matrix
  • Put your two variables in the white box at the right
  • Under Correlation Coefficients, select Pearson (selected by default)
  • Under Hypothesis, select your alternative hypothesis