One sample t test for the mean - overview

This page offers a structured, side-by-side overview of the five methods listed below.

Methods compared:
  • One sample $t$ test for the mean
  • Two sample $z$ test
  • Two sample $t$ test - equal variances not assumed
  • Logistic regression
  • Marginal Homogeneity test / Stuart-Maxwell test
Independent/grouping variable(s)
  • One sample $t$ test: None
  • Two sample $z$ test: One categorical with 2 independent groups
  • Two sample $t$ test: One categorical with 2 independent groups
  • Logistic regression: One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
  • Marginal Homogeneity test: 2 paired groups
Dependent variable
  • One sample $t$ test: One quantitative of interval or ratio level
  • Two sample $z$ test: One quantitative of interval or ratio level
  • Two sample $t$ test: One quantitative of interval or ratio level
  • Logistic regression: One categorical with 2 independent groups
  • Marginal Homogeneity test: One categorical with $J$ independent groups ($J \geqslant 2$)
Null hypothesis

One sample $t$ test:
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
Two sample $z$ test:
H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Two sample $t$ test:
H0: $\mu_1 = \mu_2$

Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Logistic regression:
Model chi-squared test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$
    or in terms of odds ratio:
  • H0: $e^{\beta_k} = 1$
in the regression equation $\ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $y = 1$ (or equivalently, the proportion of $y = 1$ in the population) given the scores on the independent variables.
Marginal Homogeneity test:
H0: for each category $j$ of the dependent variable, $\pi_j$ for the first paired group = $\pi_j$ for the second paired group.

Here $\pi_j$ is the population proportion in category $j.$
Alternative hypothesis

One sample $t$ test:
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
Two sample $z$ test:
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Two sample $t$ test:
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
Logistic regression:
Model chi-squared test for the complete regression model:
  • H1: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
    If defined as Wald $= \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
  • H1: $\beta_k \neq 0$
    or in terms of odds ratio:
  • H1: $e^{\beta_k} \neq 1$
Marginal Homogeneity test:
H1: for some categories of the dependent variable, $\pi_j$ for the first paired group $\neq$ $\pi_j$ for the second paired group.
Assumptions

One sample $t$ test:
  • Scores are normally distributed in the population
  • Sample is a simple random sample from the population. That is, observations are independent of one another
Two sample $z$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Population standard deviations $\sigma_1$ and $\sigma_2$ are known
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Two sample $t$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Logistic regression:
  • In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
  • The residuals are independent of one another
Often ignored additional assumption:
  • Variables are measured without error
Also pay attention to:
  • Multicollinearity
  • Outliers
Marginal Homogeneity test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Test statistic

One sample $t$ test:
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.

The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
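As an illustration, here is a minimal sketch in Python; the data and $\mu_0$ are made up, and scipy's ttest_1samp computes the same statistic as the formula above:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of N = 10 scores; mu_0 = 50 under H0
y = np.array([52.1, 48.3, 55.0, 49.7, 51.2, 47.8, 53.4, 50.9, 46.5, 54.2])
mu_0 = 50

# t = (ybar - mu_0) / (s / sqrt(N)), with a two sided p value
t_stat, p_value = stats.ttest_1samp(y, popmean=mu_0)

# The same statistic computed directly from the formula above
t_manual = (y.mean() - mu_0) / (y.std(ddof=1) / np.sqrt(len(y)))
print(t_stat, t_manual, p_value)
```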
Two sample $z$ test:
$z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
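There is no dedicated scipy function for the $z$ test with known population variances, but it is easy to compute directly. A sketch (the sample values and $\sigma$'s are made up):

```python
import numpy as np
from scipy import stats

# Hypothetical samples; sigma1 and sigma2 are the known population sds
y1 = np.array([49.8, 51.2, 50.5, 48.9, 52.3, 50.1])
y2 = np.array([52.7, 54.1, 51.9, 53.3, 55.0, 52.8, 53.6])
sigma1, sigma2 = 2.0, 2.5

# Standard deviation of the sampling distribution of ybar1 - ybar2
se = np.sqrt(sigma1**2 / len(y1) + sigma2**2 / len(y2))
z = (y1.mean() - y2.mean()) / se

p_two_sided = 2 * stats.norm.sf(abs(z))  # two sided p value
print(z, p_two_sided)
```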
Two sample $t$ test:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.

The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.

Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
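In Python, scipy's ttest_ind with equal_var=False performs this Welch version of the test (data made up for illustration):

```python
import numpy as np
from scipy import stats

y1 = np.array([49.8, 51.2, 50.5, 48.9, 52.3, 50.1])
y2 = np.array([52.7, 54.1, 51.9, 53.3, 55.0, 52.8, 53.6])

# equal_var=False requests the Welch test (equal variances not assumed)
t_stat, p_value = stats.ttest_ind(y1, y2, equal_var=False)
print(t_stat, p_value)
```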
Logistic regression:
Model chi-squared test for the complete regression model:
  • $X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance} $
    $D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
  • Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
  • Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition.

Likelihood ratio chi-squared test for individual $\beta_k$:
  • $X^2 = D_{K-1} - D_K$
    $D_{K-1}$ is the model deviance, where independent variable $k$ is excluded from the model. $D_{K}$ is the model deviance, where independent variable $k$ is included in the model.
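A sketch of both tests with statsmodels, on simulated data (the model and data are made up for illustration; since the deviance is $-2$ times the log-likelihood, $D_{null} - D_K = 2(\mbox{llf} - \mbox{llnull})$):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: two predictors, binary outcome
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.5, 1.0, -0.5])))

result = sm.Logit(y, X).fit(disp=False)

# Model chi-squared test: X^2 = D_null - D_K = 2 * (llf - llnull)
print(2 * (result.llf - result.llnull))  # equals result.llr
print(result.llr_pvalue)                 # p value, df = K

# Wald statistic per coefficient, z form: b_k / SE_{b_k}
print(result.params / result.bse)        # equals result.tvalues
print(result.pvalues)
```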
Marginal Homogeneity test:
Computing the test statistic is a bit complicated and involves matrix algebra. Unless you are following a technical course, you probably won't need to calculate it by hand.
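In practice you would let software do the matrix algebra. A sketch assuming statsmodels, whose SquareTable.homogeneity method computes the Stuart-Maxwell statistic by default (the table is made up):

```python
import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# Hypothetical 3x3 table of paired choices (before beer vs. after beer)
table = np.array([[20,  5,  3],
                  [ 6, 18,  4],
                  [ 2,  7, 15]])

# Test of marginal homogeneity (Stuart-Maxwell by default)
res = SquareTable(table).homogeneity()
print(res.statistic, res.pvalue, res.df)
```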
Sampling distribution of the test statistic if H0 were true

One sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom

Two sample $z$ test: standard normal distribution

Two sample $t$ test: approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1$ - 1 and $n_2$ - 1

The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
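For example, a small helper computing both versions of $k$ (the variances and sample sizes are made up):

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite degrees of freedom (first definition of k)."""
    num = (s1_sq / n1 + s2_sq / n2) ** 2
    den = (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)
    return num / den

k_software = welch_df(s1_sq=4.1, n1=25, s2_sq=6.3, n2=30)
k_hand = min(25 - 1, 30 - 1)  # conservative hand-calculation version
print(k_software, k_hand)
```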
Logistic regression:
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
  • chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
  • If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
  • chi-squared distribution with 1 degree of freedom
Marginal Homogeneity test: approximately the chi-squared distribution with $J - 1$ degrees of freedom
Significant?

One sample $t$ test, two sample $z$ test, and two sample $t$ test:
  • Two sided: check if the observed $t$ or $z$ value is at least as extreme as the critical value, or find the two sided $p$ value corresponding to the observed test statistic and check if it is equal to or smaller than $\alpha$
  • Right sided: check if the observed $t$ or $z$ value is equal to or larger than the critical value, or find the right sided $p$ value and check if it is equal to or smaller than $\alpha$
  • Left sided: check if the observed $t$ or $z$ value is equal to or smaller than the critical value, or find the left sided $p$ value and check if it is equal to or smaller than $\alpha$

Logistic regression:
For the model chi-squared test for the complete regression model and the likelihood ratio chi-squared test for individual $\beta_k$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
  • If defined as Wald $= \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$
  • If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Marginal Homogeneity test:
If we denote the test statistic as $X^2$:
  • Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
  • Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Confidence interval

One sample $t$ test ($C\%$ confidence interval for $\mu$):
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu$ can also be used as a significance test.
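A sketch of this interval in Python (data made up); the two sample intervals below follow the same pattern, with the corresponding standard error and critical value:

```python
import numpy as np
from scipy import stats

y = np.array([52.1, 48.3, 55.0, 49.7, 51.2, 47.8, 53.4, 50.9, 46.5, 54.2])
C = 95

# t* such that the area C/100 lies between -t* and t* under t_{N-1}
t_star = stats.t.ppf(1 - (1 - C / 100) / 2, df=len(y) - 1)
se = y.std(ddof=1) / np.sqrt(len(y))
print(y.mean() - t_star * se, y.mean() + t_star * se)
```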
Two sample $z$ test ($C\%$ confidence interval for $\mu_1 - \mu_2$):
$(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
Two sample $t$ test (approximate $C\%$ confidence interval for $\mu_1 - \mu_2$):
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).

The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
Logistic regression (Wald-type approximate $C\%$ confidence interval for $\beta_k$):
$b_k \pm z^* \times SE_{b_k}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
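A sketch with statsmodels, refitting the simulated model from the 'Test statistic' section; conf_int returns the same Wald-type interval:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Refit the simulated model from the 'Test statistic' sketch
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.5, 1.0, -0.5])))
result = sm.Logit(y, X).fit(disp=False)

# Wald-type 95% CI: b_k +/- z* * SE_{b_k}, with z* = 1.96
z_star = stats.norm.ppf(0.975)
print(result.params - z_star * result.bse)
print(result.params + z_star * result.bse)
print(result.conf_int(alpha=0.05))  # statsmodels' built-in equivalent
```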
Marginal Homogeneity test: n.a.
Effect size / goodness of fit

One sample $t$ test (Cohen's $d$):
Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0.$
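For instance (made-up data):

```python
import numpy as np

y = np.array([52.1, 48.3, 55.0, 49.7, 51.2, 47.8, 53.4, 50.9, 46.5, 54.2])
mu_0 = 50
d = (y.mean() - mu_0) / y.std(ddof=1)  # Cohen's d for a one sample design
print(d)
```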
Two sample $z$ test: n.a.

Two sample $t$ test: n.a.

Logistic regression (goodness of fit measure $R^2_L$):
$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures for logistic regression; no single measure is generally agreed upon.
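Since the deviance is $-2$ times the log-likelihood, $R^2_L = 1 - \mbox{llf}/\mbox{llnull}$, which statsmodels reports as McFadden's pseudo $R^2$. A sketch, using the same simulated model as above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.5, 1.0, -0.5])))
result = sm.Logit(y, X).fit(disp=False)

# R^2_L = (D_null - D_K) / D_null = 1 - llf / llnull
R2_L = 1 - result.llf / result.llnull
print(R2_L, result.prsquared)  # prsquared is McFadden's pseudo R-squared
```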
Marginal Homogeneity test: n.a.
Visual representation

[Figures omitted: visual representations of the one sample $t$ test, the two sample $z$ test, and the two sample $t$ test (equal variances not assumed); n.a. for logistic regression and the Marginal Homogeneity test.]
Example context

One sample $t$ test: Is the average mental health score of office workers different from $\mu_0 = 50$?

Two sample $z$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.

Two sample $t$ test: Is the average mental health score different between men and women?

Logistic regression: Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?

Marginal Homogeneity test: Subjects are asked to taste three different types of mayonnaise and to indicate which of the three they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?
SPSS

One sample $t$ test:
Analyze > Compare Means > One-Sample T Test...
  • Put your variable in the box below Test Variable(s)
  • Fill in the value for $\mu_0$ in the box next to Test Value
Two sample $z$ test: n.a.

Two sample $t$ test:
Analyze > Compare Means > Independent-Samples T Test...
  • Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
  • Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
  • Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
  • Continue and click OK
Logistic regression:
Analyze > Regression > Binary Logistic...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Marginal Homogeneity test:
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Marginal Homogeneity test
Jamovi

One sample $t$ test:
T-Tests > One Sample T-Test
  • Put your variable in the box below Dependent Variables
  • Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Two sample $z$ test: n.a.

Two sample $t$ test:
T-Tests > Independent Samples T-Test
  • Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
  • Under Tests, select Welch's
  • Under Hypothesis, select your alternative hypothesis
Logistic regression:
Regression > 2 Outcomes - Binomial
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Marginal Homogeneity test: n.a.