One sample t test for the mean - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
One sample $t$ test for the mean  Kruskal-Wallis test 


Independent variable  Independent/grouping variable  
None  One categorical with $I$ independent groups ($I \geqslant 2$)  
Dependent variable  Dependent variable  
One quantitative of interval or ratio level  One of ordinal level  
Null hypothesis  Null hypothesis  
H_{0}: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.  If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
Formulation 1: H_{0}: the population medians for the $I$ groups are equal
 
Alternative hypothesis  Alternative hypothesis  
H_{1} two sided: $\mu \neq \mu_0$ H_{1} right sided: $\mu > \mu_0$ H_{1} left sided: $\mu < \mu_0$  If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations:
Formulation 1: H_{1}: not all of the population medians for the $I$ groups are equal
 
Assumptions  Assumptions  
Scores are normally distributed in the population Sample is a simple random sample from the population. That is, observations are independent of one another  Group $1$ sample is a simple random sample from population $1$, $\ldots$, group $I$ sample is an independent simple random sample from population $I$. That is, within and between groups, observations are independent of one another
 
Test statistic  Test statistic  
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size. The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.  $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$ Here $N$ is the total sample size, $R_i$ is the sum of ranks in group $i$, and $n_i$ is the sample size of group $i$.  
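As an illustrative sketch (not part of the original overview), the $t$ statistic can be computed directly from its formula and checked against SciPy's `ttest_1samp`; the sample data below are made up:

```python
# Sketch: the one sample t statistic by hand vs. scipy.stats.ttest_1samp.
# The data are made up for illustration only.
import math

from scipy import stats

y = [52, 47, 55, 49, 51, 53, 48, 50, 54, 46]
mu_0 = 50  # population mean according to H0

N = len(y)
y_bar = sum(y) / N                                          # sample mean
s = math.sqrt(sum((v - y_bar) ** 2 for v in y) / (N - 1))   # sample sd

# t = (y_bar - mu_0) / (s / sqrt(N))
t = (y_bar - mu_0) / (s / math.sqrt(N))

# SciPy performs the same two sided test and also returns the p value.
t_scipy, p = stats.ttest_1samp(y, mu_0)
```

With these data the hand-computed `t` and SciPy's statistic agree, as they must.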
Sampling distribution of $t$ if H_{0} were true  Sampling distribution of $H$ if H_{0} were true  
$t$ distribution with $N - 1$ degrees of freedom  For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used.  
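For the Kruskal-Wallis side, a hedged sketch using made-up scores with no ties: $H$ is computed from the rank-sum formula above and checked against `scipy.stats.kruskal`, with the chi-squared approximation giving the critical value:

```python
# Sketch: the Kruskal-Wallis H statistic from the rank-sum formula,
# checked against scipy.stats.kruskal. Data are made up; no tied scores.
from scipy import stats

groups = [[6.4, 6.8, 7.2], [8.3, 8.7, 9.1, 9.4], [5.1, 5.5, 5.9]]
scores = [v for g in groups for v in g]
N = len(scores)  # total sample size

# Rank all scores jointly (1 = smallest; valid here because there are no ties).
rank = {v: i + 1 for i, v in enumerate(sorted(scores))}

# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1),
# where R_i is the sum of ranks in group i and n_i its sample size.
H = 12 / (N * (N + 1)) * sum(
    sum(rank[v] for v in g) ** 2 / len(g) for g in groups
) - 3 * (N + 1)

H_scipy, p = stats.kruskal(*groups)

# Large-sample approximation: compare H with the critical chi-squared
# value at alpha = 0.05, with I - 1 degrees of freedom.
crit = stats.chi2.ppf(0.95, df=len(groups) - 1)
```

With no ties, the hand-computed `H` matches SciPy's statistic exactly (SciPy's tie correction factor is 1 in that case).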
Significant?  Significant?  
Two sided: reject H_{0} if the two sided $p$ value $\leqslant \alpha$ (equivalently, if the observed $t$ is at least as extreme as the critical value $t^*$). Right sided and left sided tests use the one sided $p$ value.
 For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: reject H_{0} if $X^2$ is at least as large as the critical $X^2$ value for $I - 1$ degrees of freedom, or if the $p$ value $\leqslant \alpha$
 
$C\%$ confidence interval for $\mu$  n.a.  
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.    
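A minimal sketch of the interval computation, using the same made-up data as above: `stats.t.ppf` gives the critical $t^*$ value, and the interval is $\bar{y} \pm t^* \times s / \sqrt{N}$:

```python
# Sketch: a C% confidence interval for mu, with t* from scipy.stats.t.ppf.
# Sample data are made up for illustration.
import math

from scipy import stats

y = [52, 47, 55, 49, 51, 53, 48, 50, 54, 46]
N = len(y)
y_bar = sum(y) / N
s = math.sqrt(sum((v - y_bar) ** 2 for v in y) / (N - 1))

C = 95
# t* leaves area C/100 between -t* and t* under the t distribution, df = N - 1.
t_star = stats.t.ppf(0.5 + C / 200, df=N - 1)
half_width = t_star * s / math.sqrt(N)
ci = (y_bar - half_width, y_bar + half_width)

# Sanity check on the example from the text: df = 20 gives t* of about 2.086.
t_star_20 = stats.t.ppf(0.975, df=20)
```

Here the interval contains $\mu_0 = 50$, consistent with the nonsignificant $t$ test on the same data.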
Effect size  n.a.  
Cohen's $d$: Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$.    
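Cohen's $d$ follows directly from the same quantities, again sketched with made-up data:

```python
# Sketch: Cohen's d for a one sample t test. Data are made up.
import math

y = [52, 47, 55, 49, 51, 53, 48, 50, 54, 46]
mu_0 = 50

N = len(y)
y_bar = sum(y) / N
s = math.sqrt(sum((v - y_bar) ** 2 for v in y) / (N - 1))

# d = (y_bar - mu_0) / s: distance of y_bar from mu_0 in standard deviations.
d = (y_bar - mu_0) / s
```

Unlike $t$, this effect size does not grow with the sample size $N$.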
Visual representation  n.a.  
  
Example context  Example context  
Is the average mental health score of office workers different from $\mu_0 = 50$?  Do people from different religions tend to score differently on socio-economic status?  
SPSS  SPSS  
Analyze > Compare Means > One-Sample T Test...
 Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples...
 
Jamovi  Jamovi  
T-Tests > One Sample T-Test
 ANOVA > One Way ANOVA - Kruskal-Wallis
 
Practice questions  Practice questions  