One sample z test for the mean - overview
This page offers a structured overview of the one sample $z$ test for the mean, shown side by side with the Kruskal-Wallis test for comparison.
| One sample $z$ test for the mean | Kruskal-Wallis test |
|---|---|
| **Independent variable** | **Independent/grouping variable** |
| None | One categorical with $I$ independent groups ($I \geqslant 2$) |
| **Dependent variable** | **Dependent variable** |
| One quantitative of interval or ratio level | One of ordinal level |
| **Null hypothesis** | **Null hypothesis** |
| H0: $\mu = \mu_0$. Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis. | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations: H0: the population medians for the $I$ groups are equal. Otherwise: H0: the population scores in any of the $I$ groups are not systematically higher or lower than the population scores in any of the other groups. |
| **Alternative hypothesis** | **Alternative hypothesis** |
| H1 two sided: $\mu \neq \mu_0$; H1 right sided: $\mu > \mu_0$; H1 left sided: $\mu < \mu_0$ | If the dependent variable is measured on a continuous scale and the shape of the distribution of the dependent variable is the same in all $I$ populations: H1: not all population medians are equal. Otherwise: H1: the population scores in at least one group are systematically higher or lower than the population scores in at least one other group. |
| **Assumptions** | **Assumptions** |
| The scores are a simple random sample from the population (independent observations); the population standard deviation $\sigma$ is known; the dependent variable is normally distributed in the population, or the sample size is large. | Each group's scores are a simple random sample from the corresponding population; the groups are independent of one another. |
| **Test statistic** | **Test statistic** |
| $z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$. Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size. The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$ (see the Python sketches after the table). | $H = \dfrac{12}{N (N + 1)} \sum \dfrac{R^2_i}{n_i} - 3(N + 1)$. Here $N$ is the total sample size, $n_i$ is the sample size of group $i$, and $R_i$ is the sum of the ranks in group $i$ after ranking all $N$ scores together. |
| **Sampling distribution of $z$ if H0 were true** | **Sampling distribution of $H$ if H0 were true** |
| Standard normal distribution | For large samples, approximately the chi-squared distribution with $I - 1$ degrees of freedom. For small samples, the exact distribution of $H$ should be used. |
| **Significant?** | **Significant?** |
| Two sided: check if the observed $z$ is at least as extreme as the critical value $z^*$, or if the two sided $p$ value is smaller than the significance level $\alpha$ (e.g. 0.05). Right sided: check if the observed $z$ is equal to or larger than the critical value $z^*$, or if the right sided $p$ value is smaller than $\alpha$. Left sided: check if the observed $z$ is equal to or smaller than the critical value $z^*$, or if the left sided $p$ value is smaller than $\alpha$. | For large samples, the table with critical $X^2$ values can be used. If we denote $X^2 = H$: check if the observed $X^2$ is equal to or larger than the critical $X^2$ value, or if the $p$ value is smaller than the significance level $\alpha$. |
| **$C\%$ confidence interval for $\mu$** | n.a. |
| $\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$, where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). The confidence interval for $\mu$ can also be used as a significance test. | - |
| **Effect size** | n.a. |
| Cohen's $d$: standardized difference between the sample mean and $\mu_0$: $d = \dfrac{\bar{y} - \mu_0}{\sigma}$. Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0$. | - |
| **Visual representation** | n.a. |
| - | - |
| **Example context** | **Example context** |
| Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$. | Do people from different religions tend to score differently on social economic status? |
| n.a. | **SPSS** |
| - | Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples... |
| n.a. | **Jamovi** |
| - | ANOVA > One Way ANOVA - Kruskal-Wallis |
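
As a concrete illustration of the test statistic and $p$ value rows for the one sample $z$ test, here is a minimal Python sketch, assuming NumPy and SciPy are available. The scores in `y` are made-up data in the spirit of the mental health example ($\mu_0 = 50$, known $\sigma = 3$); only the formulas themselves come from the table above.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of mental health scores (illustrative data only)
y = np.array([52, 48, 51, 54, 49, 53, 50, 55, 47, 52, 51, 53])

mu_0 = 50.0   # population mean according to H0
sigma = 3.0   # population standard deviation, assumed known
N = len(y)

# One sample z test: z = (ybar - mu_0) / (sigma / sqrt(N))
y_bar = y.mean()
z = (y_bar - mu_0) / (sigma / np.sqrt(N))

# p values from the standard normal sampling distribution of z under H0
p_two_sided = 2 * stats.norm.sf(abs(z))
p_right_sided = stats.norm.sf(z)
p_left_sided = stats.norm.cdf(z)

print(f"z = {z:.3f}, two sided p = {p_two_sided:.4f}")
```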
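
The confidence interval and Cohen's $d$ rows can be computed in the same setting. The sketch below reuses the same hypothetical scores and known $\sigma$; the variable names are illustrative and not tied to any particular library.

```python
import numpy as np
from scipy import stats

y = np.array([52, 48, 51, 54, 49, 53, 50, 55, 47, 52, 51, 53])  # same hypothetical scores
mu_0, sigma, N = 50.0, 3.0, len(y)
y_bar = y.mean()

# C% confidence interval for mu: ybar +/- z* * sigma / sqrt(N)
C = 95
z_star = stats.norm.ppf(0.5 + C / 200)      # e.g. 1.96 for C = 95
half_width = z_star * sigma / np.sqrt(N)
ci = (y_bar - half_width, y_bar + half_width)

# Cohen's d: standardized difference between the sample mean and mu_0
d = (y_bar - mu_0) / sigma

print(f"{C}% CI for mu: ({ci[0]:.2f}, {ci[1]:.2f}), Cohen's d = {d:.2f}")
```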
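
For the Kruskal-Wallis side, SciPy's `scipy.stats.kruskal` runs the $H$ test directly; for large samples its $p$ value is based on the chi-squared approximation with $I - 1$ degrees of freedom. The group scores below are made-up social economic status values for three hypothetical groups, and as a check on the formula in the table, $H$ is also computed by hand from the rank sums.

```python
import numpy as np
from scipy import stats

# Hypothetical social economic status scores for three groups (illustrative only)
group_1 = [12, 15, 14, 10, 18, 16]
group_2 = [22, 19, 25, 17, 21]
group_3 = [13, 20, 11, 23, 24, 26, 27]

# Kruskal-Wallis H test via SciPy (chi-squared approximation, df = I - 1 = 2)
H, p = stats.kruskal(group_1, group_2, group_3)
print(f"H = {H:.3f}, p = {p:.4f}")

# The same H from the rank-sum formula:
# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1)
scores = np.concatenate([group_1, group_2, group_3])
labels = np.repeat([1, 2, 3], [len(group_1), len(group_2), len(group_3)])
ranks = stats.rankdata(scores)              # rank all N scores together
N = len(scores)
rank_sums = [ranks[labels == g].sum() for g in (1, 2, 3)]
sizes = [len(group_1), len(group_2), len(group_3)]
H_manual = 12 / (N * (N + 1)) * sum(R**2 / n for R, n in zip(rank_sums, sizes)) - 3 * (N + 1)
print(f"H (by hand) = {H_manual:.3f}")      # matches stats.kruskal here because there are no ties
```

With tied scores, `stats.kruskal` additionally applies a tie correction, so the hand-computed $H$ would differ slightly in that case.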