# One sample z test for the mean - overview

This page offers a structured, side-by-side overview of four selected statistical methods.

The compared methods and the variables they require:

| | One sample $z$ test for the mean | $z$ test for the difference between two proportions | Spearman's rho | Pearson correlation |
|---|---|---|---|---|
| Independent/grouping variable | None | One categorical with 2 independent groups | Variable 1: one of ordinal level | Variable 1: one quantitative of interval or ratio level |
| Dependent variable | One quantitative of interval or ratio level | One categorical with 2 independent groups | Variable 2: one of ordinal level | Variable 2: one quantitative of interval or ratio level |
## Null hypothesis

**One sample $z$ test for the mean**
H0: $\mu = \mu_0$

Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
**$z$ test for the difference between two proportions**

H0: $\pi_1 = \pi_2$

Here $\pi_1$ is the population proportion of 'successes' for group 1, and $\pi_2$ is the population proportion of 'successes' for group 2.
**Spearman's rho**

H0: $\rho_s = 0$

Here $\rho_s$ is the Spearman correlation in the population. The Spearman correlation is a measure for the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level.

In words, the null hypothesis would be:

H0: there is no monotonic relationship between the two variables in the population.
**Pearson correlation**

H0: $\rho = \rho_0$

Here $\rho$ is the Pearson correlation in the population, and $\rho_0$ is the Pearson correlation in the population according to the null hypothesis (usually 0). The Pearson correlation is a measure for the strength and direction of the linear relationship between two variables of at least interval measurement level.
## Alternative hypothesis

**One sample $z$ test for the mean**
• H1 two sided: $\mu \neq \mu_0$
• H1 right sided: $\mu > \mu_0$
• H1 left sided: $\mu < \mu_0$

**$z$ test for the difference between two proportions**
• H1 two sided: $\pi_1 \neq \pi_2$
• H1 right sided: $\pi_1 > \pi_2$
• H1 left sided: $\pi_1 < \pi_2$

**Spearman's rho**
• H1 two sided: $\rho_s \neq 0$
• H1 right sided: $\rho_s > 0$
• H1 left sided: $\rho_s < 0$

**Pearson correlation**
• H1 two sided: $\rho \neq \rho_0$
• H1 right sided: $\rho > \rho_0$
• H1 left sided: $\rho < \rho_0$
## Assumptions

**One sample $z$ test for the mean**
• Scores are normally distributed in the population
• Population standard deviation $\sigma$ is known
• Sample is a simple random sample from the population. That is, observations are independent of one another

**$z$ test for the difference between two proportions**
• Sample size is large enough for $z$ to be approximately normally distributed. Rules of thumb:
  • Significance test: number of successes and number of failures are each 5 or more in both sample groups
  • Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
  • Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
• Group 1 sample is a simple random sample (SRS) from population 1, and the group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

**Spearman's rho**
• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.

**Pearson correlation** (assumptions of the test for correlation)
• In the population, the two variables are jointly normally distributed (this covers the normality, homoscedasticity, and linearity assumptions)
• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Note: these assumptions are only important for the significance test and confidence interval, not for the correlation coefficient itself. The correlation coefficient just measures the strength of the linear relationship between two variables.
## Test statistic

**One sample $z$ test for the mean**

$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$

Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation, and $N$ is the sample size.

The denominator $\sigma / \sqrt{N}$ is the standard deviation of the sampling distribution of $\bar{y}$. The $z$ value indicates how many of these standard deviations $\bar{y}$ is removed from $\mu_0$.
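As a minimal sketch of this computation (the function name and the sample numbers are hypothetical, not from the overview), the statistic and its two sided $p$ value can be obtained with only the Python standard library:

```python
import math

def one_sample_z(y_bar, mu_0, sigma, n):
    """z statistic for H0: mu = mu_0, with known population sd sigma."""
    se = sigma / math.sqrt(n)  # sd of the sampling distribution of y_bar
    z = (y_bar - mu_0) / se
    # two sided p value from the standard normal cdf (via math.erf)
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

# Hypothetical example: sample mean 51.2, mu_0 = 50, sigma = 3, N = 36
z, p = one_sample_z(51.2, 50, 3, 36)
print(round(z, 2), round(p, 4))  # → 2.4 0.0164
```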
**$z$ test for the difference between two proportions**

$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
Here $p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2.
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1.$
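A sketch of the pooled $z$ statistic (hypothetical success counts; only the standard library is used):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled z statistic for H0: pi_1 = pi_2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion of successes
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 40/100 successes in group 1, 30/120 in group 2
print(round(two_prop_z(40, 100, 30, 120), 3))
```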
**Spearman's rho**

$t = \dfrac{r_s \times \sqrt{N - 2}}{\sqrt{1 - r_s^2}}$
Here $r_s$ is the sample Spearman correlation and $N$ is the sample size. The sample Spearman correlation $r_s$ is equal to the Pearson correlation applied to the rank scores.
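The fact that $r_s$ is the Pearson correlation of the rank scores can be sketched directly (hypothetical tie-free data; with ties, average ranks would be needed):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)

def rank(xs):
    """Rank scores (1 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = float(pos)
    return r

def spearman_t(x, y):
    n = len(x)
    r_s = pearson(rank(x), rank(y))  # Spearman = Pearson on the rank scores
    t = r_s * math.sqrt(n - 2) / math.sqrt(1 - r_s ** 2)
    return r_s, t

# Hypothetical paired scores (no ties)
x = [3, 1, 4, 9, 6, 7]
y = [2, 1, 5, 8, 4, 9]
r_s, t = spearman_t(x, y)
```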
**Pearson correlation**

Test statistic for testing H0: $\rho = 0$:
• $t = \dfrac{r \times \sqrt{N - 2}}{\sqrt{1 - r^2}}$
where $r$ is the sample correlation $r = \frac{1}{N - 1} \sum_{j}\Big(\frac{x_{j} - \bar{x}}{s_x} \Big) \Big(\frac{y_{j} - \bar{y}}{s_y} \Big)$ and $N$ is the sample size
Test statistic for testing H0: $\rho = \rho_0$ with $\rho_0 \neq 0$:
• $z = \dfrac{r_{Fisher} - \rho_{0_{Fisher}}}{\sqrt{\dfrac{1}{N - 3}}}$
• $r_{Fisher} = \dfrac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$, where $r$ is the sample correlation
• $\rho_{0_{Fisher}} = \dfrac{1}{2} \times \log\Bigg( \dfrac{1 + \rho_0}{1 - \rho_0} \Bigg )$, where $\rho_0$ is the population correlation according to H0
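A sketch of the Fisher-transformation $z$ statistic (the sample correlation, $\rho_0$, and $N$ below are hypothetical):

```python
import math

def fisher_z_test(r, rho_0, n):
    """z statistic for H0: rho = rho_0, via the Fisher transformation."""
    r_fisher = 0.5 * math.log((1 + r) / (1 - r))          # = artanh(r)
    rho_0_fisher = 0.5 * math.log((1 + rho_0) / (1 - rho_0))
    return (r_fisher - rho_0_fisher) / math.sqrt(1 / (n - 3))

# Hypothetical: sample correlation 0.6, H0: rho = 0.3, N = 50
print(round(fisher_z_test(0.6, 0.3, 50), 3))
```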
## Sampling distribution if H0 were true

**One sample $z$ test for the mean:** $z$ follows the standard normal distribution

**$z$ test for the difference between two proportions:** $z$ follows approximately the standard normal distribution

**Spearman's rho:** $t$ follows approximately the $t$ distribution with $N - 2$ degrees of freedom

**Pearson correlation:**
• Sampling distribution of $t$: the $t$ distribution with $N - 2$ degrees of freedom
• Sampling distribution of $z$: approximately the standard normal distribution
## Significant?

**One sample $z$ test for the mean** (compare $z$ with the standard normal distribution)
• Two sided: reject H0 if $z \leq -z^*$ or $z \geq z^*$ (with $z^*$ the critical value), or equivalently if the two sided $p$ value is smaller than or equal to the significance level $\alpha$
• Right sided: reject H0 if $z \geq z^*$, or if the right sided $p$ value $\leq \alpha$
• Left sided: reject H0 if $z \leq -z^*$, or if the left sided $p$ value $\leq \alpha$

**$z$ test for the difference between two proportions** (compare $z$ with the standard normal distribution)
• Two sided, right sided, left sided: same decision rules as for the one sample $z$ test

**Spearman's rho** (compare $t$ with the $t$ distribution with $N - 2$ degrees of freedom)
• Two sided: reject H0 if $t \leq -t^*$ or $t \geq t^*$ (with $t^*$ the critical value), or if the two sided $p$ value $\leq \alpha$
• Right sided: reject H0 if $t \geq t^*$, or if the right sided $p$ value $\leq \alpha$
• Left sided: reject H0 if $t \leq -t^*$, or if the left sided $p$ value $\leq \alpha$

**Pearson correlation**
• $t$ test (for H0: $\rho = 0$): same decision rules as for Spearman's rho, using the $t$ distribution with $N - 2$ degrees of freedom
• $z$ test (for H0: $\rho = \rho_0$): same decision rules as for the one sample $z$ test, using the standard normal distribution
## Confidence interval

**One sample $z$ test for the mean: $C\%$ confidence interval for $\mu$**

$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$

where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval).

The confidence interval for $\mu$ can also be used as a significance test.
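A sketch of the interval computation (hypothetical sample values; $z^* = 1.96$ gives a 95% interval):

```python
import math

def ci_mean(y_bar, sigma, n, z_star=1.96):
    """C% confidence interval for mu with known sigma."""
    margin = z_star * sigma / math.sqrt(n)
    return y_bar - margin, y_bar + margin

# Hypothetical: y_bar = 51.2, sigma = 3, N = 36 -> 95% CI
lo, hi = ci_mean(51.2, 3, 36)
print(round(lo, 2), round(hi, 2))  # → 50.22 52.18
```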
**$z$ test for the difference between two proportions: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$**

Regular (large sample):
• $(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$
where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
• $(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$
where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
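Both variants can be sketched in one function (hypothetical counts; the `plus_four` flag adds one success and one failure to each group, as in the plus-four formulas above):

```python
import math

def ci_diff_props(x1, n1, x2, n2, z_star=1.96, plus_four=False):
    """Approximate C% confidence interval for pi_1 - pi_2."""
    if plus_four:
        # add one success and one failure to each group
        x1, n1, x2, n2 = x1 + 1, n1 + 2, x2 + 1, n2 + 2
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z_star * se, diff + z_star * se

# Hypothetical counts: 40/100 vs 30/120 successes, 95% intervals
lo, hi = ci_diff_props(40, 100, 30, 120)
lo4, hi4 = ci_diff_props(40, 100, 30, 120, plus_four=True)
```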
**Spearman's rho:** n.a.

**Pearson correlation: approximate $C\%$ confidence interval for $\rho$**

First compute the approximate $C\%$ confidence interval for $\rho_{Fisher}$:
• $lower_{Fisher} = r_{Fisher} - z^* \times \sqrt{\dfrac{1}{N - 3}}$
• $upper_{Fisher} = r_{Fisher} + z^* \times \sqrt{\dfrac{1}{N - 3}}$
where $r_{Fisher} = \frac{1}{2} \times \log\Bigg(\dfrac{1 + r}{1 - r} \Bigg )$ and the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval).
Then transform back to get the approximate $C$% confidence interval for $\rho$:
• lower bound = $\dfrac{e^{2 \times lower_{Fisher}} - 1}{e^{2 \times lower_{Fisher}} + 1}$
• upper bound = $\dfrac{e^{2 \times upper_{Fisher}} - 1}{e^{2 \times upper_{Fisher}} + 1}$
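The two steps (transform, then back-transform) can be sketched as follows (hypothetical $r$ and $N$; the back-transform is the hyperbolic tangent):

```python
import math

def ci_rho(r, n, z_star=1.96):
    """Approximate C% confidence interval for rho via the Fisher transformation."""
    r_fisher = 0.5 * math.log((1 + r) / (1 - r))
    half = z_star * math.sqrt(1 / (n - 3))
    lo_f, hi_f = r_fisher - half, r_fisher + half
    # back-transform: (e^{2f} - 1) / (e^{2f} + 1), i.e. tanh(f)
    back = lambda f: (math.exp(2 * f) - 1) / (math.exp(2 * f) + 1)
    return back(lo_f), back(hi_f)

# Hypothetical: r = 0.6, N = 50, 95% interval
lo, hi = ci_rho(0.6, 50)
```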
## Effect size

**One sample $z$ test for the mean: Cohen's $d$**

Standardized difference between the sample mean and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{\sigma}$$ Cohen's $d$ indicates how many standard deviations $\sigma$ the sample mean $\bar{y}$ is removed from $\mu_0$.
**$z$ test for the difference between two proportions:** n.a.

**Spearman's rho:** n.a.

**Pearson correlation: properties of the Pearson correlation coefficient**
• The Pearson correlation coefficient is a measure for the linear relationship between two quantitative variables.
• The Pearson correlation coefficient squared reflects the proportion of variance explained in one variable by the other variable.
• The Pearson correlation coefficient can take on values between -1 (perfect negative relationship) and 1 (perfect positive relationship). A value of 0 means no linear relationship.
• The absolute size of the Pearson correlation coefficient is not affected by any linear transformation of the variables. However, the sign of the Pearson correlation will flip when the scores on one of the two variables are multiplied by a negative number (reversing the direction of measurement of that variable).
For example:
• the correlation between $x$ and $y$ is equivalent to the correlation between $3x + 5$ and $2y - 6$.
• the absolute value of the correlation between $x$ and $y$ is equivalent to the absolute value of the correlation between $-3x + 5$ and $2y - 6$. However, the signs of the two correlation coefficients will be in opposite directions, due to the multiplication of $x$ by $-3$.
• The Pearson correlation coefficient does not say anything about causality.
• The Pearson correlation coefficient is sensitive to outliers.
## Visual representation

Available for the one sample $z$ test only (figure not shown here); n.a. for the other methods.
## Equivalent to

**One sample $z$ test for the mean:** n.a.

**$z$ test for the difference between two proportions:** when testing two sided, the chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels

**Spearman's rho:** n.a.

**Pearson correlation:** OLS regression with one independent variable:
• $b_1 = r \times \frac{s_y}{s_x}$
• The results of the significance test ($t$ and $p$ value) for $H_0$: $\beta_1 = 0$ are equivalent to the results of the significance test for $H_0$: $\rho = 0$
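The slope identity can be checked numerically on made-up data (the data below are hypothetical; the identity $b_1 = r \times s_y / s_x$ itself is exact):

```python
import math

# Hypothetical paired data
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
s_x = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
s_y = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
# sample Pearson correlation
r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * s_x * s_y)

# OLS slope: sum of cross-products over sum of squares of x
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
assert math.isclose(b1, r * s_y / s_x)  # b_1 = r * s_y / s_x
```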
## Example context

**One sample $z$ test for the mean:** Is the average mental health score of office workers different from $\mu_0 = 50$? Assume that the standard deviation of the mental health scores in the population is $\sigma = 3$.

**$z$ test for the difference between two proportions:** Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.

**Spearman's rho:** Is there a monotonic relationship between physical health and mental health?

**Pearson correlation:** Is there a linear relationship between physical health and mental health?
## SPSS

**One sample $z$ test for the mean:** n.a.

**$z$ test for the difference between two proportions**

SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can run the chi-squared test instead; the $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Analyze > Descriptive Statistics > Crosstabs...
• Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK
**Spearman's rho**

Analyze > Correlate > Bivariate...
• Put your two variables in the box below Variables
• Under Correlation Coefficients, select Spearman
**Pearson correlation**

Analyze > Correlate > Bivariate...
• Put your two variables in the box below Variables
## Jamovi

**One sample $z$ test for the mean:** n.a.

**$z$ test for the difference between two proportions**

Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can run the chi-squared test instead; the $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:

Frequencies > Independent Samples - $\chi^2$ test of association
• Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
**Spearman's rho**

Regression > Correlation Matrix
• Put your two variables in the white box at the right
• Under Correlation Coefficients, select Spearman
• Under Hypothesis, select your alternative hypothesis
**Pearson correlation**

Regression > Correlation Matrix
• Put your two variables in the white box at the right
• Under Correlation Coefficients, select Pearson (selected by default)
• Under Hypothesis, select your alternative hypothesis
## Practice questions