One way ANOVA
This page offers all the basic information you need about one way ANOVA. It is part of Statkat’s wiki module, which contains similarly structured info pages for many different statistical methods. The info pages give information about null and alternative hypotheses, assumptions, test statistics and confidence intervals, how to find p values, SPSS how-to's, and more.
To compare one way ANOVA with other statistical methods, or to practice with one way ANOVA, visit Statkat.
Contents
 1. When to use
 2. Null hypothesis
 3. Alternative hypothesis
 4. Assumptions
 5. Test statistic
 6. Pooled standard deviation
 7. Sampling distribution
 8. Significant?
 9. $C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$
 10. Effect size
 11. ANOVA table
 12. Equivalent to
 13. Example context
 14. SPSS
 15. Jamovi
When to use?
Deciding which statistical method to use to analyze your data can be a challenging task. Whether a statistical method is appropriate for your data is partly determined by the measurement level of your variables. One way ANOVA requires the following variable types:
 Independent/grouping variable: one categorical variable with $I$ independent groups ($I \geqslant 2$)
 Dependent variable: one quantitative variable of interval or ratio level
Note that theoretically, it is always possible to 'downgrade' the measurement level of a variable. For instance, a test that can be performed on a variable of ordinal measurement level can also be performed on a variable of interval measurement level, in which case the interval variable is downgraded to an ordinal variable. However, downgrading the measurement level of variables is generally a bad idea since it means you are throwing away important information in your data (an exception is the downgrade from ratio to interval level, which is generally irrelevant in data analysis).
If you are not sure which method you should use, you might like the assistance of our method selection tool or our method selection table.
Null hypothesis
One way ANOVA tests the following null hypothesis (H_{0}):
ANOVA $F$ test: H_{0}: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ test for contrast: H_{0}: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ test multiple comparisons: H_{0}: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
Alternative hypothesis
One way ANOVA tests the above null hypothesis against the following alternative hypothesis (H_{1} or H_{a}):
ANOVA $F$ test: H_{1}: not all population means are equal
$t$ test for contrast:
 H_{1} two sided: $\Psi \neq 0$
 H_{1} right sided: $\Psi > 0$
 H_{1} left sided: $\Psi < 0$
$t$ test multiple comparisons: H_{1}, usually two sided: $\mu_g \neq \mu_h$
Assumptions
Statistical tests always make assumptions about the sampling procedure that has been used to obtain the sample data. So called parametric tests also make assumptions about how data are distributed in the population. Nonparametric tests are more 'robust' and make fewer or less strict assumptions about population distributions, but they are generally less powerful. Violation of assumptions may render the outcome of statistical tests useless, although violation of some assumptions (e.g. independence assumptions) is generally more problematic than violation of others (e.g. normality assumptions combined with large samples).
One way ANOVA makes the following assumptions:
 Within each population, the scores on the dependent variable are normally distributed
 The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
 Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
Test statistic
One way ANOVA is based on the following test statistic:
ANOVA $F$ test: $\begin{aligned}[t]
F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\
&= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\
&= \dfrac{\mbox{mean square between}}{\mbox{mean square error}}
\end{aligned}
$
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model; mean square error is also known as mean square residual or mean square within
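The $F$ statistic above can be sketched in a few lines of Python. The three groups below are made-up data chosen for illustration; the hand computation follows the sums of squares in the formula and is checked against `scipy.stats.f_oneway`:

```python
# Sketch: computing the one way ANOVA F statistic by hand, then checking it
# against scipy.stats.f_oneway. The three groups are made-up data.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]

N = sum(len(g) for g in groups)          # total sample size
I = len(groups)                          # number of groups
overall_mean = np.concatenate(groups).mean()

# Sum of squares between: each subject contributes (group mean - overall mean)^2
ss_between = sum(len(g) * (g.mean() - overall_mean) ** 2 for g in groups)
# Sum of squares error: each subject contributes (score - group mean)^2
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (I - 1)        # mean square between
ms_error = ss_error / (N - I)            # mean square error
F = ms_between / ms_error

F_scipy, p_scipy = stats.f_oneway(*groups)
```

The two values of $F$ agree, since `f_oneway` computes exactly this ratio of mean squares.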
$t$ test for contrast: $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
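As a sketch of the contrast $t$ statistic, the code below uses made-up data for three groups and the coefficients $a = (1, -0.5, -0.5)$, which contrast group 1 with the average of groups 2 and 3:

```python
# Sketch of the contrast t statistic on made-up three-group data,
# with contrast coefficients a = (1, -0.5, -0.5) (they sum to 0).
import numpy as np

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
a = np.array([1.0, -0.5, -0.5])          # contrast coefficients

N = sum(len(g) for g in groups)
I = len(groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

# Pooled standard deviation based on all I groups
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p = np.sqrt(ss_error / (N - I))

c = (a * means).sum()                    # sample estimate of the contrast Psi
t = c / (s_p * np.sqrt((a ** 2 / n).sum()))
```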
$t$ test multiple comparisons: $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
Pooled standard deviation
$ \begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned} $ where $s^2_i$ is the variance in group $i$
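A quick numerical check that the different forms of the pooled standard deviation agree, on made-up data for three groups:

```python
# Sketch verifying that the weighted-variance form of the pooled standard
# deviation equals the square root of mean square error. Data are made up.
import numpy as np

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
N = sum(len(g) for g in groups)
I = len(groups)

# Form 1: weighted combination of the group variances
s_p_1 = np.sqrt(sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (N - I))

# Form 2: square root of mean square error (SS error / df error)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p_2 = np.sqrt(ss_error / (N - I))
```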
Sampling distribution
Sampling distribution of $F$ and of $t$ if H_{0} were true:
 Sampling distribution of $F$: $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
 Sampling distribution of $t$: $t$ distribution with $N - I$ degrees of freedom
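Critical values and $p$ values from these sampling distributions can be obtained with `scipy.stats`. The degrees of freedom below match the example used later in the text ($F$ = 3.91 with df between = 4 and df error = 20, i.e. $I$ = 5 groups and $N$ = 25 subjects):

```python
# Sketch: critical values and p values from the F and t sampling
# distributions via scipy.stats.
from scipy import stats

df_between, df_error = 4, 20

# Right-tail critical F value at alpha = .05
F_crit = stats.f.ppf(0.95, df_between, df_error)

# p value (right tail) for an observed F of 3.91
p_F = stats.f.sf(3.91, df_between, df_error)

# Two sided critical t value at alpha = .05 for a contrast t statistic
t_crit = stats.t.ppf(0.975, df_error)
```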
Significant?
This is how you find out if your test result is significant:
ANOVA $F$ test:
 Check if $F$ observed in sample is equal to or larger than critical value $F^*$, or
 Find the $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
$t$ test for contrast:
 Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^*$, or find the two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
 Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^*$, or find the right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
 Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^*$, or find the left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ test multiple comparisons:
 Two sided: check if $t$ observed in sample is at least as extreme as critical value $t^{**}$, adapting $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni), or find the two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$, adapting the $p$ value or $\alpha$ according to the multiple comparison procedure
 Right sided: check if $t$ observed in sample is equal to or larger than critical value $t^{**}$, adapting $t^{**}$ according to a multiple comparison procedure, or find the right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$, adapting the $p$ value or $\alpha$ according to the multiple comparison procedure
 Left sided: check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$, adapting $t^{**}$ according to a multiple comparison procedure, or find the left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$, adapting the $p$ value or $\alpha$ according to the multiple comparison procedure
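A Bonferroni correction, as one example of a multiple comparison procedure, can be sketched as follows. The observed $t$ value and sample sizes are made up for illustration:

```python
# Sketch of a Bonferroni correction for pairwise comparison t tests.
# With I groups there are I*(I-1)/2 pairwise comparisons. Values are made up.
from scipy import stats

I = 3                                   # number of groups
N = 12                                  # total sample size
k = I * (I - 1) // 2                    # number of pairwise comparisons
alpha = 0.05
alpha_per_test = alpha / k              # Bonferroni-adjusted per-test alpha

t_observed = 2.9                        # hypothetical pairwise t value
df_error = N - I

# Either compare the raw two sided p value against the adjusted alpha ...
p_two_sided = 2 * stats.t.sf(abs(t_observed), df_error)
# ... or multiply the p value by k and compare against the original alpha
p_adjusted = min(p_two_sided * k, 1.0)
significant = p_adjusted <= alpha
```

Both comparisons (raw $p$ against $\alpha / k$, or $k \times p$ against $\alpha$) give the same decision.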
$C\%$ confidence interval for $\Psi$, for $\mu_g - \mu_h$, and for $\mu_i$
Confidence interval for $\Psi$ (contrast):
$c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.
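The contrast confidence interval can be sketched on made-up three-group data with coefficients $a = (1, -0.5, -0.5)$:

```python
# Sketch: 95% confidence interval for the population contrast Psi,
# on made-up data for three groups with coefficients (1, -0.5, -0.5).
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
a = np.array([1.0, -0.5, -0.5])

N = sum(len(g) for g in groups)
I = len(groups)
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])

ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
s_p = np.sqrt(ss_error / (N - I))        # pooled standard deviation
c = (a * means).sum()                    # sample contrast
se = s_p * np.sqrt((a ** 2 / n).sum())   # standard error of the contrast

C = 95
# t* leaves area C/100 between -t* and t* under the t distribution with N - I df
t_star = stats.t.ppf(0.5 + C / 200, N - I)
lower, upper = c - t_star * se, c + t_star * se
```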
Confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
$(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, the degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^* = $ the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.
Confidence interval for $\mu_i$ (single group mean):
$\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean for group $i$, $n_i$ is the sample size of group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20). Note that $N$ is the total sample size, based on all the $I$ groups.
Effect size
 Proportion variance explained $\eta^2$ and $R^2$:
Proportion variance of the dependent variable $y$ explained by the independent variable: $$ \begin{align} \eta^2 = R^2 &= \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}} \end{align} $$ In one way ANOVA (unlike in more complex designs), $\eta^2 = R^2$. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
 Proportion variance explained $\omega^2$:
Corrects for the positive bias in $\eta^2$ and is equal to: $$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$ $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.
 Cohen's $d$:
Standardized difference between the mean in group $g$ and in group $h$: $$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$ Indicates how many standard deviations $s_p$ the two sample means lie apart
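The three effect size measures can be computed side by side. The code below uses made-up three-group data and takes Cohen's $d$ for groups 1 and 3:

```python
# Sketch: eta squared, omega squared, and Cohen's d on made-up data.
import numpy as np

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
N = sum(len(g) for g in groups)
I = len(groups)
all_scores = np.concatenate(groups)
overall_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - overall_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_scores - overall_mean) ** 2).sum()

df_between = I - 1
ms_error = ss_error / (N - I)

eta_sq = ss_between / ss_total           # positively biased sample estimate
omega_sq = (ss_between - df_between * ms_error) / (ss_total + ms_error)

# Cohen's d for group 1 (g) versus group 3 (h), using the pooled sd
s_p = np.sqrt(ms_error)
d = (groups[0].mean() - groups[2].mean()) / s_p
```

Note that $\omega^2$ comes out smaller than $\eta^2$, reflecting its correction for positive bias.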
ANOVA table
This is how the entries of the ANOVA table are computed:
 Between: sum of squares between; df between $= I - 1$; mean square between $=$ sum of squares between $/$ df between; $F =$ mean square between $/$ mean square error
 Error: sum of squares error; df error $= N - I$; mean square error $=$ sum of squares error $/$ df error
 Total: sum of squares total $=$ sum of squares between $+$ sum of squares error; df total $= N - 1$
Equivalent to
One way ANOVA is equivalent to:
OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
 the $F$ test of the ANOVA is equivalent to the $F$ test for the regression model
 the $t$ test for contrast $i$ is equivalent to the $t$ test for regression coefficient $\beta_i$ (the specific contrast tested depends on how the code variables are defined)
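This equivalence can be checked numerically: fit an OLS regression with $I - 1$ dummy code variables and compare its model $F$ to the ANOVA $F$. The data below are made up:

```python
# Sketch: the F test of one way ANOVA equals the F test for an OLS
# regression with I - 1 dummy code variables. Data are made up.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 6.0, 7.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
y = np.concatenate(groups)
N, I = len(y), len(groups)

# Design matrix: intercept plus I - 1 dummy (indicator) code variables
labels = np.repeat(np.arange(I), [len(g) for g in groups])
X = np.column_stack([np.ones(N)] + [(labels == i).astype(float)
                                    for i in range(1, I)])

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
ss_model = ((y_hat - y.mean()) ** 2).sum()
ss_resid = ((y - y_hat) ** 2).sum()
F_regression = (ss_model / (I - 1)) / (ss_resid / (N - I))

F_anova, _ = stats.f_oneway(*groups)
```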
Example context
One way ANOVA could for instance be used to answer the question:
Is the average mental health score different between people from a low, moderate, and high economic class?
SPSS
How to perform a one way ANOVA in SPSS:
Analyze > Compare Means > One-Way ANOVA...
 Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or
Analyze > General Linear Model > Univariate...
 Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Jamovi
How to perform a one way ANOVA in jamovi:
ANOVA > ANOVA
 Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors