This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
$F$ test for the complete regression model:
H_{0}: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
or equivalently
H_{0}: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
$t$ test for individual regression coefficient $\beta_k$:
H_{0}: $\beta_k = 0$
in the regression equation
$
\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\mu_y$ represents the population mean of the dependent variable $ y$ given the scores on the independent variables.
H_{0}: $\mu = \mu_0$
$\mu$ is the population mean; $\mu_0$ is the population mean according to the null hypothesis
H_{0}: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$
or equivalently
H_{0}: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$,
the probability of drawing an observation from condition $J$ is $\pi_J$
Model chi-squared test for the complete regression model:
H_{0}: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
Wald test for individual regression coefficient $\beta_k$:
H_{0}: $\beta_k = 0$
or in terms of odds ratio:
H_{0}: $e^{\beta_k} = 1$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H_{0}: $\beta_k = 0$
or in terms of odds ratio:
H_{0}: $e^{\beta_k} = 1$
in the regression equation
$
\ln \big(\frac{\pi_{y = 1}}{1 - \pi_{y = 1}} \big) = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K
$. Here $ x_i$ represents independent variable $ i$, $\beta_i$ is the regression weight for independent variable $ x_i$, and $\pi_{y = 1}$ represents the true probability that the dependent variable $ y = 1$ (or equivalently, the proportion of $ y = 1$ in the population) given the scores on the independent variables.
H_{0}: $\mu_1 = \mu_2$
$\mu_1$ is the population mean for group 1, $\mu_2$ is the population mean for group 2
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
$F$ test for the complete regression model:
H_{1}: not all population regression coefficients are 0, or equivalently
H_{1}: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
$t$ test for individual regression coefficient $\beta_k$:
H_{1} two sided: $\beta_k \neq 0$
H_{1} right sided: $\beta_k > 0$
H_{1} left sided: $\beta_k < 0$
H_{1} two sided: $\mu \neq \mu_0$
H_{1} right sided: $\mu > \mu_0$
H_{1} left sided: $\mu < \mu_0$
H_{1}: the population proportions are not all as specified under the null hypothesis
or equivalently
H_{1}: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis
Model chi-squared test for the complete regression model:
H_{1}: not all population regression coefficients are 0
Wald test for individual regression coefficient $\beta_k$:
H_{1}: $\beta_k \neq 0$
or in terms of odds ratio:
H_{1}: $e^{\beta_k} \neq 1$
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$ (see 'Test statistic'), one sided alternatives can also be tested:
H_{1} right sided: $\beta_k > 0$
H_{1} left sided: $\beta_k < 0$
Likelihood ratio chi-squared test for individual regression coefficient $\beta_k$:
H_{1}: $\beta_k \neq 0$
or in terms of odds ratio:
H_{1}: $e^{\beta_k} \neq 1$
H_{1} two sided: $\mu_1 \neq \mu_2$
H_{1} right sided: $\mu_1 > \mu_2$
H_{1} left sided: $\mu_1 < \mu_2$
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
In the population, the residuals are normally distributed at each combination of values of the independent variables
In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Scores are normally distributed in the population
Sample is a simple random sample from the population. That is, observations are independent of one another
Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more
Sample is a simple random sample from the population. That is, observations are independent of one another
In the population, the relationship between the independent variables and the log odds $\ln (\frac{\pi_{y=1}}{1 - \pi_{y=1}})$ is linear
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Within each population, the scores on the dependent variable are normally distributed
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
$F$ test for the complete regression model:
$
\begin{aligned}[t]
F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\
&= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\
&= \dfrac{\mbox{mean square model}}{\mbox{mean square error}}
\end{aligned}
$
where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables
$t$ test for individual $\beta_k$:
$t = \dfrac{b_k}{SE_{b_k}}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
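As a minimal sketch of how these statistics can be computed outside of SPSS or jamovi, assuming Python with numpy and statsmodels is available (the data below are simulated purely for illustration):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
N, K = 100, 2                                # sample size, number of independent variables
X = rng.normal(size=(N, K))                  # illustrative independent variables
y = 0.5 * X[:, 0] + rng.normal(size=N)       # illustrative dependent variable
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.fvalue, fit.f_pvalue)              # F test for the complete model, df = (K, N - K - 1)
print(fit.tvalues, fit.pvalues)              # t tests for the individual regression coefficients
print(fit.bse)                               # the standard errors SE_{b_k}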
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation,
$N$ is the sample size.
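A minimal sketch, assuming Python with numpy and SciPy is available (the scores are illustrative):

import numpy as np
from scipy import stats

y = np.array([48.1, 52.3, 50.7, 47.9, 53.2, 49.5])               # illustrative sample
mu0 = 50                                                          # value under H0
t, p = stats.ttest_1samp(y, popmean=mu0)                          # two sided p value by default
t_by_hand = (y.mean() - mu0) / (y.std(ddof=1) / np.sqrt(len(y)))  # same t, via the formula above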
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
where the expected cell count for one cell = $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells
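A minimal sketch, assuming Python with numpy and SciPy is available (counts and proportions are illustrative):

import numpy as np
from scipy import stats

observed = np.array([18, 55, 27])       # illustrative observed counts for J = 3 conditions
pi_0 = np.array([0.2, 0.6, 0.2])        # population proportions under H0
expected = observed.sum() * pi_0        # expected cell counts N * pi_j
x2, p = stats.chisquare(f_obs=observed, f_exp=expected)   # df = J - 1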
Model chi-squared test for the complete regression model:
$X^2 = D_{null} - D_K = \mbox{null deviance} - \mbox{model deviance}$
$D_{null}$, the null deviance, is conceptually similar to the total variance of the dependent variable in OLS regression analysis. $D_K$, the model deviance, is conceptually similar to the residual variance in OLS regression analysis.
Wald test for individual $\beta_k$:
The Wald statistic can be defined in two ways:
Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$
Wald $ = \dfrac{b_k}{SE_{b_k}}$
SPSS uses the first definition
Likelihood ratio chi-squared test for individual $\beta_k$:
$X^2 = D_{K-1} - D_K$
$D_{K-1}$ is the deviance of the model that excludes independent variable $k$; $D_K$ is the deviance of the model that includes it.
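A minimal sketch of all three tests, assuming Python with numpy, SciPy, and statsmodels is available (the data are simulated purely for illustration):

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(seed=1)
N, K = 200, 2
X = sm.add_constant(rng.normal(size=(N, K)))                    # constant plus K predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * X[:, 1]))))   # illustrative 0/1 outcome
full = sm.Logit(y, X).fit(disp=0)
x2_model = 2 * (full.llf - full.llnull)       # null deviance minus model deviance
p_model = stats.chi2.sf(x2_model, df=K)       # model chi-squared test
print(full.tvalues)                           # Wald z = b_k / SE_{b_k} per coefficient
reduced = sm.Logit(y, X[:, :-1]).fit(disp=0)  # model without the last predictor
x2_k = 2 * (full.llf - reduced.llf)           # D_{K-1} - D_K
p_k = stats.chi2.sf(x2_k, df=1)               # likelihood ratio test for that beta_k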
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2,
$s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2,
$n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
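A minimal sketch, assuming Python with numpy and SciPy is available (the group scores are illustrative):

import numpy as np
from scipy import stats

y1 = np.array([51.2, 48.7, 53.1, 50.4, 49.9])    # illustrative group 1 scores
y2 = np.array([46.3, 47.8, 45.1, 49.0, 44.6])    # illustrative group 2 scores
t, p = stats.ttest_ind(y1, y2, equal_var=False)  # equal_var=False gives Welch's test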
Sample standard deviation of the residuals $s$
n.a.
n.a.
n.a.
n.a.
$\begin{aligned}
s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}}
\end{aligned}
$
Sampling distribution of $F$:
$F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
$t$ distribution with $N - K - 1$ (df error) degrees of freedom
$t$ distribution with $N - 1$ degrees of freedom
Approximately the chi-squared distribution with $J - 1$ degrees of freedom
Sampling distribution of $X^2$, as computed in the model chi-squared test for the complete model:
chi-squared distribution with $K$ (number of independent variables) degrees of freedom
Sampling distribution of the Wald statistic:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: approximately the chi-squared distribution with 1 degree of freedom
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: approximately the standard normal distribution
Sampling distribution of $X^2$, as computed in the likelihood ratio chi-squared test for individual $\beta_k$:
chi-squared distribution with 1 degree of freedom
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$
The first definition of $k$ is used by statistical software; the second is often used for hand calculations.
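Both definitions of $k$ are easy to compute directly; a minimal sketch assuming Python with numpy is available (group scores illustrative):

import numpy as np

y1 = np.array([51.2, 48.7, 53.1, 50.4, 49.9])
y2 = np.array([46.3, 47.8, 45.1, 49.0, 44.6])
v1, v2 = y1.var(ddof=1) / len(y1), y2.var(ddof=1) / len(y2)
k_software = (v1 + v2) ** 2 / (v1 ** 2 / (len(y1) - 1) + v2 ** 2 / (len(y2) - 1))  # first definition
k_hand = min(len(y1) - 1, len(y2) - 1)                                             # second definition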
Significant?
Significant?
Significant?
Significant?
Significant?
$F$ test:
Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
$t$ Test two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$t$ Test left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
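Critical values and $p$ values can also be computed directly rather than looked up in tables; a minimal sketch assuming Python with SciPy is available (the observed statistics 3.2 and 2.1 are illustrative):

from scipy import stats

alpha, K, N = 0.05, 2, 100
F_star = stats.f.ppf(1 - alpha, dfn=K, dfd=N - K - 1)   # critical value F*
p_F = stats.f.sf(3.2, dfn=K, dfd=N - K - 1)             # p value for an observed F of 3.2
t_star = stats.t.ppf(1 - alpha / 2, df=N - K - 1)       # two sided critical value t*
p_t = 2 * stats.t.sf(abs(2.1), df=N - K - 1)            # two sided p value for an observed t of 2.1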
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
For the Wald test:
If defined as Wald $ = \dfrac{b_k^2}{SE^2_{b_k}}$: same procedure as for the chi-squared tests. Wald can be interpreted as $X^2$.
If defined as Wald $ = \dfrac{b_k}{SE_{b_k}}$: same procedure as for any $z$ test. Wald can be interpreted as $z$.
Two sided:
Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$
$C\%$ confidence interval for $\mu$
n.a.
Wald-type approximate $C\%$ confidence interval for $\beta_k$
Approximate $C\%$ confidence interval for $\mu_1  \mu_2$
Confidence interval for $\beta_k$:
$b_k \pm t^* \times SE_{b_k}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
$\hat{y} \pm t^* \times SE_{\hat{y}}$
If only one independent variable:
$SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
$\hat{y} \pm t^* \times SE_{y_{new}}$
If only one independent variable:
$SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
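All three intervals can be obtained in one place with statsmodels; a minimal sketch, assuming Python with numpy and statsmodels is available (data simulated purely for illustration, one predictor):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
x = rng.normal(size=50)
y = 2 + 0.5 * x + rng.normal(size=50)                          # illustrative data
fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.conf_int(alpha=0.05))                                # 95% CIs for beta_0 and beta_1
x_star = sm.add_constant(np.array([1.0]), has_constant='add')  # new value x* = 1
frame = fit.get_prediction(x_star).summary_frame(alpha=0.05)
print(frame[['mean_ci_lower', 'mean_ci_upper']])               # confidence interval for mu_y at x*
print(frame[['obs_ci_lower', 'obs_ci_upper']])                 # prediction interval for y_new at x*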
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
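A minimal sketch of this interval, assuming Python with numpy and SciPy is available (scores illustrative):

import numpy as np
from scipy import stats

y = np.array([48.1, 52.3, 50.7, 47.9, 53.2, 49.5])
C = 95
t_star = stats.t.ppf(0.5 + C / 200, df=len(y) - 1)     # area C/100 between -t* and t*
half_width = t_star * y.std(ddof=1) / np.sqrt(len(y))
ci = (y.mean() - half_width, y.mean() + half_width)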
$b_k \pm z^* \times SE_{b_k}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
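A minimal sketch of the Welch interval, assuming Python with numpy and SciPy is available (group scores illustrative; $k$ follows the first definition under 'Sampling distribution'):

import numpy as np
from scipy import stats

y1 = np.array([51.2, 48.7, 53.1, 50.4, 49.9])
y2 = np.array([46.3, 47.8, 45.1, 49.0, 44.6])
v1, v2 = y1.var(ddof=1) / len(y1), y2.var(ddof=1) / len(y2)
k = (v1 + v2) ** 2 / (v1 ** 2 / (len(y1) - 1) + v2 ** 2 / (len(y2) - 1))
t_star = stats.t.ppf(0.975, df=k)                    # 95% interval
diff = y1.mean() - y2.mean()
ci = (diff - t_star * np.sqrt(v1 + v2), diff + t_star * np.sqrt(v1 + v2))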
Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$$
\begin{align}
R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\
&= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\
&= r(y, \hat{y})^2
\end{align}
$$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the squared correlation between the independent variable $x$ and the dependent variable $y$.
Wherry's $R^2$ / shrunken $R^2$:
Corrects for the positive bias in $R^2$ and is equal to
$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$
Stein's $R^2$:
Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
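A minimal sketch computing all three from a fitted model, assuming Python with numpy and statsmodels is available (data simulated purely for illustration). Note that Wherry's $R^2_W$ is what most software reports as adjusted $R^2$:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
N, K = 100, 2
X = rng.normal(size=(N, K))
y = 0.5 * X[:, 0] + rng.normal(size=N)
fit = sm.OLS(y, sm.add_constant(X)).fit()
R2 = fit.rsquared
R2_W = 1 - (N - 1) / (N - K - 1) * (1 - R2)    # Wherry / shrunken; equals fit.rsquared_adj
R2_S = 1 - ((N - 1) * (N - 2) * (N + 1)) / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)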
Per independent variable:
Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
Semipartial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
Cohen's $d$:
Standardized difference between the sample mean and $\mu_0$:
$$d = \frac{\bar{y}  \mu_0}{s}$$
Indicates how many standard deviations $s$ the sample mean $\bar{y}$ lies from $\mu_0$
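A minimal sketch, assuming Python with numpy is available (scores illustrative):

import numpy as np

y = np.array([48.1, 52.3, 50.7, 47.9, 53.2, 49.5])
mu0 = 50
d = (y.mean() - mu0) / y.std(ddof=1)   # Cohen's d for the one sample design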

$R^2_L = \dfrac{D_{null} - D_K}{D_{null}}$
There are several other goodness of fit measures in logistic regression; no single measure is generally agreed upon.
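As a minimal sketch, continuing the logistic regression sketch given under 'Test statistic' (so full is the fitted statsmodels Logit result, statsmodels assumed available): $R^2_L$ coincides with McFadden's pseudo $R^2$, which statsmodels exposes as prsquared.

R2_L = 1 - full.llf / full.llnull   # equals (D_null - D_K) / D_null, since D = -2 * log-likelihood
print(R2_L, full.prsquared)         # identical values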
Can mental health be predicted from physical health, economic class, and gender?
Is the average mental health score of office workers different from $\mu_0$ = 50?
Are the proportions of people with low, moderate, and high socioeconomic status in the population different from $\pi_{low}$ = .2, $\pi_{moderate}$ = .6, and $\pi_{high}$ = .2?
Can body mass index, stress level, and gender predict whether people get diagnosed with diabetes?
Is the average mental health score different between men and women?
SPSS
SPSS
SPSS
SPSS
SPSS
Analyze > Regression > Linear...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
Analyze > Compare Means > One-Sample T Test...
Put your variable in the box below Test Variable(s)
Fill in the value for $\mu_0$ in the box next to Test Value
Put your categorical variable in the box below Test Variable List
Fill in the population proportions / probabilities according to $H_0$ in the box below Expected Values. If $H_0$ states that they are all equal, just pick 'All categories equal' (default)
Analyze > Regression > Binary Logistic...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Covariate(s)
Analyze > Compare Means > Independent-Samples T Test...
Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
Continue and click OK
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Regression > Linear Regression
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
T-Tests > One Sample T-Test
Put your variable in the box below Dependent Variables
Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Frequencies > N Outcomes - $\chi^2$ Goodness of fit
Put your categorical variable in the box below Variable
Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)
Regression > 2 Outcomes - Binomial
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
T-Tests > Independent Samples T-Test
Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
Under Tests, select Welch's
Under Hypothesis, select your alternative hypothesis