You can use the Statistics and Machine Learning Toolbox™ function `anova2` to perform a balanced two-way analysis of variance (ANOVA). To perform two-way ANOVA for an unbalanced design, use `anovan`. For an example, see Two-Way ANOVA for Unbalanced Design.

As in one-way ANOVA, the data for a two-way ANOVA study can
be experimental or observational. The difference between one-way and
two-way ANOVA is that in two-way ANOVA, the effects of two factors
on a response variable are of interest. These two factors can be independent,
and have no interaction effect, or the impact of one factor on the
response variable can depend on the group (level) of the other factor.
If the two factors have no interactions, the model is called an *additive* model.

Suppose an automobile company has two factories, and each factory
makes the same three car models. The gas mileage in the cars can vary
from factory to factory and from model to model. These two factors,
factory and model, explain the differences in mileage, that is, the
response. One measure of interest is the difference in mileage due
to the production methods between factories. Another measure of interest
is the difference in the mileage of the models (irrespective of the
factory) due to different design specifications. The effects of these
measures of interest are *additive*. Now suppose instead that only one model has different gas mileage between the two factories, while the mileage of the other two models is the same in both factories. This is called an *interaction* effect.
To measure an interaction effect, there must be multiple observations
for some combination of factory and car model. These multiple observations
are called *replications*.

Two-way ANOVA is a special case of the linear model. The two-way ANOVA form of the model is

$${y}_{ijr}=\mu +{\alpha}_{i}+{\beta}_{j}+{\left(\alpha \beta \right)}_{ij}+{\epsilon}_{ijr}$$

where,

- *y*_{ijr} is an observation of the response variable, where *i* represents group *i* of row factor *A*, *i* = 1, 2, ..., *I*; *j* represents group *j* of column factor *B*, *j* = 1, 2, ..., *J*; and *r* represents the replication number, *r* = 1, 2, ..., *R*. There are a total of *N* = *IJR* observations.
- *μ* is the overall mean.
- *α*_{i} are the deviations of groups of row factor *A* from the overall mean *μ* due to row factor *A*. The values of *α*_{i} sum to 0: $${\sum}_{i=1}^{I}{\alpha}_{i}=0.$$
- *β*_{j} are the deviations of groups in column factor *B* from the overall mean *μ* due to column factor *B*. All observations in a given column of the data share the same *β*_{j}, and the values of *β*_{j} sum to 0: $${\sum}_{j=1}^{J}{\beta}_{j}=0.$$
- (*αβ*)_{ij} are the interactions. The values in each row and in each column of (*αβ*)_{ij} sum to 0: $${\sum}_{i=1}^{I}{\left(\alpha \beta \right)}_{ij}={\sum}_{j=1}^{J}{\left(\alpha \beta \right)}_{ij}=0.$$
- *ε*_{ijr} are the random disturbances. They are assumed to be independent, normally distributed, and have constant variance.
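As a concrete illustration, the model can be used to simulate a balanced data set in the matrix form that `anova2` expects. All parameter values below are illustrative assumptions, not quantities from this page:

```matlab
% Simulate y_ijr = mu + alpha_i + beta_j + (alpha beta)_ij + eps_ijr.
% All numeric values here are illustrative assumptions.
I = 3; J = 2; R = 2;              % row groups, column groups, replications
mu    = 30;                       % overall mean
alpha = [-1 0 1];                 % row factor effects; sum to 0
beta  = [-2 2];                   % column factor effects; sum to 0
ab    = zeros(I,J);               % interactions; all 0 gives an additive model
y = zeros(I*R, J);                % data matrix: R consecutive rows per group of A
for i = 1:I
    for j = 1:J
        r = (i-1)*R + (1:R);      % rows of the matrix holding group (i,j)
        y(r,j) = mu + alpha(i) + beta(j) + ab(i,j) + randn(R,1);
    end
end
```

With this layout, `anova2(y, R)` performs the balanced two-way ANOVA on the simulated data.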

In the mileage example:

- *y*_{ijr} are the gas mileage observations.
- *μ* is the overall mean gas mileage.
- *α*_{i} are the deviations of each car's gas mileage from the mean gas mileage *μ* due to the car's *factory* (the row factor).
- *β*_{j} are the deviations of each car's gas mileage from the mean gas mileage *μ* due to the car's *model* (the column factor).

`anova2` requires that data be balanced, so each combination of model and factory must have the same number of cars.

Two-way ANOVA tests hypotheses about the effects of factors *A* and *B*,
and their interaction on the response variable *y*.
The hypotheses about the equality of the mean response for groups
of row factor *A* are

$$\begin{array}{l}{H}_{0}:{\alpha}_{1}={\alpha}_{2}=\cdots ={\alpha}_{I}\\ {H}_{1}:\text{at least one }{\alpha}_{i}\text{ is different},\text{ }i=1,\text{}2,\text{}\mathrm{...},\text{}I.\end{array}$$

The hypotheses about the equality of the mean response for groups
of column factor *B* are

$$\begin{array}{l}{H}_{0}:{\beta}_{1}={\beta}_{2}=\cdots ={\beta}_{J}\\ {H}_{1}:\text{at least one }{\beta}_{j}\text{ is different},\text{ }j=1,\text{}2,\text{}\mathrm{...},\text{}J.\end{array}$$

The hypotheses about the interaction of the column and row factors are

$$\begin{array}{l}{H}_{0}:{\left(\alpha \beta \right)}_{ij}=0\\ {H}_{1}:\text{at least one }{\left(\alpha \beta \right)}_{ij}\ne 0\end{array}$$

To perform balanced two-way ANOVA using `anova2`, you must arrange data in a specific matrix form. The columns of the matrix must correspond to groups of the column factor, *B*. The rows must correspond to the groups of the row factor, *A*, with the same number of replications for each combination of the groups of factors *A* and *B*.

Suppose that row factor *A* has three groups, and column factor *B* has two groups (levels). Also suppose that each combination of factors *A* and *B* has two measurements or observations (`reps = 2`). Then, each group of factor *A* has four observations and each group of factor *B* has six observations.

$$\begin{array}{c}\begin{array}{cc}B=1& B=2\end{array}\\ \left[\begin{array}{cc}{y}_{111}& {y}_{121}\\ {y}_{112}& {y}_{122}\\ {y}_{211}& {y}_{221}\\ {y}_{212}& {y}_{222}\\ {y}_{311}& {y}_{321}\\ {y}_{312}& {y}_{322}\end{array}\right]\end{array}\begin{array}{c}\\ \begin{array}{c}\begin{array}{c}\\ \end{array}\}A=1\\ \begin{array}{c}\\ \end{array}\}A=2\\ \begin{array}{c}\\ \end{array}\}A=3\end{array}\end{array}$$

The subscripts indicate row, column,
and replication, respectively. For example, *y*_{221} corresponds
to the measurement for the second group of factor *A*,
the second group of factor *B*, and the first replication
for this combination.
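In code, the layout above translates directly into a 6-by-2 matrix and a replication count. The numeric values in this sketch are placeholders, not data from this page:

```matlab
% Rows 1-2 hold group A=1, rows 3-4 hold A=2, rows 5-6 hold A=3;
% column 1 is B=1 and column 2 is B=2. Values are placeholders.
y = [ 4.1  5.0
      4.3  5.2
      3.9  4.8
      4.0  5.1
      4.5  5.4
      4.4  5.6 ];
reps = 2;              % observations per combination of A and B
p = anova2(y, reps);   % p(1): column factor, p(2): row factor, p(3): interaction
```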

This example shows how to perform two-way ANOVA to determine the effect of car model and factory on the mileage rating of cars.

Load and display the sample data.

```
load mileage
mileage
```

```
mileage = 6×3

   33.3000   34.5000   37.4000
   33.4000   34.8000   36.8000
   32.9000   33.8000   37.6000
   32.6000   33.4000   36.6000
   32.5000   33.7000   37.0000
   33.0000   33.9000   36.7000
```

There are three car models (columns) and two factories (rows). The data has six rows because each factory provided three cars of each model for the study (that is, the replication number is three). The data from the first factory is in the first three rows, and the data from the second factory is in the last three rows.

Perform two-way ANOVA. Return the structure of statistics, `stats`, to use in multiple comparisons.

```
nmbcars = 3; % Number of cars from each model, i.e., number of replications
[~,~,stats] = anova2(mileage,nmbcars);
```

You can use the *F*-statistics to perform hypothesis tests to find out if the mileage is the same across models, factories, and model-factory pairs. Before performing these tests, you must adjust for the additive effects. `anova2` returns the *p*-value from these tests.

The *p*-value for the model effect (`Columns`) is zero to four decimal places. This result is a strong indication that the mileage varies from one model to another.

The *p*-value for the factory effect (`Rows`) is 0.0039, which is also highly significant. This value indicates that one factory is outperforming the other in the gas mileage of the cars it produces. The observed *p*-value indicates that, if the gas mileage were truly equal from factory to factory, an *F*-statistic as extreme as the observed one would occur by chance only about four times in 1000.

The factories and models appear to have no interaction. The *p*-value, 0.8411, means that the observed result is likely (84 out of 100 times), given that there is no interaction.
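The three *p*-values discussed above are returned in the first output of `anova2`, in the order column factor, row factor, interaction:

```matlab
load mileage                          % sample data shipped with the toolbox
[p,tbl,stats] = anova2(mileage, 3);   % three cars per model-factory pair
pModel       = p(1);                  % column factor (car model): 0 to four decimals
pFactory     = p(2);                  % row factor (factory): 0.0039
pInteraction = p(3);                  % model-factory interaction: 0.8411
```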

Perform multiple comparisons to find out which pairs of the three car models are significantly different.

```
c = multcompare(stats)
```

Note: Your model includes an interaction term. A test of main effects can be difficult to interpret when the model includes interactions.

```
c = 3×6

    1.0000    2.0000   -1.5865   -1.0667   -0.5469    0.0004
    1.0000    3.0000   -4.5865   -4.0667   -3.5469    0.0000
    2.0000    3.0000   -3.5198   -3.0000   -2.4802    0.0000
```

In the matrix `c`, the first two columns show the pairs of car models being compared. The last column shows the *p*-values for the tests. All *p*-values are small (0.0004, 0.0000, and 0.0000), which indicates that the mean mileages of all car models are significantly different from each other.

In the figure, the blue bar is the comparison interval for the mean mileage of the first car model. The red bars are the comparison intervals for the mean mileage of the second and third car models. None of the second and third comparison intervals overlap with the first comparison interval, indicating that the mean mileage of the first car model is different from the mean mileage of the second and the third car models. If you click on one of the other bars, you can test for the other car models. None of the comparison intervals overlap, indicating that the mean mileage of each car model is significantly different from the other two.

The two-factor ANOVA partitions the total variation into the following components:

- Variation of row factor group means from the overall mean, $${\overline{y}}_{i\mathrm{..}}-{\overline{y}}_{\mathrm{...}}$$
- Variation of column factor group means from the overall mean, $${\overline{y}}_{.j.}-{\overline{y}}_{\mathrm{...}}$$
- Variation of the replication (cell) means from the additive combination of the row factor group mean and column factor group mean, $${\overline{y}}_{ij.}-{\overline{y}}_{i\mathrm{..}}-{\overline{y}}_{.j.}+{\overline{y}}_{\mathrm{...}}$$
- Variation of observations from the replication means, $${y}_{ijr}-{\overline{y}}_{ij.}$$

ANOVA partitions the total sum of squares (SST) into
the sum of squares due to row factor *A* (SS_{A}),
the sum of squares due to column factor *B* (SS_{B}),
the sum of squares due to interaction between *A* and *B* (SS_{AB}),
and the sum of squares error (SSE).

$$\begin{array}{l}\underset{SST}{\underbrace{{\displaystyle \sum _{i=1}^{m}{\displaystyle \sum _{j=1}^{k}{\displaystyle \sum _{r=1}^{R}{\left({y}_{ijr}-{\overline{y}}_{\mathrm{...}}\right)}^{2}}}}}}=\underset{S{S}_{A}}{\underbrace{kR{\displaystyle \sum _{i=1}^{m}{\left({\overline{y}}_{i\mathrm{..}}-{\overline{y}}_{\mathrm{...}}\right)}^{2}}}}+\underset{S{S}_{B}}{\underbrace{mR{\displaystyle \sum _{j=1}^{k}{\left({\overline{y}}_{.j.}-{\overline{y}}_{\mathrm{...}}\right)}^{2}}}}\\ \text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}+\underset{S{S}_{AB}}{\underbrace{R{\displaystyle \sum _{i=1}^{m}{\displaystyle \sum _{j=1}^{k}{\left({\overline{y}}_{ij.}-{\overline{y}}_{i\mathrm{..}}-{\overline{y}}_{.j.}+{\overline{y}}_{\mathrm{...}}\right)}^{2}}}}}+\underset{SSE}{\underbrace{{\displaystyle \sum _{i=1}^{m}{\displaystyle \sum _{j=1}^{k}{\displaystyle \sum _{r=1}^{R}{\left({y}_{ijr}-{\overline{y}}_{ij.}\right)}^{2}}}}}}\end{array}$$
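The partition can be verified numerically. This sketch recomputes each component for the `mileage` data (m = 2 factories as row groups, k = 3 models as column groups, R = 3 replications); it assumes a MATLAB release with implicit expansion and the `'all'` option of `sum` (R2018b or later):

```matlab
load mileage                           % rows 1-3: factory 1; rows 4-6: factory 2
m = 2; k = 3; R = 3;                   % row groups, column groups, replications
y = reshape(mileage, R, m, k);         % y(r,i,j): replication r, row group i, column group j
grand    = mean(mileage(:));           % overall mean ybar_...
cellMean = squeeze(mean(y, 1));        % m-by-k matrix of cell means ybar_ij.
rowMean  = mean(cellMean, 2);          % ybar_i.. (m-by-1)
colMean  = mean(cellMean, 1);          % ybar_.j. (1-by-k)
SSA  = k*R*sum((rowMean - grand).^2);                            % row factor A
SSB  = m*R*sum((colMean - grand).^2);                            % column factor B
SSAB = R*sum((cellMean - rowMean - colMean + grand).^2, 'all');  % interaction
SSE  = sum((y - reshape(cellMean, 1, m, k)).^2, 'all');          % error
SST  = sum((mileage(:) - grand).^2);                             % total
% SST equals SSA + SSB + SSAB + SSE up to floating-point rounding
```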

ANOVA takes the variation
due to the factor or interaction and compares it to the variation
due to error. If the ratio of the two variations is high, then the
effect of the factor or the interaction effect is statistically significant.
You can measure the statistical significance using a test statistic
that has an *F*-distribution.

For the null hypothesis that the mean response for groups of
the row factor *A* are equal, the test statistic
is

$$F=\frac{S{S}_{A}/\left(m-1\right)}{SSE/\left(mk\left(R-1\right)\right)}\sim {F}_{m-1,mk\left(R-1\right)}.$$

For the null hypothesis that the mean response for groups of
the column factor *B* are equal, the test statistic
is

$$F=\frac{S{S}_{B}/\left(k-1\right)}{SSE/\left(mk\left(R-1\right)\right)}\sim {F}_{k-1,mk\left(R-1\right)}.$$

For the null hypothesis that the interactions of the column and row factors are equal to zero, the test statistic is

$$F=\frac{S{S}_{AB}/\left(\left(m-1\right)\left(k-1\right)\right)}{SSE/\left(mk\left(R-1\right)\right)}\sim {F}_{\left(m-1\right)\left(k-1\right),mk\left(R-1\right)}.$$

If the *p*-value for the *F*-statistic
is smaller than the significance level, then ANOVA rejects the null
hypothesis. The most common significance levels are 0.01 and 0.05.
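Given an *F*-statistic and its degrees of freedom, the *p*-value is the upper-tail area of the corresponding *F*-distribution. A sketch for the row factor test (the *F* value here is an illustrative placeholder, and `fcdf` is a Statistics and Machine Learning Toolbox function):

```matlab
m = 2; k = 3; R = 3;            % row groups, column groups, replications
F = 9.45;                       % illustrative F-statistic for the row factor
df1 = m - 1;                    % numerator degrees of freedom
df2 = m*k*(R-1);                % denominator (error) degrees of freedom
p = 1 - fcdf(F, df1, df2);      % upper-tail probability; reject H0 if p < 0.05
```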

The ANOVA table captures the variability in the model by source, the *F*-statistic for testing the significance of this variability, and the *p*-value for deciding on the significance of this variability. The *p*-value returned by `anova2` depends on assumptions about the random disturbances, *ε*_{ijr}, in the model equation. For the *p*-value to be correct, these disturbances need to be independent, normally distributed, and have constant variance. `anova2` returns the standard ANOVA table as a cell array with six columns.

| Column | Definition |
| --- | --- |
| `Source` | The source of the variability. |
| `SS` | The sum of squares due to each source. |
| `df` | The degrees of freedom associated with each source. Suppose *J* is the number of groups in the column factor, *I* is the number of groups in the row factor, and *R* is the number of replications. Then, the total number of observations is *IJR* and the total degrees of freedom is *IJR* – 1. *I* – 1 is the degrees of freedom for the row factor, *J* – 1 is the degrees of freedom for the column factor, (*I* – 1)(*J* – 1) is the interaction degrees of freedom, and *IJ*(*R* – 1) is the error degrees of freedom. |
| `MS` | The mean squares for each source, which is the ratio `SS/df`. |
| `F` | *F*-statistic, which is the ratio of the mean squares. |
| `Prob>F` | The *p*-value, which is the probability that the *F*-statistic can take a value larger than the computed test-statistic value. `anova2` derives this probability from the cdf of the *F*-distribution. |
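Because the second output of `anova2` holds this table as a cell array, individual entries can be read off programmatically. The row and column indices in this sketch assume the standard layout (header row first, then `Columns`, `Rows`, `Interaction`, `Error`, `Total`):

```matlab
load mileage
[~,tbl] = anova2(mileage, 3);
headers   = tbl(1,:);      % header row: 'Source','SS','df','MS','F','Prob>F'
ssColumns = tbl{2,2};      % sum of squares for the column factor (car model)
pColumns  = tbl{2,6};      % p-value for the column factor
```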

The rows of the ANOVA table show the variability in the data, divided by source.

| Row (Source) | Definition |
| --- | --- |
| `Columns` | Variability due to the column factor |
| `Rows` | Variability due to the row factor |
| `Interaction` | Variability due to the interaction of the row and column factors |
| `Error` | Variability due to the differences between the data in each group and the group mean (variability within groups) |
| `Total` | Total variability |

[1] Wu, C. F. J., and M. Hamada. *Experiments: Planning, Analysis, and Parameter Design Optimization*. 2000.

[2] Neter, J., M. H. Kutner, C. J. Nachtsheim, and W. Wasserman. *Applied Linear Statistical Models*. 4th ed. Irwin Press, 1996.

`anova1` | `anova2` | `anovan` | `multcompare`