## Autocorrelation and Partial Autocorrelation

### What Are Autocorrelation and Partial Autocorrelation?

Autocorrelation is the linear dependence of a variable with itself at two points in time. For a stationary process, the autocorrelation between any two observations depends only on the time lag h between them. Define the lag-h autocovariance $\gamma_h = \operatorname{Cov}(y_t, y_{t-h})$. The lag-h autocorrelation is given by

`$\rho_h = \operatorname{Corr}(y_t, y_{t-h}) = \frac{\gamma_h}{\gamma_0}.$`

The denominator $\gamma_0$ is the lag-0 autocovariance, that is, the unconditional variance of the process.
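To make the definition concrete, consider a stationary AR(1) process $y_t = \phi y_{t-1} + \varepsilon_t$, whose autocovariances satisfy $\gamma_h = \phi^h \gamma_0$, so $\rho_h = \phi^h$. A minimal Python sketch (the helper name is illustrative, not part of any toolbox):

```python
# Illustration: for a stationary AR(1) process y_t = phi*y_{t-1} + e_t
# with |phi| < 1, gamma_h = phi^h * gamma_0, so rho_h = gamma_h/gamma_0 = phi^h.

def theoretical_acf_ar1(phi, max_lag):
    """Return [rho_0, rho_1, ..., rho_max_lag] for an AR(1) with coefficient phi."""
    sigma2 = 1.0                       # innovation variance (cancels out of rho)
    gamma0 = sigma2 / (1.0 - phi**2)   # lag-0 autocovariance (unconditional variance)
    gammas = [gamma0 * phi**h for h in range(max_lag + 1)]
    return [g / gamma0 for g in gammas]

acf = theoretical_acf_ar1(0.5, 3)      # rho_0 = 1, then geometric decay 0.5^h
```

Note that $\rho_0 = 1$ by construction, regardless of the innovation variance.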

Correlation between two variables can also result from their mutual linear dependence on other variables (confounding). Partial autocorrelation is the autocorrelation between $y_t$ and $y_{t-h}$ after removing any linear dependence on the intervening observations $y_{t-1}, y_{t-2}, \ldots, y_{t-h+1}$. The lag-h partial autocorrelation is denoted $\varphi_{h,h}$.
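The "remove the linear dependence, then correlate" definition can be sketched directly in Python (`partial_autocorr` is a hypothetical helper, not the toolbox's `parcorr`): regress both $y_t$ and $y_{t-h}$ on the intervening observations and correlate the two residual series.

```python
import numpy as np

def partial_autocorr(y, h):
    """Lag-h partial autocorrelation via the residual-correlation definition."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Intercept plus the intervening regressors y_{t-1}, ..., y_{t-h+1}, for t = h..T-1.
    X = np.column_stack([np.ones(T - h)] + [y[h-j:T-j] for j in range(1, h)])

    def residual(v):
        beta, *_ = np.linalg.lstsq(X, v, rcond=None)
        return v - X @ beta

    r_t  = residual(y[h:])        # y_t with intervening dependence removed
    r_th = residual(y[:T-h])      # y_{t-h} with intervening dependence removed
    return np.corrcoef(r_t, r_th)[0, 1]
```

For h = 1 there are no intervening observations, so the result reduces to an ordinary sample lag-1 correlation.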

### Theoretical ACF and PACF

The autocorrelation function (ACF) of a time series $y_t$, $t = 1, \ldots, T$, is the sequence $\rho_h$, $h = 1, 2, \ldots, T-1$. The partial autocorrelation function (PACF) is the sequence $\varphi_{h,h}$, $h = 1, 2, \ldots, T-1$.

The theoretical ACF and PACF of the AR, MA, and ARMA conditional mean models are known and differ markedly across models, which makes them useful for model selection. The following table summarizes the ACF and PACF behavior of these models.

| Conditional Mean Model | ACF | PACF |
| --- | --- | --- |
| AR(p) | Tails off gradually | Cuts off after p lags |
| MA(q) | Cuts off after q lags | Tails off gradually |
| ARMA(p,q) | Tails off gradually | Tails off gradually |
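You can check these patterns numerically with the Durbin-Levinson recursion, which converts a theoretical ACF into the corresponding PACF (a Python sketch; the function name is illustrative):

```python
# Durbin-Levinson recursion: converts a theoretical ACF [rho_1, ..., rho_H]
# into the PACF [phi_{1,1}, ..., phi_{H,H}].

def pacf_from_acf(rho):
    H = len(rho)
    prev = [rho[0]]                  # coefficients phi_{1,j}
    pacf = [rho[0]]                  # phi_{1,1} = rho_1
    for h in range(2, H + 1):
        num = rho[h-1] - sum(prev[j] * rho[h-2-j] for j in range(h-1))
        den = 1.0 - sum(prev[j] * rho[j] for j in range(h-1))
        phi_hh = num / den
        prev = [prev[j] - phi_hh * prev[h-2-j] for j in range(h-1)] + [phi_hh]
        pacf.append(phi_hh)
    return pacf

# AR(1), coefficient 0.5: ACF tails off as 0.5^h; PACF cuts off after lag 1.
ar1_pacf = pacf_from_acf([0.5**h for h in range(1, 6)])

# MA(1) with rho_1 = 0.4: ACF cuts off after lag 1; PACF tails off (all lags nonzero).
ma1_pacf = pacf_from_acf([0.4, 0.0, 0.0, 0.0, 0.0])
```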

### Sample ACF and PACF

Sample autocorrelation and sample partial autocorrelation are statistics that estimate the theoretical autocorrelation and partial autocorrelation. As a qualitative model selection tool, you can compare the sample ACF and PACF of your data against known theoretical autocorrelation functions [1].

For an observed series $y_1, y_2, \ldots, y_T$, denote the sample mean by $\overline{y}$. The sample lag-h autocorrelation is given by

`$\hat{\rho}_h = \frac{\sum_{t=h+1}^{T} \left(y_t - \overline{y}\right)\left(y_{t-h} - \overline{y}\right)}{\sum_{t=1}^{T} \left(y_t - \overline{y}\right)^2}.$`
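A direct Python transcription of this formula (hypothetical helper; note the shift to 0-based indexing inside the function):

```python
# Sample lag-h autocorrelation: cross products of deviations from the
# sample mean, normalized by the total sum of squared deviations.

def sample_acf(y, h):
    T = len(y)
    ybar = sum(y) / T
    num = sum((y[t] - ybar) * (y[t-h] - ybar) for t in range(h, T))
    den = sum((v - ybar)**2 for v in y)
    return num / den

# sample_acf(y, 0) is always 1: the numerator and denominator coincide.
```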

The standard error for testing the significance of a single lag-h autocorrelation $\hat{\rho}_h$ is approximately

`$SE_{\rho} = \sqrt{\left(1 + 2\sum_{i=1}^{h-1} \hat{\rho}_i^2\right) / T}.$`

When you use `autocorr` to plot the sample autocorrelation function (also known as the correlogram), approximate 95% confidence intervals are drawn at $\pm 2\,SE_{\rho}$ by default. Optional input arguments let you modify the calculation of the confidence bounds.
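The standard error and the default confidence bounds can be sketched as follows (Python; this mirrors the formula above, not the internals of `autocorr`):

```python
import math

# Large-lag (Bartlett) standard error for the lag-h sample autocorrelation.
# rho_hat is the list [rho_1_hat, rho_2_hat, ...] of sample autocorrelations
# at lags below h; T is the series length. Helper names are illustrative.

def bartlett_se(rho_hat, h, T):
    """Approximate standard error of the lag-h sample autocorrelation."""
    return math.sqrt((1.0 + 2.0 * sum(r**2 for r in rho_hat[:h-1])) / T)

se1 = bartlett_se([], 1, 100)        # at lag 1 this reduces to 1/sqrt(T) = 0.1
bounds = (-2.0 * se1, 2.0 * se1)     # default 95% confidence bounds at lag 1
```

Because earlier sample autocorrelations enter the sum, the bounds widen with the lag when low-order autocorrelations are large.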

The sample lag-h partial autocorrelation, $\hat{\varphi}_{h,h}$, is the estimated lag-h coefficient in an AR model containing h lags. The standard error for testing the significance of a single lag-h partial autocorrelation is approximately $1/\sqrt{T-1}$. When you use `parcorr` to plot the sample partial autocorrelation function, approximate 95% confidence intervals are drawn at $\pm 2/\sqrt{T-1}$ by default. Optional input arguments let you modify the calculation of the confidence bounds.
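That is, one common estimator of $\hat{\varphi}_{h,h}$ is the last coefficient of a fitted AR(h) regression. A Python sketch using ordinary least squares (an illustration of the idea, not `parcorr`'s internal implementation):

```python
import numpy as np

def sample_pacf(y, h):
    """Sample lag-h partial autocorrelation as the lag-h coefficient of an AR(h) fit."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Regress y_t on an intercept and y_{t-1}, ..., y_{t-h}, for t = h..T-1.
    X = np.column_stack([np.ones(T - h)] + [y[h-j:T-j] for j in range(1, h + 1)])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return beta[-1]                  # coefficient on y_{t-h}
```

For example, on a series that follows $y_t = c + \phi y_{t-1}$ exactly, `sample_pacf(y, 1)` recovers $\phi$.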

## References

[1] Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.