
Rolling-Window Analysis of Time-Series Models

Rolling-window analysis of a time-series model assesses:

  • The stability of the model over time. A common time-series model assumption is that the coefficients are constant with respect to time. Checking for instability amounts to examining whether the coefficients are time-invariant.

  • The forecast accuracy of the model.

Rolling-Window Analysis for Parameter Stability

Suppose that you have data for all periods in the sample. To check the stability of a time-series model using a rolling window:

  1. Choose a rolling window size, m, i.e., the number of consecutive observations per rolling window. The size of the rolling window depends on the sample size, T, and the periodicity of the data. In general, you can use a short rolling window size for data collected in short intervals, and a larger size for data collected in longer intervals. Longer rolling window sizes tend to yield smoother rolling-window estimates than shorter sizes.

  2. If the number of increments between successive rolling windows is 1 period, then partition the entire data set into N = T - m + 1 subsamples. The first rolling window contains the observations for periods 1 through m, the second rolling window contains the observations for periods 2 through m + 1, and so on.

    There are variations on the partitions, e.g., rather than roll one observation ahead, you can roll four observations for quarterly data.

  3. Estimate the model using each rolling window subsample.

  4. Plot each estimate and its pointwise confidence interval, \hat{\theta} \pm 2[\widehat{SE}(\hat{\theta})], over the rolling window index to see how the estimate changes with time. You should expect a little fluctuation in each estimate, but large fluctuations or trends indicate that the parameter might be time varying. A Python sketch of these steps follows this list.
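The following is a minimal sketch of steps 1 through 4 in Python (NumPy and Matplotlib). It assumes an AR(1) model fit by ordinary least squares to a simulated series; the series, the chosen window size, and the helper fit_ar1 are illustrative placeholders, not part of any particular toolbox.

  import numpy as np
  import matplotlib.pyplot as plt

  def fit_ar1(y):
      """Fit y[t] = c + phi*y[t-1] + eps[t] by OLS; return phi and its standard error."""
      X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
      resp = y[1:]
      beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
      resid = resp - X @ beta
      sigma2 = resid @ resid / (len(resp) - 2)      # residual variance (2 estimated coefficients)
      cov = sigma2 * np.linalg.inv(X.T @ X)         # OLS coefficient covariance matrix
      return beta[1], np.sqrt(cov[1, 1])

  rng = np.random.default_rng(0)
  T, m = 300, 60                                    # sample size and rolling window size (step 1)
  y = np.zeros(T)
  for t in range(1, T):                             # simulate an AR(1) series as placeholder data
      y[t] = 0.5 * y[t - 1] + rng.standard_normal()

  N = T - m + 1                                     # number of rolling window subsamples (step 2)
  phi = np.empty(N)
  se = np.empty(N)
  for n in range(N):                                # estimate the model on each window (step 3)
      phi[n], se[n] = fit_ar1(y[n:n + m])

  # Plot the estimates with pointwise +/- 2*SE bands over the window index (step 4)
  idx = np.arange(1, N + 1)
  plt.plot(idx, phi, label="AR(1) coefficient estimate")
  plt.fill_between(idx, phi - 2 * se, phi + 2 * se, alpha=0.3, label="+/- 2 SE")
  plt.xlabel("Rolling window index")
  plt.ylabel("Coefficient")
  plt.legend()
  plt.show()

A coefficient path that fluctuates only a little and stays within its pointwise bands is consistent with a stable parameter; large fluctuations or trends suggest that the coefficient is time varying.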

For more details on assessing the stability of a model using rolling window analysis, see [1].

Rolling-Window Analysis for Predictive Performance

Suppose that you have data for all periods in the sample. You can backtest to check the predictive performance of several time-series models using a rolling window. These steps outline how to backtest.

  1. Choose a rolling window size, m, i.e., the number of consecutive observations per rolling window. The size of the rolling window depends on the sample size, T, and the periodicity of the data. In general, you can use a short rolling window size for data collected in short intervals, and a larger size for data collected in longer intervals. Longer rolling window sizes tend to yield smoother rolling-window estimates than shorter sizes.

  2. Choose a forecast horizon, h. The forecast horizon depends on the application and the periodicity of the data.

  3. If the number of increments between successive rolling windows is 1 period, then partition the entire data set into N = T - m + 1 subsamples. The first rolling window contains the observations for periods 1 through m, the second rolling window contains the observations for periods 2 through m + 1, and so on. The figure illustrates the partitions.

    [Figure: the sample is partitioned into overlapping rolling windows of size m, each followed by an h-period forecast horizon (Rolling Window 1, Rolling Window 2, Rolling Window 3, and so on).]

    There are variations on the partitions, e.g., rather than roll one observation ahead, you can roll four observations for quarterly data.

  4. For each rolling window subsample:

    1. Estimate each model.

    2. Compute the 1-step-ahead through h-step-ahead forecasts.

    3. Compute the forecast errors for each forecast, that is, e_{nj} = y_{m+n+j-1} - \hat{y}_{nj}, where:

      • e_{nj} is the forecast error of rolling window n for the j-step-ahead forecast.

      • y is the response.

      • \hat{y}_{nj} is the j-step-ahead forecast of rolling window subsample n.

  5. Compute the root mean squared errors (RMSEs) using the forecast errors for each step-ahead forecast type. In other words,

    RMSE_j = \sqrt{\frac{1}{N} \sum_{n=1}^{N} e_{nj}^2} for j = 1, ..., h.

  6. Compare the RMSEs among the models. The model with the lowest set of RMSEs has the best predictive performance. A Python sketch of this backtest follows this list.
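The following is a minimal sketch of the backtest in Python (NumPy). It compares an OLS-fit AR(1) model against a naive in-window mean forecast on a simulated series; the two models, the helper names ar1_forecast and mean_forecast, and the simulated data are illustrative assumptions. Only windows whose full h-step horizon lies inside the sample are scored.

  import numpy as np

  def ar1_forecast(y, h):
      """Fit an AR(1) by OLS on y, then iterate the fitted recursion h steps ahead."""
      X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
      c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
      fcast, last = np.empty(h), y[-1]
      for j in range(h):
          last = c + phi * last
          fcast[j] = last
      return fcast

  def mean_forecast(y, h):
      """Naive benchmark: forecast the in-window sample mean at every horizon."""
      return np.full(h, y.mean())

  rng = np.random.default_rng(1)
  T, m, h = 300, 60, 4                     # sample size, window size (step 1), horizon (step 2)
  y = np.zeros(T)
  for t in range(1, T):                    # simulate an AR(1) series as placeholder data
      y[t] = 0.5 * y[t - 1] + rng.standard_normal()

  # Number of windows whose h-step horizon stays inside the sample (step 3)
  N = T - m - h + 1
  models = {"AR(1)": ar1_forecast, "Mean": mean_forecast}
  errors = {name: np.empty((N, h)) for name in models}

  for n in range(N):                       # roll the window forward one period at a time
      window = y[n:n + m]                  # the m in-window observations
      actual = y[n + m:n + m + h]          # the h observations after the window
      for name, f in models.items():       # estimate and forecast each model (step 4)
          errors[name][n] = actual - f(window, h)

  for name in models:                      # RMSE per forecast step (steps 5 and 6)
      rmse = np.sqrt((errors[name] ** 2).mean(axis=0))
      print(name, np.round(rmse, 3))

Comparing the printed RMSEs horizon by horizon, the model with the smaller values has the better predictive performance, as in step 6.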

For more details on backtesting, see [1].

References

[1] Zivot, E., and J. Wang. Modeling Financial Time Series with S-PLUS®. 2nd ed. New York: Springer Science+Business Media, 2006.
