Which time series model uses the forecast error from the previous period as part of calculating the forecast for the next period?

2022 Curriculum CFA Program Level II Quantitative Methods

Introduction

As financial analysts, we often use time-series data to make investment decisions. A time series is a set of observations on a variable’s outcomes in different time periods: the quarterly sales for a particular company during the past five years, for example, or the daily returns on a traded security. In this reading, we explore the two chief uses of time-series models: to explain the past and to predict the future of a time series. We also discuss how to estimate time-series models, and we examine how a model describing a particular time series can change over time. The following two examples illustrate the kinds of questions we might want to ask about time series.

Suppose it is the beginning of 2020 and we are managing a US-based investment portfolio that includes Swiss stocks. Because the value of this portfolio would decrease if the Swiss franc depreciates with respect to the dollar, and vice versa, holding all else constant, we are considering whether to hedge the portfolio’s exposure to changes in the value of the franc. To help us in making this decision, we decide to model the time series of the franc/dollar exchange rate. Exhibit 1 shows monthly data on the franc/dollar exchange rate. The data are monthly averages of daily exchange rates. Has the exchange rate been more stable since 1987 than it was in previous years? Has the exchange rate shown a long-term trend? How can we best use past exchange rates to predict future exchange rates?

As another example, suppose it is the beginning of 2020. We cover retail stores for a sell-side firm and want to predict retail sales for the coming year. Exhibit 2 shows monthly data on US retail sales. The data are not seasonally adjusted, hence the spikes around the holiday season at the turn of each year. Because the reported sales in the stores’ financial statements are not seasonally adjusted, we model seasonally unadjusted retail sales. How can we model the trend in retail sales? How can we adjust for the extreme seasonality reflected in the peaks and troughs occurring at regular intervals? How can we best use past retail sales to predict future retail sales?

Some fundamental questions arise in time-series analysis: How do we model trends? How do we predict the future value of a time series based on its past values? How do we model seasonality? How do we choose among time-series models? And how do we model changes in the variance of time series over time? We address each of these issues in this reading.

We first describe typical challenges in applying the linear regression model to time-series data. We present linear and log-linear trend models, which describe, respectively, the value and the natural log of the value of a time series as a linear function of time. We then present autoregressive time-series models—which explain the current value of a time series in terms of one or more lagged values of the series. Such models are among the most commonly used in investments, and the section addresses many related concepts and issues. We then turn our attention to random walks. Because such time series are not covariance stationary, they cannot be modeled using autoregressive models unless they can be transformed into stationary series. We therefore explore appropriate transformations and tests of stationarity. The subsequent sections address moving-average time-series models and discuss the problem of seasonality in time series and how to address it. We also cover autoregressive moving-average models, a more complex alternative to autoregressive models. The last two topics are modeling changing variance of the error term in a time series and the consequences of regression of one time series on another when one or both time series may not be covariance stationary.

Learning Outcomes

The member should be able to:

  1. calculate and evaluate the predicted trend value for a time series, modeled as either a linear trend or a log-linear trend, given the estimated trend coefficients;

  2. describe factors that determine whether a linear or a log-linear trend should be used with a particular time series and evaluate limitations of trend models;

  3. explain the requirement for a time series to be covariance stationary and describe the significance of a series that is not stationary;

  4. describe the structure of an autoregressive (AR) model of order p and calculate one- and two-period-ahead forecasts given the estimated coefficients;

  5. explain how autocorrelations of the residuals can be used to test whether the autoregressive model fits the time series;

  6. explain mean reversion and calculate a mean-reverting level;

  7. contrast in-sample and out-of-sample forecasts and compare the forecasting accuracy of different time-series models based on the root mean squared error criterion;

  8. explain the instability of coefficients of time-series models;

  9. describe characteristics of random walk processes and contrast them to covariance stationary processes;

  10. describe implications of unit roots for time-series analysis, explain when unit roots are likely to occur and how to test for them, and demonstrate how a time series with a unit root can be transformed so it can be analyzed with an AR model;

  11. describe the steps of the unit root test for nonstationarity and explain the relation of the test to autoregressive time-series models;

  12. explain how to test and correct for seasonality in a time-series model and calculate and interpret a forecasted value using an AR model with a seasonal lag;

  13. explain autoregressive conditional heteroskedasticity (ARCH) and describe how ARCH models can be applied to predict the variance of a time series;

  14. explain how time-series variables should be analyzed for nonstationarity and/or cointegration before use in a linear regression; and

  15. determine an appropriate time-series model to analyze a given investment problem and justify that choice.

Summary

  • The predicted trend value of a time series in period t is b̂0 + b̂1t in a linear trend model; the predicted trend value of a time series in a log-linear trend model is e^(b̂0 + b̂1t). (See the trend-model sketch after this summary.)

  • Time series that tend to grow by a constant amount from period to period should be modeled by linear trend models, whereas time series that tend to grow at a constant rate should be modeled by log-linear trend models.

  • Trend models often do not completely capture the behavior of a time series, as indicated by serial correlation of the error term. If the Durbin–Watson statistic from a trend model differs significantly from 2, indicating serial correlation, we need to build a different kind of model.

  • An autoregressive model of order p, denoted AR(p), uses p lags of a time series to predict its current value: xt = b0 + b1xt−1 + b2xt−2 + . . . + bpxt−p + εt.

  • A time series is covariance stationary if the following three conditions are satisfied: First, the expected value of the time series must be constant and finite in all periods. Second, the variance of the time series must be constant and finite in all periods. Third, the covariance of the time series with itself for a fixed number of periods in the past or future must be constant and finite in all periods. Inspection of a nonstationary time-series plot may reveal an upward or downward trend (nonconstant mean) and/or nonconstant variance. The use of linear regression to estimate an autoregressive time-series model is not valid unless the time series is covariance stationary.

  • For a specific autoregressive model to be a good fit to the data, the autocorrelations of the error term should be 0 at all lags.

  • A time series is mean reverting if it tends to fall when its level is above its long-run mean and rise when its level is below its long-run mean. If a time series is covariance stationary, then it will be mean reverting.

  • The one-period-ahead forecast of a variable xt from an AR(1) model made in period t for period t + 1 is x̂t+1 = b̂0 + b̂1xt. This forecast can be used to create the two-period-ahead forecast from the model made in period t, x̂t+2 = b̂0 + b̂1x̂t+1. Similar results hold for AR(p) models. (See the AR(1) sketch after this summary, which also illustrates the mean-reverting level and RMSE.)

  • In-sample forecasts are the in-sample predicted values from the estimated time-series model. Out-of-sample forecasts are the forecasts made from the estimated time-series model for a time period different from the one for which the model was estimated. Out-of-sample forecasts are usually more valuable in evaluating the forecasting performance of a time-series model than are in-sample forecasts. The root mean squared error (RMSE), defined as the square root of the average squared forecast error, is a criterion for comparing the forecast accuracy of different time-series models; a smaller RMSE implies greater forecast accuracy.

  • Just as in regression models, the coefficients in time-series models are often unstable across different sample periods. In selecting a sample period for estimating a time-series model, we should seek to assure ourselves that the time series was stationary in the sample period.

  • A random walk is a time series in which the value of the series in one period is the value of the series in the previous period plus an unpredictable random error. If the time series is a random walk, it is not covariance stationary. A random walk with drift is a random walk with a nonzero intercept term. All random walks have unit roots. If a time series has a unit root, then it will not be covariance stationary.

  • If a time series has a unit root, we can sometimes transform the time series into one that is covariance stationary by first-differencing the time series; we may then be able to estimate an autoregressive model for the first-differenced series (see the unit-root sketch after this summary).

  • An n-period moving average of the current and past (n − 1) values of a time series, xt, is calculated as [xt + xt−1 + . . . + xt−(n−1)]/n (see the moving-average sketch after this summary).

  • A moving-average model of order q, denoted MA(q), uses q lags of a random error term to predict its current value.

  • The order q of a moving-average model can be determined using the fact that if a time series is a moving-average time series of order q, its first q autocorrelations are nonzero while autocorrelations beyond the first q are zero.

  • The autocorrelations of most autoregressive time series start large and decline gradually, whereas the autocorrelations of an MA(q) time series suddenly drop to 0 after the first q autocorrelations. This helps in distinguishing between autoregressive and moving-average time series.

  • If the error term of a time-series model shows significant serial correlation at seasonal lags, the time series has significant seasonality. This seasonality can often be modeled by including a seasonal lag in the model, such as adding a term lagged four quarters to an AR(1) model on quarterly observations.

  • The forecast made in time t for time t + 1 using a quarterly AR(1) model with a seasonal lag would be x̂t+1 = b̂0 + b̂1xt + b̂2xt−3. (See the seasonal-lag sketch after this summary.)

  • ARMA models have several limitations: The parameters in ARMA models can be very unstable; determining the AR and MA order of the model can be difficult; and even with their additional complexity, ARMA models may not forecast well.

  • The variance of the error in a time-series model sometimes depends on the variance of previous errors, representing autoregressive conditional heteroskedasticity (ARCH). Analysts can test for first-order ARCH in a time-series model by regressing the squared residual on the squared residual from the previous period. If the coefficient on the squared residual is statistically significant, the time-series model has ARCH(1) errors.

  • If a time-series model has ARCH(1) errors, then the variance of the errors in period t + 1 can be predicted in period t using the formula σ̂²t+1 = â0 + â1ε̂²t. (See the ARCH sketch after this summary.)

  • If linear regression is used to model the relationship between two time series, a test should be performed to determine whether either time series has a unit root:

    • If neither of the time series has a unit root, then we can safely use linear regression.

    • If one of the two time series has a unit root, then we should not use linear regression.

    • If both time series have a unit root and the time series are cointegrated, we may safely use linear regression; however, if they are not cointegrated, we should not use linear regression. The (Engle–Granger) Dickey–Fuller test can be used to determine whether time series are cointegrated. (See the cointegration sketch after this summary.)
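The short Python sketches that follow illustrate several of the summary points above. All data are simulated and all parameter values are assumptions chosen for illustration, not results from the curriculum. First, the trend-model sketch: fitting a linear and a log-linear trend to a small hypothetical series and computing the next period's predicted trend value.

```python
# Trend-model sketch: linear vs. log-linear trend (hypothetical data, NumPy only).
import numpy as np

y = np.array([100.0, 102.1, 103.9, 106.2, 108.0, 110.3])  # hypothetical level series
t = np.arange(1, len(y) + 1)                               # time index 1, 2, ..., T

# Linear trend: y_t = b0 + b1*t + e_t; predicted trend value is b̂0 + b̂1*t
b1_lin, b0_lin = np.polyfit(t, y, 1)
linear_pred = b0_lin + b1_lin * (len(y) + 1)               # forecast for period T+1

# Log-linear trend: ln(y_t) = b0 + b1*t + e_t; predicted value is exp(b̂0 + b̂1*t)
b1_log, b0_log = np.polyfit(t, np.log(y), 1)
loglin_pred = np.exp(b0_log + b1_log * (len(y) + 1))

print(f"linear trend forecast:     {linear_pred:.2f}")
print(f"log-linear trend forecast: {loglin_pred:.2f}")
```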
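Next, the AR(1) sketch: estimating an AR(1) model, chaining the one- and two-period-ahead forecasts, computing the mean-reverting level b̂0/(1 − b̂1), and scoring out-of-sample forecasts with RMSE. The data are simulated, and statsmodels' AutoReg is one standard estimator choice.

```python
# AR(1) sketch: estimation, chained forecasts, mean reversion, out-of-sample RMSE.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(42)
x = np.empty(120)
x[0] = 1.0
for i in range(1, 120):                      # simulate a stationary AR(1): b0=0.5, b1=0.6
    x[i] = 0.5 + 0.6 * x[i - 1] + rng.normal(scale=0.2)

train, test = x[:100], x[100:]
b0, b1 = AutoReg(train, lags=1).fit().params # [intercept, lag-1 coefficient]

one_ahead = b0 + b1 * train[-1]              # x̂_{t+1} = b̂0 + b̂1·x_t
two_ahead = b0 + b1 * one_ahead              # x̂_{t+2} = b̂0 + b̂1·x̂_{t+1}
mean_reverting_level = b0 / (1 - b1)         # the series tends to move toward this level

# One-period-ahead forecasts across the hold-out sample, scored by RMSE
prev = np.concatenate(([train[-1]], test[:-1]))
rmse = np.sqrt(np.mean((test - (b0 + b1 * prev)) ** 2))
print(one_ahead, two_ahead, mean_reverting_level, rmse)
```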
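The unit-root sketch: an augmented Dickey–Fuller test on a simulated random walk in levels, then on its first difference, using statsmodels' adfuller.

```python
# Unit-root sketch: test in levels, first-difference, test again.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=200))     # simulated random walk (has a unit root)

stat, pvalue = adfuller(walk)[:2]          # augmented Dickey–Fuller test in levels
print(f"levels:      stat={stat:.2f}, p={pvalue:.3f}")   # typically cannot reject a unit root

diffed = np.diff(walk)                     # first difference: y_t = x_t − x_{t−1}
stat, pvalue = adfuller(diffed)[:2]
print(f"differences: stat={stat:.2f}, p={pvalue:.3f}")   # typically stationary after differencing
```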
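The moving-average sketch: computing an n-period moving average and inspecting sample autocorrelations, which for a simulated MA(1) series should be clearly nonzero at lag 1 (about 0.47 in theory here) and near zero beyond it.

```python
# Moving-average sketch: n-period moving average and ACF-based order identification.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
eps = rng.normal(size=300)
x = eps[1:] + 0.7 * eps[:-1]               # simulated MA(1): x_t = ε_t + 0.7·ε_{t−1}

n = 4                                      # [x_t + x_{t−1} + ... + x_{t−(n−1)}]/n
moving_avg = np.convolve(x, np.ones(n) / n, mode="valid")

autocorr = acf(x, nlags=5)                 # autocorr[0] is lag 0 and always 1
print(np.round(autocorr[1:], 2))           # lag 1 clearly nonzero; lags 2+ near zero
```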
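The seasonal-lag sketch: a worked forecast from a quarterly AR(1) model with a seasonal lag, using hypothetical estimated coefficients and observations.

```python
# Seasonal-lag sketch: x̂_{t+1} = b̂0 + b̂1·x_t + b̂2·x_{t−3} (hypothetical numbers).
b0, b1, b2 = 0.05, 0.60, 0.25             # assumed estimated coefficients
x_t, x_t_minus_3 = 2.0, 1.6               # x_t, and the value four quarters before t+1

forecast = b0 + b1 * x_t + b2 * x_t_minus_3
print(forecast)                           # 0.05 + 1.20 + 0.40 = 1.65
```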
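The ARCH sketch: testing for ARCH(1) by regressing the squared residuals on their own first lag, then forecasting the next period's error variance. The residuals here are simulated stand-ins; in practice they would come from a previously fitted time-series model.

```python
# ARCH sketch: ε̂²_t regressed on ε̂²_{t−1}; variance forecast σ̂²_{t+1} = â0 + â1·ε̂²_t.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
resid = rng.normal(size=200)              # stand-in residuals for illustration

sq = resid ** 2
fit = sm.OLS(sq[1:], sm.add_constant(sq[:-1])).fit()
a0, a1 = fit.params
print(fit.tvalues[1])                     # a significant t-stat on â1 indicates ARCH(1)

var_forecast = a0 + a1 * sq[-1]           # predicted error variance for period t + 1
print(var_forecast)
```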
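Finally, the cointegration sketch: the unit-root/cointegration checklist from the last summary point, using statsmodels' adfuller and its Engle–Granger coint test on two simulated series that are cointegrated by construction.

```python
# Cointegration sketch: test each series for a unit root, then test for cointegration.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=300))       # random walk (unit root)
y = 2.0 + 0.5 * x + rng.normal(size=300)  # cointegrated with x by construction

x_has_root = adfuller(x)[1] > 0.05        # step 1: unit-root test on each series
y_has_root = adfuller(y)[1] > 0.05

if not (x_has_root or y_has_root):
    print("neither series has a unit root: linear regression is safe")
elif x_has_root and y_has_root:
    pvalue = coint(y, x)[1]               # step 2: Engle–Granger cointegration test
    print("cointegrated: regression OK" if pvalue < 0.05 else "not cointegrated: avoid regression")
else:
    print("only one series has a unit root: do not use linear regression")
```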

Which model uses past errors for forecasting?

Rather than using past values of the forecast variable in a regression, a moving-average model uses past forecast errors in a regression-like model.
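As a minimal sketch of this idea, assuming simulated data and statsmodels' ARIMA class with order (0, 0, 1) as the MA(1) estimator:

```python
# MA(1) sketch: the forecast is built from the mean plus the lagged forecast error.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
eps = rng.normal(size=300)
x = 1.0 + eps[1:] + 0.5 * eps[:-1]        # simulated MA(1): x_t = μ + ε_t + 0.5·ε_{t−1}

res = ARIMA(x, order=(0, 0, 1)).fit()     # (p, d, q) = (0, 0, 1): one lagged-error term
print(res.params)                          # ≈ [mean, MA coefficient, error variance]
print(res.forecast(steps=1))               # one-period-ahead forecast
```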

What is forecast error in time series?

In statistics, a forecast error is the difference between the actual (realized) value and the predicted or forecast value of a time series or any other phenomenon of interest.

Which model is used for time series forecasting?

AutoRegressive Integrated Moving Average (ARIMA) models are among the most widely used time-series forecasting techniques. In an autoregressive model, the forecasts correspond to a linear combination of past values of the variable; in a moving-average model, they correspond to a linear combination of past forecast errors.
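A minimal fitting sketch, assuming simulated data and statsmodels' ARIMA implementation:

```python
# ARIMA sketch: one AR lag, first-differencing, and one MA (lagged-error) term.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
x = np.cumsum(0.1 + rng.normal(size=300))  # simulated nonstationary (unit-root) series

res = ARIMA(x, order=(1, 1, 1)).fit()      # order = (p, d, q) = (1, 1, 1)
print(res.params)                           # estimated AR, MA, and variance parameters
print(res.forecast(steps=4))                # four-period-ahead forecasts
```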

What is a common measure of forecasting error?

Alongside RMSE, mean absolute deviation (MAD) is a commonly used forecasting metric. This metric shows how large an error, on average, your forecasts contain.
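A small sketch computing MAD alongside RMSE on hypothetical actual and forecast values:

```python
# MAD sketch: average absolute forecast error, with RMSE for comparison.
import numpy as np

actual = np.array([10.0, 12.0, 11.0, 13.0])     # hypothetical outcomes
forecast = np.array([9.5, 12.5, 10.0, 13.5])    # hypothetical forecasts

errors = actual - forecast                      # forecast error = actual − forecast
mad = np.mean(np.abs(errors))                   # 0.625
rmse = np.sqrt(np.mean(errors ** 2))            # ≈ 0.661
print(mad, rmse)
```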