Accepted for publication at the Journal of Money, Credit, and Banking
We investigate the evidence for structural breaks in autoregressive models of U.S. macroeconomic time series. There is substantial model uncertainty associated with such models, including uncertainty related to lag selection, the number of structural breaks, and the specific parameters that break. We develop a feasible approach to Bayesian Model Averaging, where the model space encompasses these sources of uncertainty. We find pervasive evidence for breaks in variance parameters, and for price inflation series we find strong evidence of changes in persistence. We also find evidence for reductions in trend growth rates of production series. For most series there is substantial model uncertainty, calling into question the common practice of basing inference on one selected structural break model.
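Bayesian Model Averaging, as used throughout the papers below, weights each candidate model by its posterior model probability. A minimal illustrative sketch (hypothetical BIC values, not taken from any of the papers; the standard `exp(-BIC/2)` approximation to the marginal likelihood under equal prior model probabilities):

```python
import numpy as np

def bma_weights(bic_values):
    """Posterior model probabilities from BIC values, assuming equal
    prior model probabilities, via the exp(-BIC/2) approximation to
    the marginal likelihood."""
    bic = np.asarray(bic_values, dtype=float)
    # Subtract the minimum BIC before exponentiating for numerical stability;
    # the weights are invariant to this shift.
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC values for three candidate models.
weights = bma_weights([100.0, 102.0, 110.0])
print(weights)  # the model with the lowest BIC receives the largest weight
```

Averaging any quantity of interest (forecasts, break probabilities) with these weights, rather than conditioning on a single selected model, is what propagates model uncertainty into the final inference.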
This paper investigates the informational content of regular revisions to real GDP growth and its components. We perform a real-time forecasting exercise for the advance estimate of real GDP growth using dynamic regression models that include revisions to GDP and its components. Echoing other work in the literature, we find little evidence that including aggregate GDP growth revisions improves forecast accuracy relative to an AR(1) baseline model; however, models that include revisions to components of GDP do improve forecast accuracy. The first revision to consumption is particularly relevant in that every model that includes it outperforms the baseline model. Measured by root mean squared forecasting error (RMSFE), improvements are quite sizable, with many models improving forecasting performance by 5% or more, and with top-performing models forecasting 0.24 percentage points closer to the advance estimate of growth. We use Bayesian model averaging to underscore that our results are driven by the informational content of revisions. The posterior probability of models with the first revision to consumption is significantly higher than that of our baseline model, despite strong priors that the latter should be the preferred forecasting model.
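The RMSFE comparison described above can be sketched as follows (the forecast errors here are made up for illustration, not the paper's data):

```python
import numpy as np

def rmsfe(errors):
    """Root mean squared forecast error from a vector of forecast errors."""
    e = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(e ** 2))

# Hypothetical forecast errors (forecast minus advance estimate of growth).
baseline_errors = [0.9, -1.1, 0.8, -0.7]   # e.g. AR(1) baseline
candidate_errors = [0.8, -1.0, 0.7, -0.6]  # e.g. model with revision terms

# Percentage improvement of the candidate over the baseline.
improvement = 1.0 - rmsfe(candidate_errors) / rmsfe(baseline_errors)
print(f"RMSFE improvement: {improvement:.1%}")
```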
This paper investigates the nature of the Federal Open Market Committee's (FOMC's) interest rate rule, with a focus on which variables have been relevant to the FOMC over the past 40 years. I consider a large number of potential variables, including alternate measures of inflation, aggregate real activity, and sectoral variables. Based on inclusion probabilities derived from Bayesian Model Averaging (BMA) over a sample from 1970-2007, I find that the FOMC responds to changes in unemployment rather than to changes in GDP growth. Additionally, I find that the FOMC reacts not only to inflation and aggregate output, but also to measures of sectoral activity, such as changes in commodity prices. Finally, I find that using BMA improves out-of-sample forecasting performance over baseline Taylor-type interest rate rules.
When studying the Federal Open Market Committee’s (FOMC’s) interest rate rule, some authors, such as Gonzalez-Astudillo (2018), find evidence for changes in inflation and output gap responses. Others, such as Sims and Zha (2006), only find evidence for a change in the variance of the interest rate rule. In this paper, I develop a new two-regime Markov-switching model that probabilistically performs variable selection and identification of parameter change for each variable in the model. I find substantial evidence that there have been changes in the response to unemployment and in the volatility of the rule. When the FOMC responds strongly to unemployment, I find a bi-modal density for the inflation response coefficient. Despite the bi-modal density, there is a low probability that there have been changes in the FOMC’s response to inflation.
We compare the effectiveness of Classical, Bayesian, and Machine Learning (ML) methods for predicting the presence of a unit root in univariate time-series models. Framing the issue as a classification problem, we demonstrate how ML may be used to uncover structural features of a macroeconomic time series in small samples. We use a Monte Carlo approach to evaluate the predictions from these approaches and find that ML outperforms both the Classical and Bayesian tests in terms of prediction accuracy, and appears to be the most flexible for classifying unit roots when class imbalance is present. In the data, we find broad consensus among the approaches for series predicted to be nonstationary, with some disagreement for series predicted to be stationary.
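The Classical benchmark in this kind of exercise is a unit-root test. A minimal sketch of the simplest Dickey-Fuller regression (no constant, Δy regressed on lagged y, using the standard tabulated 5% critical value of roughly -1.95 for that case; the simulated series and function names are illustrative, not the paper's design):

```python
import numpy as np

def df_tstat(y):
    """t-statistic from the OLS regression of Δy_t on y_{t-1} (no constant),
    the simplest Dickey-Fuller test regression."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)        # OLS slope coefficient
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)     # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))         # standard error of the slope
    return rho / se

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=500))  # unit root present
white_noise = rng.normal(size=500)             # stationary, no unit root
# Fail to reject a unit root when the t-stat exceeds roughly -1.95
# (the approximate 5% Dickey-Fuller critical value, no-constant case).
print(f"random walk t-stat: {df_tstat(random_walk):.2f}")
print(f"white noise t-stat: {df_tstat(white_noise):.2f}")
```

Treating "reject" versus "fail to reject" as class labels over many such simulated series is what turns the unit-root question into the classification problem the abstract describes.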
Works in Progress
Break or No Break? Identifying Structural Breaks using Classical, Bayesian, and Machine Learning Approaches (with Yamin Ahmad and Ming Chien Lo)
What Predicts Informality? A Bayesian Model Averaging Approach (with Tyler Schipper)