Estimation and Model Specification for Econometric Forecasting

2015-3
Manuel Sebastian Lukas
PhD Thesis

DEPARTMENT OF ECONOMICS AND BUSINESS
AARHUS UNIVERSITY, DENMARK

ESTIMATION AND MODEL SPECIFICATION FOR ECONOMETRIC FORECASTING

By Manuel Sebastian Lukas

A PhD thesis submitted to
School of Business and Social Sciences, Aarhus University,
in partial fulfilment of the requirements of
the PhD degree in
Economics and Business

August 2014

CREATES
Center for Research in Econometric Analysis of Time Series
PREFACE
This dissertation was written in the period from September 2010 to August 2014
while I was enrolled as a PhD student at the Department of Economics and Business
at Aarhus University. During my PhD studies I was affiliated with the Center for
Research in Econometric Analysis of Time Series (CREATES), which is funded by the
Danish National Research Foundation. I am grateful to the Department of Economics
and Business and to CREATES for providing an inspiring, supportive, and friendly
research environment, and for the financial support for attending conferences and
courses.
Parts of this dissertation were written during my research stay at the Rady School
of Management at the University of California San Diego (UCSD) from August 2012
to January 2013. I thank Allan Timmermann for making this academically, professionally, and personally rewarding experience possible, and I thank the Rady School
of Management for its hospitality. I am grateful to the Aarhus University Research
Foundation (AUFF) and the Department of Business and Social Science at Aarhus
University for their financial support in connection with my stay at UCSD. I thank Jack
Zhang for all the help and the hospitality during my stay in the United States, and for
introducing me to the UCSD graduate student life.
I am thankful to all people who have supported me in my research with their
advice, comments, and suggestions. My main supervisor Bent Jesper Christensen and
my co-supervisor Eric Hillebrand have supported me with guidance, expertise, and
encouragement for both my independent research and our joint research projects.
I wish to thank all fellow PhD students at Aarhus University for the excellent team
spirit, both in academic and in (very) non-academic matters, which has made the past
four years a great experience. I especially wish to thank Rasmus, Heida, Kasper O.,
Andreas, and Anders L., who have accompanied me in the challenging and exciting
transition from Master's student to PhD student. During my PhD studies I have enjoyed
many welcome breaks from research: coffee breaks, social events, and floorball
matches with many of my colleagues, in particular Niels S., Juan Carlos, Anne F.,
Jonas E., Jonas M., Martin S., Niels H., Mark, Simon, Rune, Anders K., Laurent, Stine,
Morten, and Christina. A big thanks goes to Johannes for sharing the LaTeX template
that is used for this dissertation.
I am very grateful to CREATES, especially the Center Director Niels Haldrup and
the Center Administrator Solveig Sørensen, for creating a great research environment
and for the many interesting PhD courses that were organized by CREATES during
my studies. I also wish to thank Niels Haldrup for allowing me to participate three
times in the Econometric Game in Amsterdam for Team Aarhus University.
I am indebted to my family and friends in Switzerland for their patience, their
visits to Denmark, and their amazing hospitality on my visits back home. Last but
not least, I am grateful to my girlfriend Tanja for supporting me during the busy and
challenging time as a PhD student.
Manuel Sebastian Lukas
Aarhus, August 2014
UPDATED PREFACE
The predefence took place on September 30, 2014. The assessment committee consisted of Asger Lunde, Aarhus University; Allan Timmermann, University of California,
San Diego; and Christian Møller Dahl, University of Southern Denmark. I wish to
thank the members of the committee for their detailed comments. After the predefence, the dissertation was revised to incorporate the changes required by the
committee. Additionally, the committee suggested improvements, some of which
are incorporated in this revised version of the thesis.
Manuel Sebastian Lukas
Copenhagen, January 2015
CONTENTS

Summary  vii

1  Bagging Weak Predictors  1
   1.1  Introduction  2
   1.2  Bagging Predictors  4
   1.3  Monte Carlo Simulations  12
   1.4  Application to CPI Inflation Forecasting  19
   1.5  Conclusion  22
   1.6  References  27
   1.7  Appendix  29

2  Return Predictability, Model Uncertainty, and Robust Investment  33
   2.1  Introduction  34
   2.2  Investment and Confidence Sets  36
   2.3  Models and Data  40
   2.4  Empirical Results  43
   2.5  Conclusion  56
   2.6  References  57

3  Frequency Dependence in the Risk-Return Relation  61
   3.1  Introduction  62
   3.2  The Empirical Risk-Return Relation  65
   3.3  Frequency Dependence in the Risk-Return Relation  72
   3.4  Frequency-Dependent Real-Time Forecasts  83
   3.5  Conclusion  88
   3.6  References  90
.
SUMMARY
This dissertation comprises three self-contained chapters with the theme of econometric forecasting as their common denominator. We analyze methods for parameter
estimation and model specification of econometric models and apply these methods
to macroeconomic and financial time series. Turning to econometric forecasting we
shift the focus of econometric modeling from fitting all available data, testing for
statistical significance, and testing for correct specification towards fitting future data,
i.e., achieving good out-of-sample performance. Applying the classical econometric
toolbox for parameter estimation and model specification is not always appropriate
for forecasting because a statistically significant relation and good in-sample fit are
insufficient to ensure satisfactory forecasting performance. It is therefore important
to take into account the very aim of out-of-sample forecasting at the time when the
model is estimated and specified. The three chapters in this dissertation each deal
with some aspects of estimation and model specification for econometric forecasting with empirical applications to inflation rates, equity premia, and the risk-return
relation.
The first chapter, "Bagging Weak Predictors", is joint work with Eric Hillebrand.
We propose a new bootstrap aggregation (bagging) predictor for situations where the
predictive relation is weak, i.e., for situations in which predictors based on classical
statistical methods fail to provide good forecasts because the estimation variance is
larger than the bias effect from ignoring the relation. In the literature on econometric
forecasting, it is often found that predictors suggested by economic theory do not
lead to satisfactory forecasting results. Successful forecasting with such predictors
requires prediction methods that reduce estimation variance. The bagging method of
Breiman (1996) is based on bootstrap re-sampling and it can improve the properties
of pre-test and other hard-threshold estimators by reducing the estimation variance.
Standard bagging estimators are based on standard t-tests for statistical significance.
A statistically significant relation is, however, not sufficient for successful out-of-sample
forecasting. We therefore base our new bagging predictor on the in-sample
test for predictive ability proposed by Clark and McCracken (2012). The null hypothesis
of this test is that the inclusion or the exclusion of a predictor in a forecasting
regression leads to equal forecasting performance. Thus, when the test is rejected,
we know whether or not to include the predictor. By using the test of Clark and
McCracken (2012), our predictor shrinks the regression coefficient estimate not to zero,
but towards the null of the test which equates squared bias with estimation variance.
We derive the asymptotic distribution in the asymptotic framework of Bühlmann and
Yu (2002) and show that the predictor has a substantially lower mean-squared
error (MSE) compared to standard t-test bagging if a weak predictive relationship
exists. Because the bootstrap re-sampling for bagging can be computationally heavy,
we derive an asymptotic shrinkage representation for the predictor that simplifies
computation of the estimator. Monte Carlo simulations show that our predictor works
well in small samples. In the empirical application, we consider forecasting inflation
using employment and industrial production in the spirit of the so-called Phillips
Curve. This application fits our framework because inflation is notoriously hard to
forecast from other macroeconomic variables.
In the second chapter, "Return Predictability, Model Uncertainty, and Robust
Investment", the model uncertainty in stock return prediction models is analyzed.
Empirical evidence suggests that stock returns are not completely unpredictable, see,
e.g., Lettau and Ludvigson (2010) for a comprehensive survey. Under stock return
predictability, investment decisions are based on conditional expectations of stock
returns. The choice of appropriate predictor variables is, however, subject to great
uncertainty. In this chapter, we use the model confidence set approach of Hansen,
Lunde, and Nason (2011) to quantify the uncertainty about expected utility from
stock market investment, accounting for potential return predictability, for monthly
data over the sample period 1966:01–2002:12 on the US stock market. We consider the
popular data set of Welch and Goyal (2008), which contains standard predictor variables used in this literature. For the econometric analysis we take the perspective of a
small investor with constant relative risk aversion (CRRA) utility and short-selling constraints. The model confidence set is then applied recursively and, for every month in
the out-of-sample period, it identifies the set of models that contains the best model
with a given confidence level. The empirical results show that the model confidence
sets imply economically large and time-varying uncertainty about expected utility
from investment. To analyze the economic importance of this model uncertainty we
propose investment strategies that reduce the impact of model uncertainty. Reducing
the model uncertainty with these strategies requires lower investment in stocks, but
return predictability still leads to economic gains for the small investor. Thus, we
conclude that although model uncertainty concerns reduce the share of wealth that
investors wish to hold in stocks, they do not prevent investors from benefiting from
return predictability using econometric models.
The third chapter, "Frequency Dependence in the Risk-Return Relation", is coauthored with Bent Jesper Christensen and considers a specification of the risk-return
relation that allows for non-linearities in the form of frequency dependence. The
risk-return relation is typically specified as a linear relation between stock returns and
some measure of the conditional variance, motivated by the intertemporal capital
asset pricing model (ICAPM) of Merton (1973). Since the empirical analysis in Merton
(1980), empirical estimation of the risk-return relation has attracted much attention
in the literature. In this chapter we use the band spectral regression of Engle (1974)
with the one-sided filtering approach of Ashley and Verbrugge (2008) to allow for
frequency dependence in the risk-return relation, which is a feature that cannot
be accommodated by a linear model. The combination of one-sided filtering and
conditional variances constructed from lagged observations makes our estimation
approach robust to contemporaneous leverage and feedback effects. For daily returns
and realized variances from high-frequency intra-daily data on the S&P 500 from
1995 to 2012 we strongly reject the null hypothesis of no frequency dependence.
This finding is robust to changes in the conditional variance proxy. In particular, the
rejection of the null hypothesis is strongest when we allow for lagged leverage effects
in the conditional variance. Although the risk-return relation is positive on average
over all frequencies, we find a large and statistically significant negative coefficient
for periods of around one week. Subsample analysis reveals that the negative effect
at these frequencies is not statistically significant before the financial crisis, but
becomes very strong after July 2007. Accounting for the frequency dependence in the
risk-return relation can improve the out-of-sample forecasting of stock returns after
2007, but only if the forecasting approach reduces the increased estimation variance
from the additional parameters of the band spectral approach.
References
Ashley, R., Verbrugge, R. J., 2008. Frequency dependence in regression model coefficients: An alternative approach for modeling nonlinear dynamic relationships in
time series. Econometric Reviews 28 (1-3), 4–20.
Breiman, L., 1996. Bagging predictors. Machine Learning 24, 123–140.
Bühlmann, P., Yu, B., 2002. Analyzing bagging. The Annals of Statistics 30 (4), 927–961.
Chernov, M., Gallant, R., Ghysels, E., Tauchen, G., 2003. Alternative models for stock
price dynamics. Journal of Econometrics 116 (1), 225–257.
Clark, T. E., McCracken, M. W., 2012. In-sample tests of predictive ability: A new
approach. Journal of Econometrics 170 (1), 1–14.
Corsi, F., 2009. A simple approximate long-memory model of realized volatility. Journal of Financial Econometrics 7 (2), 174–196.
Corsi, F., Reno, R., 2012. Discrete-time volatility forecasting with persistent leverage
effect and the link with continuous-time volatility modeling. Journal of Business
and Economic Statistics, 46–78.
Engle, R. F., 1974. Band spectrum regression. International Economic Review 15 (1),
1–11.
Gouriéroux, C., Monfort, A., Renault, E., 1993. Indirect inference. Journal of Applied
Econometrics 8 (S1), S85–S118.
Hansen, P. R., Lunde, A., Nason, J. M., 2011. The model confidence set. Econometrica
79 (2), 453–497.
Lettau, M., Ludvigson, S., 2010. Measuring and modeling variation in the risk-return
tradeoff. In: Ait-Sahalia, Y., Hansen, L. P. (Eds.), Handbook of Financial Econometrics. Vol. 1. Elsevier Science B.V., North Holland, Amsterdam, pp. 617–690.
Merton, R. C., 1973. An intertemporal capital asset pricing model. Econometrica,
867–887.
Merton, R. C., 1980. On estimating the expected return on the market: An exploratory
investigation. Journal of Financial Economics 8 (4), 323–361.
Welch, I., Goyal, A., 2008. A comprehensive look at the empirical performance of
equity premium prediction. Review of Financial Studies 21 (4), 1455–1508.
CHAPTER 1

BAGGING WEAK PREDICTORS

Manuel Lukas and Eric Hillebrand
Aarhus University and CREATES
Abstract
Relations between economic variables can often not be exploited for forecasting,
suggesting that predictors are weak in the sense that estimation uncertainty is larger
than bias from ignoring the relation. In this chapter, we propose a novel bagging
predictor designed for such weak predictor variables. The predictor is based on an
in-sample test for predictive ability. Our predictor shrinks the OLS estimate not to
zero, but towards the null of the test which equates squared bias with estimation
variance. We derive the asymptotic distribution and show that the predictor can
substantially lower the MSE compared to standard t-test bagging. An asymptotic
shrinkage representation for the predictor is obtained that simplifies computation of
the estimator. Monte Carlo simulations show that the predictor works well in small
samples. In an empirical application we apply the new predictor to inflation forecasts.
Keywords: Inflation forecasting, bootstrap aggregation, estimation uncertainty, weak
predictors.
1.1 Introduction
A frequent finding in pseudo out-of-sample forecasting exercises is that including
predictor variables does not improve forecasting performance, even though the predictor variables are significant in in-sample regressions. For example, there is a large
literature on forecast failure with economic predictor variables for forecasting inflation (see, e.g., Atkeson and Ohanian, 2001; Stock and Watson, 2009) and forecasting
exchange rates (see, e.g., Meese and Rogoff, 1983; Cheung, Chinn, and Pascual, 2005).
Including predictor variables suggested by economic theory, or selected by in-sample
regressions, typically does not help to consistently out-perform simple time series
models across different sample splits and model specifications. Forecasting failure
can be attributed to estimation variance and parameter instability. In this chapter, we
focus exclusively on the former. These two causes of forecast failure are, however, often interrelated in practice. If we are unwilling to specify the nature of instability, it is
common practice to use a short rolling window for estimation to deal with parameter
instability. While a short estimation window can better adapt to changing parameters,
it increases estimation variance compared to using all data. In this sense, estimation
variance can result from the attempt to accommodate parameter instability, such
that our results are relevant for both kinds of forecast failure.
This chapter is concerned with reducing estimation variance by bagging pre-test
estimators when predictor variables have weak forecasting power. Modeling weak
predictors in the framework of Clark and McCracken (2012, henceforth CM) leads to
a non-vanishing bias-variance trade-off. CM propose an in-sample test for predictive ability, i.e., a
test of whether bias reduction or estimation variance will prevail when including a
predictor variable. Based on this test, we propose a novel bagging estimator that is
designed to work well for predictors with non-zero coefficient of known sign. Under
the null of the CM-test, the parameter is not equal to zero, but equal to a value
for which squared bias from omitting the predictor variable is equal to estimation
variance. In our bagging scheme, we set the parameter equal to this value instead of
zero whenever we fail to reject the null. For this, knowledge of the coefficient’s sign
is necessary. We derive the asymptotic distribution of the estimator and show that
for a wide range of parameter values, asymptotic mean-squared error is superior to
bagging a standard t-test. The improvements can be substantial and are not sensitive
to the choice of the critical value, which is a remaining tuning parameter. We obtain
forecast improvements if the data-generating parameter is small but non-zero. If the
data-generating parameter is indeed zero, however, our estimator has a large bias
and is therefore imprecise.
Bootstrap aggregation, bagging, was proposed by Breiman (1996) as a method
to improve forecast accuracy by smoothing instabilities from modeling strategies
that involve hard-thresholding and pre-testing. With bagging, the modeling strategy
is applied repeatedly to bootstrap samples of the data, and the final prediction is
obtained by averaging over the predictions from the bootstrap samples. Bühlmann
and Yu (2002) show theoretically how bagging reduces variance of predictions and
can thus lead to improved accuracy. Stock and Watson (2012) derive a shrinkage
representation for bagging a hard-threshold variable selection based on the t-statistic.
This representation shows that standard t-test bagging is asymptotically equivalent
to shrinking the unconstrained coefficient estimate to zero. The degree of shrinkage
depends on the value of the t-statistic.
Bagging is becoming a standard forecasting technique for economic and financial
variables. Inoue and Kilian (2008) consider different bagging strategies for forecasting
US inflation with many predictors, including bagging a factor model where factors are
included if they are significant in a preliminary regression. They find that forecasting
performance is similar to other forecasting methods such as shrinkage methods
and forecast combination. Rapach and Strauss (2010) use bagging to forecast US
unemployment changes with 30 predictors. They apply bagging to a pre-test strategy
that uses individual t-statistics to select variables, and find that this delivers very
competitive forecasts compared to forecast combinations of univariate benchmarks.
Hillebrand and Medeiros (2010) apply bagging to lag selection for heterogeneous
autoregressive models of realized volatility, and they find that this method leads to
improvements in forecast accuracy.
Our method requires a sign restriction in order to impose the null. We focus on
a single predictor variable, because in this case, intuition and economic theory can
be used to derive sign restrictions. For models with multiple correlated predictors,
sign restrictions are harder to justify. In the literature, bagging has been applied for
reducing variance from imposing sign restrictions on parameters. A hard-threshold estimator with sign restriction sets the estimate to zero if the sign restriction is violated.
Gordon and Hall (2009) consider bagging the hard-threshold estimator and show
analytically that bagging can reduce variance. Sign restrictions arise naturally in predicting the equity premium, see Campbell and Thompson (2008) for a hard-threshold,
and Pettenuzzo, Timmermann, and Valkanov (2013) for a Bayesian approach. Hillebrand, Lee, and Medeiros (2013) analyze the bias-variance trade-off from bagging
positive constraints on coefficients and the equity premium forecast itself, and they
find empirically that bagging helps improve the forecasting performance.
The remainder of the chapter is organized as follows. In Section 1.2, the bagging
estimator for weak predictors is presented and asymptotic properties are analyzed.
Monte Carlo results for small samples are presented in Section 1.3. In Section 1.4, the
estimator is applied to CPI inflation forecasting using the unemployment rate and
industrial production as predictors. Concluding remarks are given in Section 1.5.
1.2 Bagging Predictors
Let y be the target variable we wish to forecast h steps ahead, for example consumer
price inflation. The variable x is a potential predictor variable that can be used to
forecast the target variable y. Let T be the sample size. At time t, we forecast y_{t+h,T}
using the scalar variable x_t as predictor and a model estimated on the available data.
In our framework we consider the simple regression relation

    y_{t+h,T} = \mu + \beta_T x_t + u_{t+h},    (1.1)

which is used to obtain h-step-ahead forecasts of the variable y, and where β_T is a
coefficient that depends on the sample size to reflect a weak predictive relation.

The focus of our analysis is estimation of the coefficient β_T of the predictor
variable x. We start with the following assumptions regarding the unrestricted
least-squares estimate of the coefficient, β̂_T, and the estimator of its asymptotic variance,
σ̂²_{∞,T}. To reduce notational clutter we suppress the dependence of the asymptotic
variance on the fixed forecast horizon h.
Assumption 1.1

    T^{1/2}(\hat{\beta}_T - \beta_T) \xrightarrow{d} N(0, \sigma_\infty^2),    (1.2)

and let σ̂²_{∞,T} > 0 be a consistent estimator of σ²_∞ < ∞, i.e., \hat{\sigma}^2_{\infty,T} - \sigma^2_\infty \xrightarrow{p} 0.
Given the asymptotic variance from Assumption 1.1, we analyze weak predictors
by considering the following parameterization,

    \beta_T = T^{-1/2} b \sigma_\infty,    (1.3)

where we assume that the sign of b is known. Without loss of generality, we assume
that b is strictly positive, i.e., sign(b) = 1.
For a given sample of length T and a given forecast horizon h, we start by
considering two forecasting models: the unrestricted model (UR) that includes the
predictor variable x_t, and the restricted model (RE) that contains only an intercept.
Let μ̂^{RE}_T and (μ̂^{UR}_T, β̂_T)' be the OLS parameter estimates from the restricted model and
the unrestricted model, respectively. The forecasts for y_{t+h,T} from the unrestricted
and restricted models are denoted

    \hat{y}^{UR}_{t+h,T} = \hat{\mu}^{UR}_T + \hat{\beta}_T x_t,    (1.4)

and

    \hat{y}^{RE}_{t+h,T} = \hat{\mu}^{RE}_T,    (1.5)

respectively.
In practice, we are often not certain whether to include the weak predictor x t in
the forecast model or not, i.e., whether RE or UR yields more accurate forecasts. In
such a situation, it is common to use a pre-test estimator. Typically, the t-statistic
τ̂_T = T^{1/2} β̂_T σ̂^{-1}_{∞,T} is used to decide whether or not to include the predictor variable.
Let I(·) denote the indicator function that takes value 1 if the argument is true and 0
otherwise. The one-sided pre-test estimator is

    \hat{\beta}^{PT}_T = \hat{\beta}_T \, I(\hat{\tau}_T > c),    (1.6)

for some critical value c, for example 1.64 for a one-sided test at the 5% level. We
focus on one-sided testing because we assumed that the sign of β is known.
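As a concrete illustration, the one-sided pre-test rule in (1.6) can be sketched in a few lines of Python. This is a minimal sketch, not the chapter's implementation: it uses a homoskedastic OLS variance for the t-statistic, whereas h-step-ahead forecasts with overlapping forecast errors would call for a HAC variance estimator.

```python
import numpy as np

def pretest_slope(y, x, c=1.64):
    """One-sided pre-test estimator: keep the OLS slope only if its
    t-statistic exceeds the critical value c (hard threshold)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    X = np.column_stack([np.ones(T), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (T - 2)              # homoskedastic error variance
    var_beta = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
    t_stat = coef[1] / np.sqrt(var_beta)
    beta_pt = coef[1] if t_stat > c else 0.0
    return coef[1], beta_pt, t_stat
```

The hard threshold is visible in the last step: the slope estimate is either kept unchanged or set exactly to zero, which is the discontinuity that bagging is designed to smooth.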
The hard-threshold indicator function involved in the pre-test estimator introduces
estimation uncertainty, and it is not well designed to improve forecasting performance.
Bootstrap aggregation (bagging) can be used to smooth the hard threshold
and thereby improve forecasting performance (see Bühlmann and Yu, 2002; Breiman,
1996). The bagging version of the pre-test estimator is defined as

    \hat{\beta}^{BG}_T = \frac{1}{B} \sum_{b=1}^{B} \hat{\beta}^*_b \, I(\hat{\tau}^*_b > c),    (1.7)

where β̂*_b and τ̂*_b are calculated from bootstrap samples, and B is the number of
bootstrap replications.
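A bagged version of the rule, in the spirit of Equation (1.7), averages the pre-test estimate over bootstrap resamples. The sketch below uses a simple pairs bootstrap for readability; in the chapter's time-series setting a block bootstrap would be the natural choice, so treat the resampling scheme as an illustrative assumption.

```python
import numpy as np

def bagged_pretest_slope(y, x, c=1.64, B=200, seed=0):
    """Bagging the one-sided pre-test rule: re-apply the hard threshold on
    B bootstrap resamples and average the resulting slope estimates."""
    rng = np.random.default_rng(seed)
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    draws = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, T, size=T)          # pairs bootstrap resample
        X = np.column_stack([np.ones(T), x[idx]])
        coef, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        resid = y[idx] - X @ coef
        var_beta = resid @ resid / (T - 2) * np.linalg.inv(X.T @ X)[1, 1]
        t_stat = coef[1] / np.sqrt(var_beta)
        draws[b] = coef[1] if t_stat > c else 0.0
    return draws.mean()
```

Because the indicator fires in some resamples and not in others, the average is a smooth shrinkage of the OLS slope rather than an all-or-nothing choice.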
The bagging estimator and the underlying t-statistic pre-test estimator are based
on a test for β = 0. We use the estimated value of the coefficient, β̂_T, if this null
hypothesis can be rejected at some pre-specified significance level, e.g., 5%. However,
this test does not directly address the actual question of the model selection decision,
i.e., whether or not the coefficient can be estimated accurately enough to be useful
for forecasting for the given sample size. Rather, it is a test for whether the coefficient
is zero or not.
Clark and McCracken (2012) (CM henceforth) propose an asymptotic in-sample
test for predictive ability for weak predictors, to test whether estimation uncertainty
outweighs the predictive power of a predictor in terms of mean-squared error. The
null hypothesis equates asymptotic estimation variance and squared bias. In terms of
squared bias and variance of the estimator of the coefficient β_T, this null hypothesis
becomes

    H_{0,CM}: \lim_{T\to\infty} T\,E[(\beta_T)^2] = \lim_{T\to\infty} T\,E[(\hat{\beta}_T - \beta_T)^2].    (1.8)

Under Assumption 1.1 and the parameterization (1.3), we have that the null hypothesis
H_{0,CM} is true for b²σ²_∞ = σ²_∞. Thus the null hypothesis is true for b = 1, as we have
assumed that b is positive. Looking at the distribution of the t-test statistic under the
null hypothesis H_{0,CM} and Assumption 1.1, and using Equation (1.3), we get

    T^{1/2} \hat{\beta}_T \hat{\sigma}^{-1}_{\infty,T} = T^{1/2} (\hat{\beta}_T - \beta_T) \hat{\sigma}^{-1}_{\infty,T} + T^{1/2} T^{-1/2} b \sigma_\infty \hat{\sigma}^{-1}_{\infty,T} \xrightarrow{d} N(0,1) + 1.    (1.9)

The distribution under the null is a non-central distribution. This non-central asymptotic
distribution is used to obtain critical values c̃ for the t-statistic under the
hypothesis H_{0,CM}, following Clark and McCracken (2012).
The asymptotic distribution is non-central because under the null hypothesis
the coefficient is not zero. The critical values c̃ are different from those for the standard
significance test and depend on the sign of b (see Clark and McCracken, 2012, for details).
More importantly, imposing the null hypothesis of the CM-test is not achieved
by setting β = 0. Therefore we cannot set β = 0 if the CM-test does not reject the null
hypothesis. Instead, we impose this null hypothesis, which can be achieved by setting
the coefficient equal to an estimate of its asymptotic standard deviation,

    \beta_{0,CM} = \sqrt{\operatorname{var}[\hat{\beta}_T]} = \sqrt{T^{-1} \hat{\sigma}^2_{\infty,T}} = T^{-1/2} \hat{\sigma}_{\infty,T}.    (1.10)

Note that we utilized the sign restriction on b to identify the sign of β_{0,CM} under the
null.

This results in the following pre-test estimator based on the CM-test, which we
call CMPT (Clark-McCracken Pre-Test):

    \hat{\beta}^{CMPT}_T = \hat{\beta}_T \, I(\hat{\tau}_T > \tilde{c}) + T^{-1/2} \hat{\sigma}_{\infty,T} \, I(\hat{\tau}_T \le \tilde{c}),    (1.11)

where, for the same confidence level, the critical value c̃ is different from the critical
value c used in the standard pre-test estimator (1.6), because the asymptotic
distributions of the test statistics differ.
The bagging version of the CMPT estimator (1.11), henceforth called CMBG, is
defined as

    \hat{\beta}^{CMBG}_T = \frac{1}{B} \sum_{b=1}^{B} \left[ \hat{\beta}^*_b \, I(\hat{\tau}^*_b > \tilde{c}) + T^{-1/2} \hat{\sigma}_{\infty,T} \, I(\hat{\tau}^*_b \le \tilde{c}) \right].    (1.12)

The first term in the sum is exactly the standard bagging estimator, except for the
different critical values. The critical values for CMBG come from the normal distribution
N(1,1), while critical values for standard bagging come from the standard
normal distribution. The second term in the sum of Equation (1.12) stems from the
cases where the null is not rejected for bootstrap replication b. Note that we do not
re-estimate the variance under the null, σ̂²_{∞,T}, for every bootstrap sample. The main
reason to apply bagging is the presence of hard thresholds, which are not involved in
the estimation of σ̂²_{∞,T}, such that there is no obvious reason for bagging the variance estimator.
1.2.1 Asymptotic Distribution and Mean-Squared Error
We have proposed an estimator that is based on the CM-test and better reflects our
goal of improving forecast accuracy rather than testing statistical significance. In this
section, we derive the asymptotic properties of this estimator to see if, and for which
parameter configurations, this estimator indeed improves the asymptotic mean-squared
error (AMSE). The asymptotic distribution for bagging estimators has been
analyzed for bagging t-tests by Bühlmann and Yu (2002), and for sign restrictions by
Gordon and Hall (2009). The following assumption on the bootstrapped least-squares
estimator β̂∗T is needed for the analysis of the bagging estimators.
Assumption 1.2 (Bootstrap consistency)

    \sup_{v \in \mathbb{R}} \left| P^*[T^{1/2}(\hat{\beta}^*_T - \hat{\beta}_T) \le v] - \Phi(v/\sigma_\infty) \right| = o_p(1),    (1.13)

where P* is the bootstrap probability measure.
In Assumption 1.2 we assume that the bootstrap distribution converges to the
asymptotic distribution of the CLT in Assumption 1.1. Under Assumption 1.2, with
a local-to-zero coefficient given by model (1.3), Bühlmann and Yu (2002) derive
the asymptotic distribution for two-sided versions of the pre-test and the bagging
estimators. The one-sided versions considered in this chapter follow immediately as
special cases. Let φ(.) denote the pdf and Φ(.) the cdf of a standard normal variable.
Proposition 1.1 (Special case of Bühlmann and Yu (2002), Proposition 2.2)
Under Assumption 1.1 and model (1.3),

    T^{1/2} \hat{\sigma}^{-1}_{\infty,T} \hat{\beta}^{PT}_T \xrightarrow{d} (Z + b)\, I(Z + b > c),    (1.14)

and, with additionally Assumption 1.2,

    T^{1/2} \hat{\sigma}^{-1}_{\infty,T} \hat{\beta}^{BG}_T \xrightarrow{d} (Z + b)\, \Phi(Z + b - c) + \phi(Z + b - c),    (1.15)

where Z is a standard normal random variable.
The proposition follows immediately from Bühlmann and Yu (2002). The asymptotic distributions depend on the predictor strength b and the critical value c. For
the pre-test estimator, the indicator function enters the asymptotic distribution. The
distribution of the bagging estimator, on the other hand, contains smooth functions
of b and c. Bühlmann and Yu (2002) show how this can reduce the variance of the
estimator substantially for certain values of b and c. We adapt this proposition to
derive the asymptotic distributions of the estimators CMPT, given by Equation (1.11),
and CMBG, given by Equation (1.12).
Proposition 1.2
Under Assumption 1.1 and model (1.3),
$$ T^{1/2}\hat{\sigma}^{-1}_{\infty,T}\,\hat{\beta}^{CMPT}_{T} \xrightarrow{d} (Z+b)\,I(Z+b>\tilde{c}) + I(Z+b\le\tilde{c}), \tag{1.16} $$
and, with additionally Assumption 1.2,
$$ T^{1/2}\hat{\sigma}^{-1}_{\infty,T}\,\hat{\beta}^{CMBG}_{T} \xrightarrow{d} (Z+b)\,\Phi(Z+b-\tilde{c}) + \phi(Z+b-\tilde{c}) + 1 - \Phi(Z+b-\tilde{c}), \tag{1.17} $$
where Z is a standard normal variable.
The proof of the proposition is given in the appendix. The asymptotic distributions are similar to those of the pre-test and bagging estimators (BG and PT), but involve extra terms due to the different null hypothesis. For CMPT, the extra term is simply an indicator function, and for CMBG it involves the standard normal cdf Φ(·).
Figures 1.1 and 1.2 show asymptotic mean-squared error, asymptotic bias, asymptotic squared bias, and asymptotic variance of the pre-test and bagging estimators
for test levels 5% and 1%, respectively. Note that the t -test and the CM-test use different critical values, c and c̃. The results for the two different significance levels,
5% and 1%, are qualitatively identical. The effect of choosing a lower significance
level is that the critical values increase, and the effects from pre-testing become
more pronounced. For the asymptotic mean-squared error (AMSE), we get the usual
picture for PT and PTBG (see Bühlmann and Yu, 2002). Bagging improves the AMSE
compared to pre-testing for a wide range of values of b, except at the extremes. CMBG
compares similarly to CMPT, but shifted towards the right compared to BG and PT.
When looking at any given value of b, there are striking differences between the estimators based on the CM-test and those based on the t-test. Neither CMPT nor CMBG performs well for b close to zero, but their AMSE decreases as b increases, before starting to increase slightly again. For values of b from around 0.5 to 3, CMBG performs better than BG. For values larger than 3, the estimators PT, BG, and CMBG perform similarly and get closer as b increases. Thus, the region where CMBG does not perform well is the region of values of b below 0.5.
The asymptotic biases for CMPT and CMBG are largest at b = 0. For all estimators, the bias can be either positive or negative, depending on b. Bagging can reduce bias compared to the corresponding pre-test estimator, in particular in the region where the pre-test estimator has the largest bias. CMPT and CMBG have very low variance for b close to zero, because the CM-test almost never rejects for these parameters. However, as the null hypothesis is far from the true b in this region, CMPT and CMBG are very biased. As b increases slightly, CMBG attains the lowest asymptotic variance, remaining lowest for b up to around 3.
The asymptotic results show that imposing a different null hypothesis dramatically changes the characteristics of the estimators. The estimator based on the
CM-test is not intended to work for b very close to zero. In this case, the standard
pre-test estimator has much better properties. For larger b, the CM-based estimators
give substantially better forecasting results. These results highlight that the CM-based
estimators will be useful for relations where the coefficient is expected to be strictly
positive or strictly negative, but too small to exploit with an unrestricted coefficient
estimator.
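The AMSE comparisons above can be reproduced numerically by simulating the limiting random variables in (1.14)–(1.17). The following is an illustrative sketch, not code from the chapter; in particular, it reuses the one-sided t-test critical value as a placeholder for the CM-test critical value c̃, which in practice comes from the CM-test's own distribution.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def amse_limits(b, c, c_tilde, n=200_000):
    """Monte Carlo AMSE, E[(X - b)^2], of the limiting variables X of the
    scaled estimators PT (1.14), BG (1.15), CMPT (1.16), and CMBG (1.17)."""
    Z = rng.standard_normal(n)
    pt = (Z + b) * (Z + b > c)
    bg = (Z + b) * norm.cdf(Z + b - c) + norm.pdf(Z + b - c)
    cmpt = (Z + b) * (Z + b > c_tilde) + 1.0 * (Z + b <= c_tilde)
    cmbg = ((Z + b) * norm.cdf(Z + b - c_tilde) + norm.pdf(Z + b - c_tilde)
            + 1 - norm.cdf(Z + b - c_tilde))
    return {k: float(np.mean((v - b) ** 2))
            for k, v in [("PT", pt), ("BG", bg), ("CMPT", cmpt), ("CMBG", cmbg)]}

# At b = 0 the CM-based estimators are heavily biased and have large AMSE;
# for moderate b the comparison reverses (cf. the discussion above).
print(amse_limits(0.0, 1.645, 1.645))
print(amse_limits(2.0, 1.645, 1.645))
```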
1.2.2 Asymptotic Shrinkage Representation
Stock and Watson (2012) provide an asymptotic shrinkage representation of the BG
estimator. This representation, henceforth called BGA, is given by
$$ \hat{\beta}^{BG_A}_{T} = \hat{\beta}_{T}\left[\, 1 - \Phi(c - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\,\phi(c - \hat{\tau}_{T}) \,\right], \tag{1.18} $$
Figure 1.1. Comparison of asymptotic mean-squared error (AMSE), asymptotic bias (Abias), asymptotic squared bias, and asymptotic variance (Avar) as a function of b for 5% significance level, for the estimators PT, BG, CMPT, and CMBG.
Figure 1.2. Comparison of asymptotic mean-squared error (AMSE), asymptotic bias (Abias), asymptotic squared bias, and asymptotic variance (Avar) as a function of b for 1% significance level, for the estimators PT, BG, CMPT, and CMBG.
Figure 1.3. Shrinkage estimators BGA and CMBGA (y-axis) for a given value of the unrestricted parameter estimate β̂ (x-axis) for σ∞ = 0.2 and 5% level. Dotted line is the 45° line.
and Stock and Watson (2012, Theorem 2) show under general conditions that β̂_T^BG = β̂_T^BGA + o_p(1). This allows computation without bootstrap simulation. While bootstrapping can improve test properties, bagging can improve forecasts even without actual resampling. There is no reason to suspect that the estimator based on the asymptotic distribution will be inferior to the standard bagging estimator. Therefore, we consider a version of the bagging estimators that samples from the asymptotic, rather than the empirical, distribution of β̂_T. We can find closed-form solutions for estimators that do not require bootstrap simulations. The asymptotic version of CMBG is henceforth referred to as CMBGA.
Proposition 1.3 (Asymptotic shrinkage representation)
Apply CMBG with the asymptotic distribution of β̂_T under Assumption 1.2; then
$$ \hat{\beta}^{CMBG_A}_{T} = \hat{\beta}_{T}\left[\, 1 - \Phi(\tilde{c} - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\,\phi(\tilde{c} - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\,\Phi(\tilde{c} - \hat{\tau}_{T}) \,\right]. \tag{1.19} $$
The proof of the proposition is given in the appendix. The representation is very similar to BGA in Equation (1.18), with an extra term for the contribution from the null of the CM-test. Note that we can express β̂_T^CMBGA as the OLS estimator β̂_T multiplied by a function that depends on the data only through the t-statistic τ̂_T, just like β̂_T^BGA.
Figure 1.3 plots BGA and CMBGA against the OLS estimate β̂T . The vertical
deviation from the 45◦ line indicates the degree and direction of shrinkage applied
by the estimator to the OLS estimate β̂T . This reveals the main difference between
BGA and CMBGA. Rather than shrinking towards zero, CMBGA shrinks towards σ̂∞,T, which makes a substantial difference for b close to 0. For larger β̂_T, CMBGA, and thus CMBG, shrinks more heavily downwards than BGA.
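Since both asymptotic versions are deterministic functions of the OLS estimate and its t-statistic, they are cheap to evaluate. A minimal sketch of Equations (1.18) and (1.19) follows; the function names are ours, τ̂_T is assumed positive, and β̂_T/τ̂_T equals the shrinkage target T^{-1/2}σ̂_{∞,T}.

```python
import numpy as np
from scipy.stats import norm

def bg_asymptotic(beta_hat, tau_hat, c):
    """BGA shrinkage of Equation (1.18): shrinks beta_hat towards zero."""
    d = c - tau_hat
    return beta_hat * (1.0 - norm.cdf(d) + norm.pdf(d) / tau_hat)

def cmbg_asymptotic(beta_hat, tau_hat, c_tilde):
    """CMBGA shrinkage of Equation (1.19): the extra Phi term shrinks towards
    beta_hat / tau_hat (= T**-0.5 * sigma_hat) instead of towards zero."""
    d = c_tilde - tau_hat
    return beta_hat * (1.0 - norm.cdf(d) + (norm.pdf(d) + norm.cdf(d)) / tau_hat)

# Far above the critical value (large t-statistic) neither estimator shrinks
# much; near it, CMBGA stays above BGA because its shrinkage target is positive.
print(bg_asymptotic(1.0, 8.0, 1.645), cmbg_asymptotic(1.0, 8.0, 1.645))
```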
1.3 Monte Carlo Simulations
The asymptotic analysis suggests that our modified bagging estimator can yield
significant improvements in MSE for the estimation of β. This section uses Monte
Carlo simulations to investigate the performance for the prediction of y t +h,T in small
samples using the estimators presented above. In our linear model (1.1), lower MSE
for estimation of β can be expected to translate directly into lower MSE for prediction
of y t +h,T .
For the Monte Carlo simulations, we generate data from the following model that
is designed to resemble the empirical application of inflation forecasting:
$$ \begin{aligned} y_{t+h,T} &= \mu + \beta_T x_t + u_{t+h}, \\ u_{t+h} &= \epsilon_{t+h} + \theta_1 \epsilon_{t+h-1} + \cdots + \theta_{h-1} \epsilon_{t+1}, \\ x_t &= \phi x_{t-1} + v_t, \\ \epsilon_t &\sim N(0, \sigma^2_{\epsilon}), \qquad v_t \sim N(0, \sigma^2_{v}). \end{aligned} \tag{1.20} $$
We allow for serially correlated errors in the form of an MA(h-1) model. The choice
of AR(1) for x t is guided by the model for the monthly unemployment change series
selected by AIC.
As we vary the sample size, the predictor variable x_t is modeled as a weak predictor with coefficient β_T = T^{−1/2} b σ_∞. We consider values b ∈ {0, 0.5, 1, 2, 4}. For b = 1, we are indifferent between estimating β unrestrictedly and not using the predictor variable. For higher (lower) values of b, including the predictor variable should improve (deteriorate) the forecasting performance. Table 1.1 presents an overview of all the forecasting methods.
We are interested in the small-sample properties and consider sample sizes T ∈
{25, 50, 200}. Furthermore, we set µ = 0.1 and φ = 0.66, which we take from our
empirical example, i.e., monthly changes in unemployment. Additionally, we consider
φ = 0.9 to investigate the behavior for more persistent processes. Finally, we consider
the forecast horizons h = 1 and h = 6. The MA coefficients are set to θi = 0.4i for
1 ≤ i ≤ h − 1, and 0 otherwise. The critical values are taken from the respective
asymptotic distribution of both tests for significance levels 5% and 1%. We run 10,000
Monte Carlo simulations and use 299 bootstrap replications for bagging.
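A generator for model (1.20) can be sketched as follows. This is our own illustration, not the chapter's code; for simplicity it uses σ_ε as a stand-in for the scaling factor σ_∞ in β_T = T^{−1/2} b σ_∞, whereas the chapter scales by the long-run standard deviation.

```python
import numpy as np

def simulate_dgp(T, b, h, phi=0.66, mu=0.1, sigma_eps=1.0, sigma_v=1.0, seed=None):
    """Draw one sample (y_{t+h}, x_t), t = 1..T, from model (1.20)."""
    rng = np.random.default_rng(seed)
    beta_T = b * sigma_eps / np.sqrt(T)          # weak, local-to-zero coefficient
    # AR(1) predictor with stationary initial condition
    v = rng.normal(0.0, sigma_v, T + 1)
    x = np.empty(T + 1)
    x[0] = v[0] / np.sqrt(1.0 - phi ** 2)
    for t in range(1, T + 1):
        x[t] = phi * x[t - 1] + v[t]
    # MA(h-1) forecast errors: u_{t+h} = eps_{t+h} + sum_i theta_i * eps_{t+h-i}
    theta = np.concatenate(([1.0], 0.4 * np.arange(1, h)))   # theta_i = 0.4 i
    eps = rng.normal(0.0, sigma_eps, T + h)
    u = np.convolve(eps, theta)[h - 1 : h - 1 + T]
    y = mu + beta_T * x[:T] + u
    return y, x[:T]
```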
Columns 2 through 9 of Tables 1.2-1.5 show the MSE for the different estimators
listed in Table 1.1. The last two columns show the rejection frequencies for the t -test
and CM-test. The MSE is reported in excess of var[u t +h ], which does not depend on
the forecasting model, such that the true model with known parameters will have
MSE of zero.
Table 1.1. Forecasting methods for Monte Carlo and empirical application

Name    Method              Formula for forecast ŷ_{t+h,T}
RE      Restricted Model    $\hat{\mu}^{RE}_{T}$
UR      Unrestricted Model  $\hat{\mu}^{UR}_{T} + \hat{\beta}_{T} x_{t}$
PT      Pre-Test t-test     $\hat{\mu}^{UR}_{T} + I(\hat{\tau}_{T} > c)\,\hat{\beta}_{T} x_{t}$
BG      Bagging t-test      $\hat{\mu}^{UR}_{T} + B^{-1}\sum_{b=1}^{B} \hat{\beta}^{*}_{b}\, I(\hat{\tau}^{*}_{b} > c)\, x_{t}$
BGA     Asymptotic BG       $\hat{\mu}^{UR}_{T} + \hat{\beta}_{T}\big[1 - \Phi(c - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\phi(c - \hat{\tau}_{T})\big] x_{t}$
CMPT    Pre-Test CM-test    $\hat{\mu}^{UR}_{T} + \big(\hat{\beta}_{T} I(\hat{\tau}_{T} > \tilde{c}) + T^{-1/2}\hat{\sigma}_{\infty,T} I(\hat{\tau}_{T} \le \tilde{c})\big) x_{t}$
CMBG    Bagging CM-test     $\hat{\mu}^{UR}_{T} + B^{-1}\sum_{b=1}^{B}\big(\hat{\beta}^{*}_{b} I(\hat{\tau}^{*}_{b} > \tilde{c}) + T^{-1/2}\hat{\sigma}_{\infty,T} I(\hat{\tau}^{*}_{b} \le \tilde{c})\big) x_{t}$
CMBGA   Asymptotic CMBG     $\hat{\mu}^{UR}_{T} + \hat{\beta}_{T}\big[1 - \Phi(\tilde{c} - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\phi(\tilde{c} - \hat{\tau}_{T}) + \hat{\tau}_{T}^{-1}\Phi(\tilde{c} - \hat{\tau}_{T})\big] x_{t}$

Note: µ̂_T and β̂_T are the OLS estimates that depend on the forecast horizon.
For different values of b, we get the overall patterns expected from the asymptotic
results for all parameter configurations, sample sizes T , persistence parameters φ,
and forecast horizons h. For b = 0 the restricted model is correct. Forecast errors
of the restricted model stem only from mean estimation. The CM-based methods
perform worst, as the null hypothesis b = 1 is incorrect, and the CM-test rejects very
infrequently. The null of the t -test-based pre-test estimator is correct and is imposed
whenever the test fails to reject, which happens frequently under all parameter
configurations. This allows PT and its bagging version to achieve a lower MSE than
the unrestricted model.
For b = 0.5, the predictor is still so weak that the unrestricted model always
performs best. The difference between using t -tests and CM-tests is not as large as
it is for b = 0. Setting b = 1 imposes that the unrestricted and restricted methods
asymptotically have the same MSE for estimation of β. For T = 25, however, the
restricted model has substantially lower MSE than the unrestricted model for the
prediction of y t +h,T . The difference disappears as the sample size grows. The rejection
frequency for the CM-tests is fairly close to the nominal size for h = 1. For h = 6 the
test is over-sized in small samples. Despite these small sample issues of the test, the
CM-based estimators work well when b = 1 even for T = 25 with φ = 0.66 in Tables 1.2
and 1.4. For φ = 0.9, shown in Tables 1.3 and 1.5, CM-test and t -test-based estimators
perform very similarly for T = 25.
For b = 2, the CM-based method is able to improve the MSE, even though the null
hypothesis is not precisely true. The magnitude of the improvement depends on the
persistence parameter φ, critical value, and sample size. For b = 4 the coefficient is
large enough such that the unrestricted model dominates. All other models except
RE provide very similar performance. Both the CM-test and the standard significance test reject very frequently, such that the different values of the coefficient under the two null hypotheses are less important.
Our Monte Carlo simulations confirm that the asymptotic properties of the coefficient estimators carry over to the small sample behavior of the estimators and
the resulting forecasting performance for the target variable. The bagging version
of the CM-test can be expected to perform well when bias is not too small relative
to the estimation uncertainty, i.e., b is not close to zero. If bias is much smaller than
estimation uncertainty, then methods that shrink towards zero dominate. Our estimators will work well if the predictor is weak but the coefficient is large enough that
excluding the predictor induces a substantial bias.
Table 1.2. Monte Carlo Results for φ = 0.66 and c 0.95
[Table: MSE of RE, UR, PT, PTBG, PTBGA, CMPT, CMBG, and CMBGA, and rejection frequencies (%) of the t-test and CM-test, for T = 25, 50, 200 and h = 1, 6, in panels b = 0, 0.5, 1, 2, 4.]
Notes: MSE calculated in excess of var[u_{t+h}], and multiplied by 100.
Table 1.3. Monte Carlo Results for φ = 0.9 and c 0.95
[Table: MSE of RE, UR, PT, PTBG, PTBGA, CMPT, CMBG, and CMBGA, and rejection frequencies (%) of the t-test and CM-test, for T = 25, 50, 200 and h = 1, 6, in panels b = 0, 0.5, 1, 2, 4.]
Notes: MSE calculated in excess of var[u_{t+h}], and multiplied by 100.
Table 1.4. Monte Carlo Results for φ = 0.66 and c 0.99
[Table: MSE of RE, UR, PT, PTBG, PTBGA, CMPT, CMBG, and CMBGA, and rejection frequencies (%) of the t-test and CM-test, for T = 25, 50, 200 and h = 1, 6, in panels b = 0, 0.5, 1, 2, 4.]
Notes: MSE calculated in excess of var[u_{t+h}], and multiplied by 100.
Table 1.5. Monte Carlo Results for φ = 0.9 and c 0.99
[Table: MSE of RE, UR, PT, PTBG, PTBGA, CMPT, CMBG, and CMBGA, and rejection frequencies (%) of the t-test and CM-test, for T = 25, 50, 200 and h = 1, 6, in panels b = 0, 0.5, 1, 2, 4.]
Notes: MSE calculated in excess of var[u_{t+h}], and multiplied by 100.
1.4 Application to CPI Inflation Forecasting
Inflation is a key macroeconomic variable, measuring changes in consumer price
levels. Clearly, these price levels depend on the demand and supply for production
and consumer goods. Thus, one would expect them to be linked negatively to unemployment and positively to industrial production. While economists and the media
pay attention to such variables to assess inflationary pressure, the variables do not
help to forecast inflation more accurately than univariate models. Cecchetti, Chu,
and Steindel (2000) find that using popular candidate variables as predictors fails to
provide more accurate forecasts for US inflation, and that the relationship between
inflation and some of the predictors is of the opposite sign as one would expect. Thus,
they conclude that single predictor variables provide unreliable inflation forecasts.
Atkeson and Ohanian (2001) consider more complex autoregressive distributed-lag models for inflation forecasting and conclude that none of the models outperform a random walk model. Stock and Watson (2007) argue that the relative
performance of inflation forecasting methods depends crucially on the time period
considered. Not only does the relative performance of forecasting methods change
over time, but coefficients in the models are also likely to be time-varying. Stock and
Watson (2009) go so far as to call it the consensus that including macroeconomic
variables in models does not improve inflation forecasts over univariate benchmarks
that do not utilize information other than past inflation.
We denote inflation by
$$ \pi^{h}_{t} = \ln(P_{t+h}/P_{t}), \tag{1.21} $$
where P_t is the level of the US consumer price index (CPI, All Urban Consumers: All Items). We specify our models in terms of changes in inflation and aim to forecast these changes for different forecast horizons h. We define the change in inflation as $\Delta\pi^{h}_{t} = h^{-1}\pi^{h}_{t} - \pi^{1}_{t-1}$, i.e., the change of average inflation over the next h months compared to the most recent inflation rate. The forecast models are then specified as
$$ \Delta\pi^{h}_{t} = \mu + \beta x_{t} + \epsilon_{t+h}, \tag{1.22} $$
where x t is some predictor variable. For example, with a forecast horizon of 6 months
(h = 6), we forecast the change in average inflation over the next 6 months compared
to the current month’s inflation. Figure 1.4 shows the target variable ∆πht for different
forecast horizons h. Even at the longest forecast horizon of 12 months, where we are
forecasting annual inflation, the series is not very persistent. The estimation methods
used to determine the parameters are the same as the ones used for the Monte Carlo
simulations and are summarized in Table 1.1.
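Constructing the target variable from a CPI level series reduces to a short alignment exercise. A sketch of this transformation (our own helper, not code from the chapter):

```python
import numpy as np

def inflation_change_target(p, h):
    """Build Delta pi^h_t = h^{-1} pi^h_t - pi^1_{t-1} from CPI levels p,
    where pi^h_t = ln(P_{t+h}/P_t) as in (1.21); returns values for
    t = 1, ..., len(p) - h - 1."""
    logp = np.log(np.asarray(p, dtype=float))
    pi_h = logp[h:] - logp[:-h]          # pi^h_t for t = 0, ..., n-h-1
    pi_1 = logp[1:] - logp[:-1]          # pi^1_t for t = 0, ..., n-2
    # align pi^h_t with the lagged one-month rate pi^1_{t-1}
    return pi_h[1:] / h - pi_1[: len(pi_h) - 1]

# If prices grow at a constant monthly rate, average future inflation equals
# the most recent inflation rate, so the target is identically zero.
p = np.exp(0.002 * np.arange(100))
print(np.allclose(inflation_change_target(p, 6), 0.0))  # True
```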
As predictor variables x t , we use unemployment changes (UNEMP) and growth
in industrial production (INDPRO). Both variables are seasonally adjusted. We use
the latest data vintage available from St. Louis Fed's FRED¹ on August 21, 2013 for monthly data over the period 1:1948–7:2013. Considering changes in unemployment and growth in industrial production rather than the levels of the two series ensures that the predictor variables are stationary.

¹ URL: http://research.stlouisfed.org/fred2
For multiple-step ahead forecasts, we choose a direct forecasting approach. Thus,
the test statistics and parameter estimates depend on the forecasting horizon and
can differ. For all forecast horizons, we use a short estimation window to allow for
parameter instability. We use estimation window lengths of 24 and 60 months, which
are reasonable sample sizes as we use only one predictor variable.
Bagging is conducted using a block bootstrap with block-length optimally chosen
by the method of Politis and White (2004), applying the correction of Patton, Politis,
and White (2009). For multiple-month forecasts (h > 1), we calculate standard errors
using the method of Newey and West (1987) to account for serial correlation.
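The HAC t-statistic for the slope can be hand-rolled in a few lines. The sketch below implements a Newey-West estimator with Bartlett weights and lag truncation h − 1, matching the MA(h − 1) overlap of direct h-step forecast regressions; the chapter's exact implementation may differ.

```python
import numpy as np

def newey_west_tstat(x, y, h):
    """t-statistic of the OLS slope with Newey-West (1987) HAC variance,
    using lag truncation h-1 for direct h-step-ahead forecast regressions."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    T = len(y)
    Xe = X * e[:, None]                    # score contributions x_t * e_t
    S = Xe.T @ Xe / T                      # lag-0 term
    for lag in range(1, h):                # Bartlett weights w = 1 - lag/h
        w = 1.0 - lag / h
        G = Xe[lag:].T @ Xe[:-lag] / T
        S += w * (G + G.T)
    Qinv = np.linalg.inv(X.T @ X / T)
    V = Qinv @ S @ Qinv / T                # HAC covariance of beta_hat
    return beta[1] / np.sqrt(V[1, 1])
```

For h = 1 the loop is empty and the formula collapses to the heteroskedasticity-robust (White) variance.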
In Table 1.6, we show the MSE results for the pseudo out-of-sample forecasting
exercise. The maximal out-of-sample period depends on the estimation window
length m and the forecast horizon h. For example, for m = 24 and h = 6 we forecast
inflation over 3:1953–7:2013 (725 observations) and for m = 60 and h = 6 over 3:1956–
7:2013 (689 observations).
The first observation, in line with the existing literature on inflation forecasting, is
that the restricted model is very hard to beat. The unrestricted model never performs
better than the restricted model. The relative performance of the forecasting methods
depends on the forecast horizon h. We apply the model confidence set of Hansen,
Lunde, and Nason (2011) to the resulting loss series in order to determine whether
the out-of-sample results are statistically significant. The MulCom package version 3.00² for the Ox programming language (see Doornik, 2007) is used to construct the model confidence sets³.
The forecasting results show that the performance of the models is hard to distinguish statistically. The model confidence set contains many models in most cases. In
particular, CMBGA and CMBG are never excluded from the 95% model confidence
set, such that there is no statistical evidence against these two forecasting methods. In
terms of mean-squared error, CMBGA and CMBG perform well compared to standard
bagging, BG and BGA, and the unrestricted model. The different critical values, for
the significance levels 5% and 1%, have only a minor effect on the performance of
the predictors.
The performance differences between the bootstrap and the asymptotic versions
of the bagging estimators are small. Thus, the asymptotic versions BGA and CMBGA
offer computationally attractive alternatives to the bootstrap-based predictors BG
and CMBG.
² Available from the homepage http://mit.econ.au.dk/vip_htm/alunde/MULCOM/MULCOM.HTM.
³ We use the following settings for the model confidence set construction in MulCom: 9999 bootstrap replications with block bootstrapping, block size equal to the forecasting horizon, the range test for equal predictive ability δR,M, and the range elimination rule eR,M; see Hansen et al. (2011) for details.
Table 1.6. MSE relative to restricted model for out-of-sample inflation forecasting.
[Table: relative MSE of RE, UR, PT, BGA, BG, CM, CMBGA, and CMBG for the predictors INDPRO and UNEMP, estimation windows m = 24 (Panel 1) and m = 60 (Panel 2), critical values c 0.99 and c 0.95, and horizons h = 1, 3, 6, 12.]
Notes: An asterisk (*) indicates that the model is included in the 95% model confidence set (MCS). The MCS are computed for all methods with the same m, c, and h, i.e., for every column in each panel. Thus, each MCS is computed for 15 models.
Figures 1.5 and 1.6 display the time series of coefficients from unrestricted estimation and CMBGA for m = 24 and m = 60, respectively. For m = 24, the coefficients
from unrestricted estimation are very volatile and frequently change sign for both predictor variables. CMBGA imposes the sign restriction by construction and shrinks the
coefficients heavily towards the null hypothesis, which results in much less volatile
coefficients. For m = 60, the coefficients from unrestricted estimation are more stable, and sign changes of the coefficients are less frequent. CMBGA again shrinks the
coefficients substantially and imposes the sign restriction.
Overall, the proposed methods CMBG and CMBGA provide competitive forecasting
results and are never excluded from the model confidence set. We find, however, that
no method is significantly better than the random walk benchmark, i.e., the forecasts
from the restricted model. Inflation is a difficult time series to forecast, and using other economic variables as predictors is of limited value in the framework considered in this chapter.
1.5 Conclusion
Bootstrap aggregation (bagging) is typically applied to t -tests of whether coefficients
are significantly different from zero. In finite samples, a significantly non-zero coefficient is not sufficient to guarantee that including the predictor improves forecast
accuracy. Instead, estimation variance has to be taken into account and weighed
against bias from excluding the predictor.
We propose a novel bagging estimator that is based on the in-sample test for predictive ability of Clark and McCracken (2012), which directly addresses the bias-variance trade-off. We show that this estimator performs well when bias and variance
are of similar magnitude. This is achieved by shrinking the coefficient towards an
estimate of the estimation variance rather than shrinking towards zero. In order
to find this shrinkage target, the sign of the coefficient has to be known. Thus, the
method is appropriate for predictor variables for which theory postulates the sign of
the relation, as is often the case for economic variables.
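The pretest-with-shrinkage idea described above can be sketched in a few lines. The snippet below is an illustrative simplification, not the chapter's implementation: it uses a pairs bootstrap rather than the dependent bootstrap used in the empirical work, and the function name `cmbg_estimate` and the default critical value are hypothetical.

```python
import numpy as np

def cmbg_estimate(y, x, c_tilde=1.645, B=500, seed=0):
    """Sketch of a bagged pretest estimator that shrinks toward an estimate
    of the estimation standard deviation (T^{-1/2} sigma_hat) rather than
    toward zero, assuming a positive sign restriction on the slope.
    Simplified: pairs bootstrap instead of a dependent bootstrap."""
    rng = np.random.default_rng(seed)
    T = len(y)

    def ols_slope(yb, xb):
        X = np.column_stack([np.ones(len(xb)), xb])
        beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
        resid = yb - X @ beta
        xc = xb - xb.mean()
        # asymptotic std. dev. of sqrt(T) * slope estimate
        sig_inf = np.sqrt((resid @ resid) / len(yb) / (xc @ xc / len(yb)))
        return beta[1], sig_inf

    _, sig_full = ols_slope(y, x)       # full-sample scale estimate
    target = sig_full / np.sqrt(T)      # shrinkage target: T^{-1/2} * sigma_hat
    draws = np.empty(B)
    for j in range(B):
        idx = rng.integers(0, T, T)     # bootstrap resample of (y, x) pairs
        b_star, _ = ols_slope(y[idx], x[idx])
        t_star = np.sqrt(T) * b_star / sig_full
        # keep the bootstrap estimate if the t-statistic clears the threshold,
        # otherwise shrink to the positive target instead of to zero
        draws[j] = b_star if t_star > c_tilde else target
    return draws.mean()
```

Because the shrinkage target is positive, the bagged estimate respects the postulated sign even when individual bootstrap draws fail the pretest.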
The new bagging estimator is shown to have good asymptotic properties, dominating the standard bagging estimator if bias and estimation variance are of similar
magnitude. If, however, the data-generating coefficient is very close to zero, such
that the forecasting power of the predictor is completely dominated by estimation
uncertainty, the new estimator is very biased and thus performs poorly.
In this chapter, we have been concerned with improving the forecast accuracy of a single
predictor variable when predictive power is diluted by estimation variance. Using
single predictors for forecasting is important, as many inflation predictors, for example, are considered individually to assess their predictive power (cf. Cecchetti
et al., 2000). Econometric forecasting models, however, typically include multiple
correlated predictor variables. In this context, our estimator could be applied to the
individual predictor variables, just as standard bagging is applied in this context by,
1.5. C ONCLUSION
23
e.g., Inoue and Kilian (2008). The drawbacks of applying our estimator in this context
to each predictor are that, first, it is harder to motivate sign restrictions on coefficients
and, second, covariances are ignored when assessing the estimation uncertainty.
The second issue can be fixed by using orthogonal factors instead of the original
predictors, which makes it potentially even harder to find credible sign restrictions.
The extension to multivariate specifications is left to future research.
[Four panels for h = 1, 3, 6, and 12; x-axis 1960–2000.]
Figure 1.4. Time series of the target variables $\Delta\pi_t^h$ at the different forecasting horizons h.
[Two panels of recursive coefficient estimates over 1950–2010.]
(a) Coefficients for unemployment changes (UNEMP).
(b) Coefficients for industrial production growth (INDPRO).
Figure 1.5. Recursive coefficients for UR and CMBGA in forecast regressions of inflation
changes on (a) unemployment changes and (b) industrial production growth. Forecast horizon
h = 12 and significance level 1%. Estimation window length m = 24.
[Two panels of recursive coefficient estimates over 1960–2010.]
(a) Coefficients for unemployment changes (UNEMP).
(b) Coefficients for industrial production growth (INDPRO).
Figure 1.6. Recursive coefficients for UR and CMBGA in forecast regressions of inflation
changes on (a) unemployment changes and (b) industrial production growth. Forecast horizon
h = 12 and significance level 1%. Estimation window length m = 60.
1.6 References
Atkeson, A., Ohanian, L. E., 2001. Are Phillips curves useful for forecasting inflation?
Federal Reserve Bank of Minneapolis Quarterly Review 25 (1), 2–11.
Breiman, L., 1996. Bagging predictors. Machine Learning 24, 123–140.
Bühlmann, P., Yu, B., 2002. Analyzing bagging. The Annals of Statistics 30 (4), 927–961.
Campbell, J. Y., Thompson, S. B., 2008. Predicting excess stock returns out of sample:
Can anything beat the historical average? Review of Financial Studies 21 (4), 1509–
1531.
Cecchetti, S. G., Chu, R. S., Steindel, C., 2000. The unreliability of inflation indicators.
Federal Reserve Bank of New York: Current Issues in Economics and Finance. 4 (6).
Cheung, Y.-W., Chinn, M. D., Pascual, A. G., 2005. Empirical exchange rate models
of the nineties: Are any fit to survive? Journal of International Money and Finance
24 (7), 1150–1175.
Clark, T. E., McCracken, M. W., 2012. In-sample tests of predictive ability: A new
approach. Journal of Econometrics 170 (1), 1–14.
Doornik, J. A., 2007. Object-Oriented Matrix Programming Using Ox, 3rd ed. Timberlake Consultants Press, London, and Oxford: www.doornik.com.
Gordon, I. R., Hall, P., 2009. Estimating a parameter when it is known that the parameter exceeds a given value. Australian & New Zealand Journal of Statistics 51 (4),
449–460.
Hansen, P. R., Lunde, A., Nason, J. M., 2011. The model confidence set. Econometrica
79 (2), 453–497.
Hillebrand, E., Lee, T.-H., Medeiros, M. C., 2013. Bagging constrained equity premium
predictors. In: Haldrup, N., Meitz, M., Saikkonen, P. (Eds.), Essays in Nonlinear
Time Series Econometrics (Festschrift for Timo Teräsvirta). Oxford University Press
(forthcoming).
Hillebrand, E., Medeiros, M. C., 2010. The benefits of bagging for forecast models of
realized volatility. Econometric Reviews 29 (5-6), 571–593.
Inoue, A., Kilian, L., 2008. How useful is bagging in forecasting economic time series?
A case study of US consumer price inflation. Journal of the American Statistical
Association 103 (482), 511–522.
Meese, R. A., Rogoff, K., 1983. Empirical exchange rate models of the seventies: Do
they fit out of sample? Journal of International Economics 14 (1), 3–24.
Newey, W. K., West, K. D., 1987. A simple, positive semi-definite, heteroskedasticity
and autocorrelation consistent covariance matrix. Econometrica 55 (3), 703–708.
Patton, A., Politis, D. N., White, H., 2009. Correction to "Automatic block-length
selection for the dependent bootstrap" by D. Politis and H. White. Econometric
Reviews 28 (4), 372–375.
Pettenuzzo, D., Timmermann, A., Valkanov, R., 2013. Forecasting stock returns under
economic constraints. CEPR Discussion Papers No. 9377.
Politis, D. N., White, H., 2004. Automatic block-length selection for the dependent
bootstrap. Econometric Reviews 23 (1), 53–70.
Rapach, D. E., Strauss, J. K., 2010. Bagging or combining (or both)? an analysis based
on forecasting us employment growth. Econometric Reviews 29 (5-6), 511–533.
Stock, J. H., Watson, M. W., 2007. Why has U.S. inflation become harder to forecast?
Journal of Money, Credit and Banking 39 (s1), 3–33.
Stock, J. H., Watson, M. W., 2009. Phillips curve inflation forecasts. In: Fuhrer, J., Kodrzycki, Y., Little, J., Olivei, G. (Eds.), Understanding Inflation and the Implications
for Monetary Policy: A Phillips Curve Retrospective. MIT Press, pp. 99–186.
Stock, J. H., Watson, M. W., 2012. Generalized shrinkage methods for forecasting using
many predictors. Journal of Business & Economic Statistics 30 (4), 481–493.
1.7 Appendix
1.7.1 Proof of Proposition 1.2
The proof follows Bühlmann and Yu (2002), Proposition 2.2. From Assumption 1.1 and $\beta_T = T^{-1/2} b \sigma_\infty$, we get
$$
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T = (\hat{\sigma}_{\infty,T}/\sigma_\infty)^{-1}\, T^{1/2}\sigma_\infty^{-1}\hat{\beta}_T \xrightarrow{d} Z + b.
$$
For CMPT we have
$$
\begin{aligned}
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T^{CMPT}
&= T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T\, 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T \ge \tilde{c})
 + T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\, T^{-1/2}\hat{\sigma}_{\infty,T}\, 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T < \tilde{c}) \\
&= T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T\, 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T \ge \tilde{c}) + 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T < \tilde{c}).
\end{aligned}
$$
The expression on the last line is a function of $T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T$ that is continuous except for points of measure 0, therefore the continuous mapping theorem applies and we get
$$
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T\, 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T \ge \tilde{c}) + 1(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T < \tilde{c})
\xrightarrow{d} (Z+b)\,1(Z+b \ge \tilde{c}) + 1(Z+b < \tilde{c}),
$$
where $Z$ is a standard normal random variable.

Next consider the bagged version,
$$
\begin{aligned}
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T^{CMBG}
&= \frac{1}{B}\sum_{b=1}^{B}\Big[ T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*}\, I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} > \tilde{c})
 + T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\, T^{-1/2}\hat{\sigma}_{\infty,T}\, I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} \le \tilde{c}) \Big] \\
&= \frac{1}{B}\sum_{b=1}^{B}\Big[ T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*}\, I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} > \tilde{c}) + I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} \le \tilde{c}) \Big].
\end{aligned}
$$
From Assumption 1.2, we get
$$
T^{1/2}(\hat{\beta}_T^{*} - \hat{\beta}_T) \xrightarrow{d^{*}} N(0, \sigma_\infty^{2}),
$$
and thus
$$
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}(\hat{\beta}_T^{*} - \hat{\beta}_T) \xrightarrow{d^{*}} N(0,1).
$$
This can be expressed as
$$
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T \xrightarrow{d} Z + b, \quad Z \sim N(0,1), \qquad
T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T^{*} \xrightarrow{d^{*}} W \mid Z \sim N(Z + b, 1),
$$
where $W \mid Z$ denotes the distribution of $W$ conditional on $Z$. Then, again using continuity almost everywhere of the estimator, we get
$$
\begin{aligned}
\frac{1}{B}\sum_{b=1}^{B}\Big[ T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*}\, I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} > \tilde{c}) + I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_b^{*} \le \tilde{c}) \Big]
&\xrightarrow{d^{*}} E_W\big[ W I(W > \tilde{c}) + I(W \le \tilde{c}) \mid Z \big] \\
&= E_W[W \mid Z] - E_W\big[ W I(W \le \tilde{c}) \mid Z \big] + E_W\big[ I(W \le \tilde{c}) \mid Z \big] \\
&= Z + b - E_W\big[ W I(W \le \tilde{c}) \mid Z \big] + \Phi(\tilde{c} - Z - b).
\end{aligned}
$$
Next, we use that for $x \sim N(m,1)$ we have (see Eqn. (6.3) in Bühlmann and Yu, 2002)
$$
E[x I(x \le k)] = m\Phi(k - m) - \phi(k - m),
$$
and thus
$$
\begin{aligned}
Z + b - E_W\big[ W I(W \le \tilde{c}) \mid Z \big] + \Phi(\tilde{c} - Z - b)
&= Z + b - (Z+b)\Phi(\tilde{c} - Z - b) + \phi(\tilde{c} - Z - b) + 1 - \Phi(Z + b - \tilde{c}) \\
&= Z + b - (Z+b)\big(1 - \Phi(Z + b - \tilde{c})\big) + \phi(\tilde{c} - Z - b) + 1 - \Phi(Z + b - \tilde{c}) \\
&= (Z+b)\Phi(Z + b - \tilde{c}) + \phi(Z + b - \tilde{c}) + 1 - \Phi(Z + b - \tilde{c}),
\end{aligned}
$$
which completes the proof.
1.7.2 Proof of Proposition 1.3
Let $\beta_A \sim N(\hat{\beta}_T, T^{-1}\hat{\sigma}_{\infty,T}^{2})$, the random variable sampled from the asymptotic distribution of the OLS estimator for given $\hat{\beta}_T$ and $\hat{\sigma}_{\infty,T}$. First, we consider the asymptotic version of the standard bagging estimator, BGA. By the same arguments as used in the proof of Proposition 1.2 we get
$$
\begin{aligned}
\hat{\beta}_T^{BGA}
&= E[\beta_A I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A > c)] \\
&= \hat{\beta}_T - E[\beta_A I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A \le c)] \\
&= \hat{\beta}_T - T^{-1/2}\hat{\sigma}_{\infty,T} E[T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A \le c)] \\
&= \hat{\beta}_T - \hat{\beta}_T \Phi(c - T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T) + T^{-1/2}\hat{\sigma}_{\infty,T}\phi(c - T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T).
\end{aligned}
$$
With $\hat{\tau}_T = T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\hat{\beta}_T$ we get
$$
\begin{aligned}
\hat{\beta}_T^{BGA}
&= \hat{\beta}_T\big[1 - \Phi(c - \hat{\tau}_T)\big] + T^{-1/2}\hat{\sigma}_{\infty,T}\phi(c - \hat{\tau}_T) \\
&= \hat{\beta}_T\big[1 - \Phi(c - \hat{\tau}_T) + \hat{\tau}_T^{-1}\phi(\hat{\tau}_T - c)\big].
\end{aligned}
$$
We proceed along the same lines for $\hat{\beta}_T^{CMBGA}$:
$$
\begin{aligned}
\hat{\beta}_T^{CMBGA}
&= E[\beta_A I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A > \tilde{c}) + T^{-1/2}\hat{\sigma}_{\infty,T} I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A \le \tilde{c})] \\
&= E[\beta_A I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A > \tilde{c})] + E[T^{-1/2}\hat{\sigma}_{\infty,T} I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A \le \tilde{c})] \\
&= \hat{\beta}_T^{BGA} + T^{-1/2}\hat{\sigma}_{\infty,T} E[I(T^{1/2}\hat{\sigma}_{\infty,T}^{-1}\beta_A \le \tilde{c})] \\
&= \hat{\beta}_T^{BGA} + T^{-1/2}\hat{\sigma}_{\infty,T}\Phi(\tilde{c} - \hat{\tau}_T),
\end{aligned}
$$
which gives the desired result:
$$
\begin{aligned}
\hat{\beta}_T^{CMBGA}
&= \hat{\beta}_T\big[1 - \Phi(\tilde{c} - \hat{\tau}_T) + \hat{\tau}_T^{-1}\phi(\tilde{c} - \hat{\tau}_T)\big] + T^{-1/2}\hat{\sigma}_{\infty,T}\Phi(\tilde{c} - \hat{\tau}_T) \\
&= \hat{\beta}_T\big[1 - \Phi(\tilde{c} - \hat{\tau}_T) + \hat{\tau}_T^{-1}\phi(\tilde{c} - \hat{\tau}_T) + \hat{\tau}_T^{-1}\Phi(\tilde{c} - \hat{\tau}_T)\big].
\end{aligned}
$$
CHAPTER 2
RETURN PREDICTABILITY, MODEL UNCERTAINTY, AND ROBUST INVESTMENT
Manuel Lukas
Aarhus University and CREATES
Abstract
Under stock return predictability, investment decisions are based on conditional
expectations of stock returns. The choice of appropriate predictor variables is, however, subject to great uncertainty. In this chapter, we use the model confidence set
approach to quantify uncertainty about expected utility from stock market investment, accounting for potential return predictability, over the sample period 1966:01–
2002:12. We find that confidence sets imply economically large and time-varying
uncertainty about expected utility from investment. We propose investment strategies aimed at reducing the impact of model uncertainty. Reducing model uncertainty
requires lower investment in stocks, but the return predictability still leads to economic gains for investors.
Keywords: Return predictability, Model uncertainty, Model confidence set, Portfolio
choice, Loss function.
2.1 Introduction
There is substantial disagreement regarding the relevant conditioning variables,
model specification, and economic significance of stock return predictability. The
large literature on return predictability documents that certain variables, for example
valuation ratios, help predict stock market excess returns (see, e.g., Fama and
French, 1988; Barberis, 2000; Lewellen, 2004; Ang and Bekaert, 2007; Lettau and Ludvigson, 2010, among many other studies). Strongly supportive evidence of predictive
power mostly stems from in-sample analysis. Robustness, stability, and economic
significance of predictability are still disputed, as out-of-sample results are much less
conclusive (Timmermann, 2008). For example, Welch and Goyal (2008) find that
forecasts based on the historical average (HA) are not consistently outperformed by
a wide range of predictor variables in univariate predictive regression, and that the
performance of predictive regressions changes over time. In some periods certain
variables seem to predict excess returns, while, in other periods, return prediction
models perform poorly.
The evidence on return predictability is not only sensitive to whether we look
at in-sample or out-of-sample performance, but also to the measure by which return forecasts are evaluated (see, e.g., Pesaran and Timmermann, 1995). Kandel and
Stambaugh (1996) use an economic measure based on the real-time performance of
an investor, which provides a more relevant performance measure than statistical
criteria. Cenesizoglu and Timmermann (2012) document that statistical measures
are not very informative about the performance with economic measures.
Several empirical studies have accounted for model uncertainty, rather than investigating return predictability for a single model specification. Cremers (2002) documents
that even when taking model uncertainty into account by Bayesian model averaging,
return prediction models are superior to unconditional forecasts. Using Bayesian
model averaging followed by optimal investment within the average model, Avramov
(2002) finds that the Bayesian investor successfully uses return prediction models
for portfolio choice. Wachter and Warusawitharana (2009) consider a Bayesian investor who puts low prior probability on return predictability. Even though this
investor is skeptical about return predictability, the predictive content in the data is
strong enough to influence investment decisions. Dangl and Halling (2012) consider
a Bayesian investor who averages over time-varying coefficients models and find
robust economic gains from return predictability both during recessions and expansions. Aiolfi and Favero (2005) document that asset allocation based on multiple
models, rather than a single model, can increase investors’ utility. Using forecast
combination, Rapach, Strauss, and Zhou (2010) find that the historical average can
be significantly outperformed, even when the individual forecasts perform poorly.
Overall, there is evidence that return prediction models can benefit investors,
even when the investment decision takes model uncertainty into account. Investment
strategies based on multiple models are able to increase the unconditional expected
utility, as measured by the average over many sequential investment decisions. Model
uncertainty induces uncertainty about the conditional expectation of utility. Previous
approaches do not investigate and measure this uncertainty. In particular, it is ignored
how an investment strategy would perform under other reasonable return models.
In this chapter, we use the model confidence set approach of Hansen et al. (2011)
to quantify the uncertainty stemming from potential return predictability. In particular, we construct confidence sets for the expected utility from investment based on
the models in the model confidence set. For this, we consider a small investor with
CRRA utility, who allocates wealth to stocks and the risk-free asset. The confidence
sets contain expected utility under the return models that are not rejected by the data
for a given confidence level. Return predictability implies that expected utilities, and
thus the confidence sets, depend on the predictor variables of the return prediction
models. First, we construct such confidence sets for a standard investor who does
not use a return prediction model, but relies on the historical average (HA) of returns
to estimate expected returns. Second, we consider investment strategies that are
designed to reduce uncertainty about expected utility for a given set of models. A
robust strategy is proposed for which the investor chooses stock investment such that
the minimal element of the confidence set is maximized. This corresponds to maximizing the lowest expected utility over all models in the confidence set. Additionally,
we consider two less conservative investment strategies; one based on averaging and
one based on the majority forecasts along the lines of Aiolfi and Favero (2005) that
also take into account the model uncertainty as measured by the model confidence
set.
The methodology described above is applied to monthly returns on the US stock
market for 1945:12–2002:12. The potential predictors are 14 variables from the popular data set of Welch and Goyal (2008). Each of the variables is used individually in a
simple regression model. Additionally we consider multivariate prediction strategies
based on principal components and on the complete subset regression of Elliott,
Gargano, and Timmermann (2013).
For this universe of 23 models, the model uncertainty is substantial: in the
beginning of the out-of-sample period 1966–2002, no model can be excluded from
the model confidence set in real-time for common confidence levels. The large model
uncertainty translates into a large economic uncertainty regarding expected utility.
As we move further on in the sample, models start getting excluded from the model
confidence set and the uncertainty regarding the expected utility is reduced. The magnitude of uncertainty, measured by width of confidence sets, changes significantly
with the predictor variables. The robust investment strategy leads to investments that
are much lower than for the HA model, in particular in the first half of our sample. All
of the proposed investment strategies lead to economic out-of-sample gains from
return prediction. There are gains from return prediction both during recessions and
expansions, but during recessions the gains are substantially higher.
Our findings add to the literature on model uncertainty in stock return prediction.
The economic significance of uncertainty about conditional expected utility under
different return predictions models has not been documented before. Our approach
reveals that model uncertainty translates into substantial uncertainty about expected
utility from investing in stocks, which varies over time and becomes less pronounced
later in the sample. We show that, for the universe of models considered in this
chapter, it is possible for investors to benefit from return predictability while reducing
the model uncertainty.
The remainder of the chapter is structured as follows: In section 2.2 we present
the investment problem, the econometric approach for constructing confidence sets,
and the confidence set based investment strategies. Section 2.3 discusses data and
models used in the empirical analysis. Section 2.4 presents the empirical results.
Concluding remarks are given in section 2.5.
2.2 Investment and Confidence Sets
This section sets up the investment problem, presents the econometric methodology
for model confidence set construction, and presents the investment strategies based
on the model confidence set.
2.2.1 The Investment Problem
We study the real-time investment decisions of a small investor in the spirit of Kandel
and Stambaugh (1996). The investor faces a one-period portfolio selection problem with a monthly horizon. The return on the risk-free asset is $r_{t+1}^{f}$ and the excess return on stocks, the risky asset, is $r_{t+1}$. Returns are continuously compounded. At time $t$ the risk-free rate $r_{t+1}^{f}$ is known, while the excess stock return $r_{t+1}$ is uncertain with $r_{t+1|t} \sim N(\mu_{t+1}, \sigma_{t+1}^{2})$. The investor has initial wealth of 1 to invest at every time $t$. At time $t$ the investor has to decide what share of wealth $\theta_t$ to invest in stocks. The remaining wealth $1 - \theta_t$ is held in the risk-free asset. The investor's final wealth at time $t+1$ is
$$
W_{t+1} = \theta_t \exp(r_{t+1}^{f} + r_{t+1}) + (1 - \theta_t)\exp(r_{t+1}^{f}). \tag{2.1}
$$
The investor's utility for wealth level $W$ is given by constant relative risk aversion (CRRA) utility,
$$
U_\gamma(W) = \frac{W^{1-\gamma}}{1-\gamma}, \tag{2.2}
$$
with constant relative risk aversion coefficient $\gamma > 1$. As a function of investment and excess return, the utility is
$$
U_\gamma(\theta_t, r_{t+1}) = \frac{1}{1-\gamma}\big(\theta_t \exp(r_{t+1}^{f} + r_{t+1}) + (1 - \theta_t)\exp(r_{t+1}^{f})\big)^{1-\gamma}. \tag{2.3}
$$
In order to calculate expected utilities, the investor needs a model for $r_{t+1}$. Given a model of conditional returns, the investor can maximize expected utility. The expected utility from investing $\theta_t$ in stocks is
$$
E_t\big[U_\gamma(\theta_t, r_{t+1})\big] = \frac{1}{1-\gamma} E_t\Big[\big(\theta_t \exp(r_{t+1}^{f} + r_{t+1}) + (1 - \theta_t)\exp(r_{t+1}^{f})\big)^{1-\gamma}\Big]. \tag{2.4}
$$
The investor maximizes his expected utility in period $t$ by investing
$$
\theta_t = \arg\max_{\theta \in [0,1.5]} E_t\big[U_\gamma(\theta, r_{t+1})\big], \tag{2.5}
$$
where we impose the standard restriction $\theta_t \in [0,1.5]$ of Campbell and Thompson (2008) throughout our analysis.
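Under an assumed conditional distribution $r_{t+1} \sim N(\mu, \sigma^2)$, the maximization in (2.5) can be approximated by simulating returns once and grid-searching over $\theta \in [0, 1.5]$. The sketch below is illustrative; the function name, grid resolution, and number of draws are not from the chapter.

```python
import numpy as np

def optimal_weight(mu, sigma, rf, gamma=5.0, n_draws=100_000, seed=0):
    """Grid-search the expected-CRRA-utility maximizer in (2.5), with the
    expectation in (2.4) approximated by simulation from an assumed
    conditional excess-return distribution N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    r = rng.normal(mu, sigma, n_draws)          # simulated excess returns
    grid = np.linspace(0.0, 1.5, 151)           # candidate stock weights
    best_theta, best_eu = 0.0, -np.inf
    for theta in grid:
        # final wealth, eq. (2.1), under each simulated return
        w = theta * np.exp(rf + r) + (1 - theta) * np.exp(rf)
        # CRRA utility, eq. (2.2); negative throughout since gamma > 1
        eu = np.mean(w ** (1 - gamma) / (1 - gamma))
        if eu > best_eu:
            best_theta, best_eu = theta, eu
    return best_theta, best_eu
```

Reusing the same simulated draws for every candidate $\theta$ keeps the objective smooth in $\theta$, so the grid maximizer is stable across runs with a fixed seed.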
The optimal portfolio in (2.5) requires a model for the conditional distribution of returns. Given the high uncertainty regarding model specification of return predictability, the investor might be unable to specify a unique conditional model for returns. The major challenge is to specify a model for the conditional mean. For some conditioning set $D_{i,t}$, this conditional mean is given by
$$
\mu_{i,t+1} = E_{i,t}[r_{t+1}] = E[r_{t+1} \mid D_{i,t}], \tag{2.6}
$$
where $i = 1, \ldots, I$ is one of $I$ possible conditioning sets. The uncertainty regarding the choice of $D_{i,t}$ entails uncertainty regarding the expected utility (2.4) and thus the optimal investment in (2.5).
Given the large uncertainty about the conditioning variables D i ,t , we want to
address the question whether return predictability can be exploited when the investor
is not willing to choose a particular set of conditioning variables, but rather maintains
multiple reasonable conditioning sets. We study investment strategies that work well
on different conditioning sets in the manner of Aiolfi and Favero (2005). The model
confidence set is used to identify reasonable conditioning sets, e.g., return models
that are not rejected by the data. Thus, we are interested in the conditional expected
utility for different conditioning variables, i.e.,
$$
E\big[U_\gamma(\theta, r_{t+1}) \mid D_{i,t}\big] \quad \text{for } i = 1, \ldots, I. \tag{2.7}
$$
In the following, we measure the variation of expected utility over different reasonable
conditioning sets, and investigate the performance of strategies that are based on the
conditional expected utility for return models.
2.2.2 Expected Utility Confidence Sets
Assume there is a set $M_t = \{1, \ldots, m\}$ of potential return prediction models, including the unconditional HA model. Every model specifies a conditional density for return $r_{t+1}$, and thus a conditional expectation. Let $E_{t,i}$ be the conditional expectation under model $i \in M_t$. For such a set of models $M_t$, we construct the model confidence set (MCS) at every time $t$. We denote the MCS by $M_t^*$, and by $m_t^* = \#M_t^*$ the number of models in the MCS at time $t$.
Loosely speaking, the MCS of Hansen et al. (2011) is a subset of the models,
Mt∗ ⊆ Mt , which contains the best model with 1−α confidence. The best model is the
one with highest expected utility in our setting, or equivalently the lowest expected
loss, where loss is defined as the negative of the utility. The MCS is constructed
using past observations of outcomes (returns) and past predictions (in our case,
optimal investments) from all models in Mt . The confidence level 1 − α controls
how strong the statistical evidence against a model needs to be in order to exclude it
from the MCS. The MCS approach captures statistical model uncertainty. The harder
it is to identify the best model, the more models are included in the MCS. If one
model performs significantly better than all its competitors, then it becomes the only
element of Mt∗ . Details on the implementation of the MCS approach are given in
section 2.3.3.
Based on the model confidence set $M_t^*$, we can construct a confidence set for expected utility from investment $\theta_t$ as
$$
C_t(\theta_t, M_t^*) = \{ E_{t,i}[U_\gamma(\theta_t, r_{t+1})] : i \in M_t^* \}. \tag{2.8}
$$
The confidence set C t (θt ,Mt∗ ) is a measure of uncertainty about expected utility for
investment θt . It contains the expected utility for all models that cannot be excluded
from the model confidence set. As the expected utility depends on investment θt , so
does the confidence set.
For easier interpretation, we transform expected utilities to the corresponding certainty equivalent returns. The certainty equivalent return (CER) under model $i$ at time $t$ for investment $\theta_t$ is
$$
CER_{i,t}(\theta_t) = \big( (1-\gamma) E_{t,i}[U_\gamma(\theta_t, r_{t+1})] \big)^{1/(1-\gamma)} - 1. \tag{2.9}
$$
Calculating the CER for all elements of C t (θt ,Mt∗ ), we get a time t confidence set for
the CER of investment θt . In the following we focus on the CER confidence range,
defined as the highest and the lowest CER of all models in the model confidence set.
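The CER transformation and the resulting confidence range are mechanical to compute once the model-specific expected utilities are available; a small sketch with hypothetical function names:

```python
def cer(expected_utility, gamma):
    """Certainty equivalent return implied by an expected CRRA utility,
    eq. (2.9): CER = ((1 - gamma) * E[U])^(1 / (1 - gamma)) - 1."""
    return ((1 - gamma) * expected_utility) ** (1.0 / (1 - gamma)) - 1.0

def cer_confidence_range(expected_utilities, gamma):
    """CER confidence range: the lowest and highest CER over the
    models that remain in the model confidence set."""
    cers = [cer(eu, gamma) for eu in expected_utilities]
    return min(cers), max(cers)
```

As a quick check, the expected utility of a sure gross return of 1.01 maps back to a CER of exactly 1%, for any $\gamma > 1$.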
The confidence set and the confidence range presented above are tools to quantify uncertainty regarding the expected utility associated with a certain investment strategy. They allow us to quantify uncertainty for a standard investor who uses the historical mean to guide his investment decision. Beyond this, we are interested in characterizing investments for which the uncertainty is lower, in a way that we shall discuss in the next section.
2.2.3 Robust Investment
The confidence sets for expected utility are a function of investment $\theta_t$. We use this to explore how the investor needs to set investment in order to reduce uncertainty about expected utility. Specifically, we construct a robust investment strategy that can provide non-negative certainty equivalent returns for all conditional expectations that are not rejected by the data, i.e., for all return prediction models in the model confidence set. This robust investment is constructed by maximizing the lowest expected utility over all models in the confidence set. In terms of the CER confidence set, the robust investment $\theta_t^R \in [0,1.5]$ is given by
$$
\theta_t^R = \arg\max_{\theta \in [0,1.5]} \min\big( C_t(\theta, M_t^*) \big). \tag{2.10}
$$
The robust investment leads to non-negative certainty equivalent returns under all models in the MCS.
The robust investment in equation (2.10) is a special version of maxmin investment. Maxmin investment rules have drawn some attention in the portfolio choice
literature (see, e.g., Epstein and Wang, 1994; Maenhout, 2004; Garlappi, Uppal, and
Wang, 2007). Maxmin rules reflect an extreme attitude toward model uncertainty,
i.e., they reflect model uncertainty aversion (see, e.g., Gilboa and Schmeidler, 1989;
Hansen and Sargent, 2001). Our robust investment strategy applies the maxmin rule
over the model confidence set, such that it can be interpreted as an investor who is
averse to uncertainty over the set of models that are not rejected by the data.
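Operationally, the maxmin rule in (2.10) reduces to a one-dimensional grid search once each model's expected utility can be evaluated as a function of $\theta$. A minimal sketch, assuming the caller supplies one expected-utility function per model in the confidence set (function and argument names are illustrative):

```python
import numpy as np

def robust_weight(eu_funcs, grid=None):
    """Maxmin investment, eq. (2.10): pick theta in [0, 1.5] that maximizes
    the minimum expected utility over the models in the confidence set.
    eu_funcs: list of callables, one per model, mapping theta -> E_{t,i}[U]."""
    if grid is None:
        grid = np.linspace(0.0, 1.5, 151)
    best_theta, best_worst = 0.0, -np.inf
    for theta in grid:
        worst = min(f(theta) for f in eu_funcs)  # worst-case model at this theta
        if worst > best_worst:
            best_theta, best_worst = theta, worst
    return best_theta
```

With two single-peaked utility profiles, the maxmin weight lands where the two profiles cross, between the two model-specific optima, which matches the intuition that the robust investor compromises toward the more pessimistic model.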
The robust investment (2.10) is very conservative. As a less conservative alternative that takes into account all models in the confidence set, we consider maximizing average utility over the elements of the confidence set, i.e.,
$$
\theta_t^{Avg} = \arg\max_{\theta \in [0,1.5]} \frac{1}{m_t^*} \sum_{i \in M_t^*} E_{t,i}[U_\gamma(\theta, r_{t+1})]. \tag{2.11}
$$
The averaging (avg) investment $\theta_t^{Avg}$ does not ensure any properties of the expected utility for the individual models in the confidence set. The number of models over which the average is taken depends on the model confidence set.
As a third investment strategy based on the model confidence set, we use a majority strategy similar to the investment strategies considered in Aiolfi and Favero (2005). Let $n_{h,t}$ be the number of models in the MCS for which the time $t$ optimal investment $\theta_{i,t}$ is higher than $\theta_t^{HA}$, and $n_{l,t}$ the number of models with lower investment than for the HA model. The majority investment is then given by
$$
\theta_t^M =
\begin{cases}
\frac{1}{n_{h,t}} \sum_{i \in M_t^*} \theta_{i,t}\, 1(\theta_{i,t} > \theta_t^{HA}) & \text{for } n_{h,t} > n_{l,t}, \\
\frac{1}{n_{l,t}} \sum_{i \in M_t^*} \theta_{i,t}\, 1(\theta_{i,t} < \theta_t^{HA}) & \text{for } n_{h,t} < n_{l,t}, \\
\theta_t^{HA} & \text{for } n_{h,t} = n_{l,t},
\end{cases} \tag{2.12}
$$
where $1(\cdot)$ is the indicator function. This investment rule only invests below or above the HA investment if the majority of models in the MCS imply such an investment decision. In contrast to the robust and averaging investment strategies, the majority investment is a function of the investment decisions of the models in the confidence set rather than their conditional distributions.
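The case distinction in (2.12) translates directly into code; a minimal sketch with an illustrative function name:

```python
def majority_weight(theta_models, theta_ha):
    """Majority investment rule, eq. (2.12): deviate from the
    historical-average weight only when a strict majority of the MCS
    models invests above (resp. below) it, and then average over
    exactly those models."""
    higher = [th for th in theta_models if th > theta_ha]
    lower = [th for th in theta_models if th < theta_ha]
    if len(higher) > len(lower):
        return sum(higher) / len(higher)
    if len(lower) > len(higher):
        return sum(lower) / len(lower)
    return theta_ha  # tie: fall back to the HA investment
```

Note that the averages are taken only over the models on the winning side of $\theta_t^{HA}$, so a single extreme model cannot move the weight unless it is part of a majority.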
2.3 Models and Data
This section discusses data, forecasting methods, estimation, and model confidence
set construction.
2.3.1 Variables and Data
A major contribution to the uncertainty in return predictability stems from uncertainty regarding which variables should be used as predictors. The benchmark model,
over which the investor wishes to improve expected utility, is the unconditional HA
model:
• Using no predictor variables gives the historical average (HA) model, which
specifies expected excess returns as a constant.
We consider predictors from the popular data set1 of Welch and Goyal (2008). Stock
returns are calculated from Center for Research in Security Prices (CRSP) data on the
S&P 500 index. We follow Welch and Goyal (2008) in the construction of the variables
from this data set. The 14 variables can be roughly grouped into four categories.
Predictor variables describing the state of the financial market are:
• long-term rate of return (ltr),
• the variance of stock returns computed from daily returns (vars),
• and the cross-section beta premium (csp) of Polk, Thompson, and Vuolteenaho
(2006).
The bond market and macroeconomic conditions are captured by the predictors:
• default yield spreads (dfy) measured by yield difference between AAA and
BAA-rated corporate bonds,
• term spread between long-term bond and Treasury bill yields (tms),
• default return spread (dfr) between long-term corporate bonds and long term
government bonds,
• long-term yields (lty),
• and inflation (inf ).
The valuation ratios considered are:
• dividend-price ratio (dp),
• dividend yield (dy),
1 The data set is available from Amit Goyal’s homepage http://www.hec.unil.ch/agoyal/.
• 10-year moving average of earnings-price ratio (ep10),
• and the book-to-market ratio (bm).
Finally, we consider:
• dividend-earnings ratio (d/e) and
• ratio of 12-month net equity issues over end-of-year market capitalization
(ntis),
as predictors related to corporate finance decisions.
2.3.2 Return Prediction Models and Forecasts
In this section we discuss the estimation and forecasting approach taken to
model the conditional distribution of returns.
For the conditional mean, linear models are considered. For each of the 14 variables, $x_v$, $v = 1, \ldots, 14$, we estimate a univariate regression model,
$$
r_{t+1} = c_v + \beta_v x_{v,t} + \epsilon_{v,t+1}, \tag{2.13}
$$
where $c_v$ is the intercept, $\beta_v$ is the slope parameter, and $\epsilon_{v,t+1}$ are zero-mean error terms. For the HA model, equation (2.13) only features a constant and no predictors. Using data up to time $t$, we get the least-squares estimates $\hat{c}_{v,t}$ and $\hat{\beta}_{v,t}$ using an expanding estimation window. From this estimated model, we get the conditional mean forecast,
$$
\hat{\mu}_{v,t+1} = \hat{c}_{v,t} + \hat{\beta}_{v,t}^{R} x_{v,t}, \tag{2.14}
$$
for the univariate predictive regression with variable $x_v$. For the individual predictors, we use the sign restrictions of Campbell and Thompson (2008) on the slope coefficients, such that forecasts are obtained for $\hat{\beta}_{v,t}^{R} = \max(0, \hat{\beta}_{v,t})$, where $\hat{\beta}_{v,t}$ is the unrestricted estimate.
In addition to the individual predictors, we consider return models that use the
information in all the predictors. In the context of return prediction, multivariate
least-squares regression is known to produce noisy estimates, and very poor out-of-sample performance (see, e.g., the kitchen sink model in Welch and Goyal, 2008). To
make such models produce accurate forecasts, we need to reduce the dimensionality
or the estimation variance.
We apply two multivariate approaches. First, we use a linear principal components model. The first one to three principal components, extracted from all 14 predictors, are used as variables in a regression model. The resulting models are called PC1, PC2, and PC3, and are estimated by least squares. Second, we use the complete subset regression of Elliott et al. (2013), which is a forecast combination approach that has been shown to be successful in stock return prediction. The complete subset regression combines all possible models that contain k of the 14 predictors. We consider
k = 1, . . . ,5 with corresponding models labelled CSR1, . . . , CSR5. For k = 1, the complete subset regression corresponds to the combination of univariate regressions
applied in Rapach et al. (2010).
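The combination step of the complete subset regression can be sketched compactly: estimate an OLS model for every subset of exactly k predictors and average the resulting forecasts with equal weights, as in Elliott et al. (2013). The sketch below uses illustrative names and is not the chapter's implementation.

```python
import numpy as np
from itertools import combinations

def csr_forecast(r_past, X_past, x_now, k):
    """Complete subset regression forecast: equal-weighted average of the
    OLS forecasts from all subsets of exactly k predictors.

    r_past : realized returns, shape (T,)
    X_past : predictor matrix aligned with r_past, shape (T, n_pred)
    x_now  : current predictor values, shape (n_pred,)
    """
    n_pred = X_past.shape[1]
    forecasts = []
    for subset in combinations(range(n_pred), k):
        Z = np.column_stack([np.ones(len(r_past)), X_past[:, subset]])
        coef = np.linalg.lstsq(Z, r_past, rcond=None)[0]
        forecasts.append(coef[0] + x_now[list(subset)] @ coef[1:])
    return float(np.mean(forecasts))
```

For k = 1 this reduces to the combination of univariate regressions of Rapach et al. (2010); the number of fitted models grows as 14 choose k, so k = 5 already combines 2002 regressions.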
The univariate regressions, together with the principal components and complete
subset regression models, give us a total of 23 candidate models. For all models, we
impose a non-negative equity premium forecast, such that the final forecast for the
conditional mean is max(0, µ̂_{i,t+1}), where µ̂_{i,t+1} is the forecast from model i.
The conditional variances σ²_{t+1} are computed from model-based residuals using a ten-year rolling window of monthly returns. We are mainly concerned with model uncertainty, i.e., which variables to condition on, but it is also relevant to account for parameter estimation uncertainty. We account for estimation uncertainty by adjusting the conditional variance for uncertainty about the coefficients of the conditional mean model. The adjusted conditional variance is given by

Var(r_{t+1} | D_{i,t}) = E[Var(r_{t+1} | c_i, β_i, D_{i,t}) | D_{i,t}] + Var(E[r_{t+1} | c_i, β_i, D_{i,t}] | D_{i,t}),   (2.15)
where the first term is estimated from the model-based residuals, and the second term captures the estimation variance; see Pástor and Stambaugh (2012) for a detailed discussion. An estimate of this estimation variance is obtained from the asymptotic covariance matrix of the parameters in each of the considered conditional mean models.
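A sketch of this variance adjustment for a univariate mean model, using the standard asymptotic OLS covariance: the first term is the residual variance over a rolling window, the second is z'Cov(θ̂)z with z = (1, x_t). Names and the window handling are illustrative assumptions, not the chapter's exact code.

```python
import numpy as np

def adjusted_variance(r_past, x_past, x_now, window=120):
    """Forecast variance following the decomposition in (2.15):
    residual variance plus the estimation variance of the mean forecast."""
    X = np.column_stack([np.ones_like(x_past), x_past])
    theta, *_ = np.linalg.lstsq(X, r_past, rcond=None)
    resid = r_past - X @ theta
    # First term: model-based residual variance over a rolling window
    # (ten years of monthly data when window=120).
    sigma2 = np.var(resid[-window:], ddof=X.shape[1])
    # Second term: z' Cov(theta_hat) z, with the asymptotic OLS covariance.
    cov_theta = sigma2 * np.linalg.inv(X.T @ X)
    z = np.array([1.0, x_now])
    return sigma2 + z @ cov_theta @ z
```

The estimation-variance term grows when the current predictor value lies far from its historical mean, so forecasts made in unusual states are treated as less precise.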
2.3.3 Model Confidence Set Construction
Next, we discuss the exact implementation of the model confidence set (MCS) procedure of Hansen et al. (2011) used in this chapter. The MCS is a subset of M_t that contains the best model with confidence level 1 − α. The best model is the one with the lowest expected loss for a given loss function. The MCS at time t is denoted by M*_t, suppressing the dependence on the confidence level 1 − α. To construct M*_t, a sample of E losses up to time t for each model in M_t is needed. The MCS algorithm uses sequential testing of equal predictive ability. At every step of the sequential testing, critical values are obtained using a moving-block bootstrap, and one model is eliminated from the confidence set until the null hypothesis of equal predictive ability (EPA) is not rejected. The tests for EPA are based on the max statistic, and the max elimination rule is used in each elimination step. This statistic and elimination rule are based on the maximum average t-statistic, where for each model the average is taken over the t-statistics from all pairwise loss differentials; see Hansen et al. (2011) for details.
for details. The critical values are obtained from 1999 bootstrap replications using
a block bootstrap.2 Based on this sequential testing, a p-value for each model is obtained. These p-values tell us whether a certain model is a member of the MCS for a given confidence level.

2 The np package (see Hayfield and Racine, 2008) for R (see R Core Team, 2013) is used to find the optimal block length for the block bootstrap using the methods of Politis and White (2004) and Patton et al. (2009).
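A stripped-down sketch of the sequential elimination loop follows. It is deliberately simplified relative to Hansen et al. (2011): relative performance is measured against the set-average loss rather than through all pairwise t-statistics, the block length is fixed instead of chosen by the Politis–White method, and far fewer than 1999 bootstrap replications are used. All names are illustrative.

```python
import numpy as np

def mcs(losses, alpha=0.10, n_boot=499, block=12, seed=0):
    """Simplified MCS with a max-type statistic and max elimination rule.

    losses : (E, m) array of per-period losses for m models.
    Returns the surviving model indices and per-model p-values.
    """
    rng = np.random.default_rng(seed)
    E, m = losses.shape
    included = list(range(m))
    pvals = np.ones(m)
    p_running = 0.0
    while len(included) > 1:
        L = losses[:, included]
        d = L.mean(axis=0) - L.mean(axis=0).mean()  # loss relative to set average
        # Moving-block bootstrap replications of d.
        n_blocks = -(-E // block)
        d_boot = np.empty((n_boot, len(included)))
        for b in range(n_boot):
            starts = rng.integers(0, E - block + 1, size=n_blocks)
            idx = np.concatenate([np.arange(s, s + block) for s in starts])[:E]
            Lb = L[idx]
            d_boot[b] = Lb.mean(axis=0) - Lb.mean(axis=0).mean()
        se = d_boot.std(axis=0)
        t = d / se
        # Bootstrap distribution of the max statistic under EPA.
        t_boot_max = ((d_boot - d) / se).max(axis=1)
        p = max(p_running, float(np.mean(t_boot_max > t.max())))
        worst = int(np.argmax(t))               # max elimination rule
        pvals[included[worst]] = p
        if p >= alpha:                          # EPA not rejected: stop
            break
        p_running = p
        included.pop(worst)
    return included, pvals
```

Enforcing monotone p-values across elimination steps (the running maximum) mirrors the paper's definition of MCS p-values: a model is in the MCS at level 1 − α exactly when its p-value exceeds α.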
The investor's relevant loss function, here taken as the negative of his realized CRRA utility, is used to obtain the sample of E losses for each model. Results from forecast comparisons for models of financial returns and volatility depend on the loss function and can, e.g., differ between utility-based and statistical loss functions (see, e.g., West, Edison, and Cho, 1993; González-Rivera, Lee, and Mishra, 2004; Skouras, 2007; Cenesizoglu and Timmermann, 2012). The investor's realized losses
are based on forecasts, and thus cannot be computed from the beginning of the
available sample. We therefore reserve the first M observations for initial parameter
estimation, such that when we have a sample of N observations at time t , the MCS is
based on E = N − M losses:
M
E
z
}|
{z
}|
{
t − N + 1, . . . ,t − E ,t − E + 1, . . . ,t − 1,t .
|
{z
}
N
Later in the sample, more data are available to construct the MCS. A larger sample
will give the MCS more power to exclude models. If, however, the performance of
models varies over time, having a longer history of past losses is not necessarily more
informative regarding expected performance.
2.4 Empirical Results
Our sample spans the period 1946:01 to 2002:12 (N = 684). All variables are at monthly frequency. The first 120 observations are reserved for initial estimation (M = 120). Another 120 observations are used for construction of the first model confidence
set. The model confidence sets are constructed using an expanding window. Thus,
the out-of-sample period, for which we observe investments and confidence sets, is
1966:01–2002:12 (444 observations). All results in this section are for an investor with
risk aversion parameter γ = 5.
Return predictability appears to interact strongly with the business cycle. There
is evidence that expected excess returns are higher during recessions (see, e.g., Fama
and French, 1989; Henkel, Martin, and Nardari, 2011). We therefore identify NBER
recessions in the results.
Before looking at the confidence sets, we evaluate the performance of the 23 return prediction models in our model universe. For this purpose, the out-of-sample R² (OOS-R²), Sharpe ratio (SR), and certainty equivalent return relative to the historical average investment (ΔCER_i) are computed for each return prediction model. For model i, the OOS-R² is defined as:

R²_i = 1 − [ (1/S) Σ_{t=1}^{S} (µ̂_{i,t} − r_t)² ] / [ (1/S) Σ_{t=1}^{S} (µ̂^HA_t − r_t)² ],   (2.16)
where µ̂^HA_t is the historical average and S = 564 is the number of out-of-sample observations. The OOS-R² measures the statistical accuracy of the conditional mean forecast. The Sharpe ratios are calculated from the realized excess returns of the optimal investment for each model. The ΔCER_i is given by:
ΔCER_i = [ (1 − γ) (1/S) Σ_{t=1}^{S} U_γ(θ_{i,t}, r_{t+1}) ]^{1/(1−γ)} − [ (1 − γ) (1/S) Σ_{t=1}^{S} U_γ(θ^HA_t, r_{t+1}) ]^{1/(1−γ)},   (2.17)
where θ^HA_t is the time-t investment based on the historical average and θ_{i,t} is the investment for model i. Return prediction models that lead to economic gains for investors have a positive ΔCER_i.
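The two evaluation measures in (2.16) and (2.17) can be computed directly from the realized forecasts and utilities; a minimal sketch with illustrative names:

```python
import numpy as np

def oos_r2(mu_model, mu_ha, r):
    """Out-of-sample R^2 of (2.16): one minus the ratio of the model's
    mean squared forecast error to that of the historical-average benchmark."""
    return 1.0 - np.mean((mu_model - r) ** 2) / np.mean((mu_ha - r) ** 2)

def delta_cer(u_model, u_ha, gamma):
    """Certainty-equivalent return difference of (2.17), computed from the
    realized CRRA utilities u_t = U_gamma(theta_t, r_{t+1}) of each strategy."""
    ce = lambda u: ((1.0 - gamma) * np.mean(u)) ** (1.0 / (1.0 - gamma))
    return ce(u_model) - ce(u_ha)
```

A model that tracks realized returns better than the historical average has a positive OOS-R², and a strategy whose certainty equivalent exceeds that of the HA investment has a positive ΔCER, matching the sign conventions used in Table 2.1.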
Table 2.1 summarizes the forecasting performance for the individual predictors and the multivariate forecasting strategies. We see that many individual predictors perform poorly out-of-sample in statistical terms, but most have higher certainty equivalent returns than the HA model. In terms of certainty equivalent returns, the investments based on the complete subset regressions perform best. These models also achieve the highest out-of-sample R². The HA investment is rejected by the model confidence set, so we conclude that return predictability leads to improvements in the unconditional expected utility for investors. The variables csp, tms, and infl are the only univariate models that are in the model confidence set at the 90% confidence level. Five models have a negative out-of-sample R² but a higher certainty equivalent return than the historical average model. Such a result is not unusual, as Cenesizoglu and Timmermann (2012) find that statistical and economic measures of return predictability are only very weakly correlated.
2.4.1 Model Confidence Sets and Investment
First we look at the evidence of real-time model uncertainty by computing series
of model confidence sets for different confidence levels 1 − α. For every month in
the out-of-sample period, Figures 2.1 to 2.3 show which models are included in
the MCS. For α = 0.05, and thus a confidence level of 0.95, we find that the model
confidence sets change substantially over the sample 1966:01–2002:12. Up to the
mid-1970s, all 23 models are included in the model confidence set every month.
Subsequently, some models, notably the HA model, are excluded from the MCS. In
the early 1980s, more than half of all models are excluded. After 1995, a number of
models make a reappearance in the MCS. The time variation in the MCS can either be
Table 2.1. Out-of-sample performance of the historical average (HA) model, the 14 predictor variables, the complete subset regressions (CSR1, …, CSR5), and the principal component models (PC1, PC2, and PC3). Columns 2 to 5 show the mean, maximum, minimum, and variance of the conditional mean forecasts. Out-of-sample R² (OOS-R²) and the certainty equivalent return difference to the HA investment (ΔCER) are reported in percentage points for an investor with risk aversion γ = 5. Sharpe ratios (SR) are calculated based on excess returns. Model confidence set p-values (MCS-p) are based on the CRRA loss function in (2.2) with γ = 5. Sample period 1956:01–2002:12 (564 observations).

Model   mean    max     min     var    OOS-R²   ΔCER      SR   MCS-p
HA      0.655   1.193   0.416   0.037   0.000      —    0.087   0.006
dp      0.279   1.215   0.000   0.121   0.700   0.069   0.077   0.030
dy      0.278   1.325   0.000   0.126   0.739   0.103   0.084   0.030
ep10    0.568   2.400   0.014   0.202  -0.824   0.072   0.087   0.030
bm      0.412   1.544   0.141   0.053   0.258   0.108   0.085   0.030
ltr     0.788   5.717   0.000   0.525  -0.724   0.167   0.131   0.064
svar    0.693   1.510   0.424   0.027  -0.139   0.031   0.099   0.030
csp     0.290   1.162   0.000   0.079   0.716   0.168   0.100   0.182
ntis    0.827   2.223   0.000   0.221  -0.062  -0.003   0.113   0.030
de      1.062   2.143   0.464   0.107  -2.048  -0.254   0.098   0.030
dfy     0.796   2.420   0.293   0.138  -0.628  -0.022   0.097   0.030
tms     0.854   2.751   0.000   0.391  -0.112   0.127   0.141   0.275
infl    0.652   1.837   0.000   0.160   1.261   0.239   0.146   0.275
lty     0.283   1.006   0.000   0.070   0.695   0.162   0.100   0.030
dfr     0.651   2.683   0.000   0.138  -0.721   0.000   0.086   0.030
CSR1    0.527   1.277   0.018   0.034   1.121   0.208   0.127   0.030
CSR2    0.456   1.550   0.000   0.059   1.847   0.313   0.155   0.363
CSR3    0.421   1.797   0.000   0.102   2.223   0.362   0.168   0.804
CSR4    0.412   2.022   0.000   0.157   2.439   0.373   0.171   1.000
CSR5    0.418   2.455   0.000   0.224   2.494   0.370   0.169   0.834
PC1     0.270   0.759   0.000   0.032   0.700   0.167   0.100   0.030
PC2     0.304   1.442   0.000   0.099   0.284   0.124   0.082   0.030
PC3     0.513   4.170   0.000   0.446  -1.152   0.142   0.106   0.030
due to increased power as the sample size grows, or be caused by changes in the relative performance of the models. The fact that previously excluded models later reenter the MCS indicates time variation in the predictive content of the variables. Alternatively, higher variance in the loss series might cause the MCS to retain more models.
For the lower confidence level of 0.90 in Figure 2.2, the MCS contains fewer
models by construction. The variation over time is very similar to the 0.95 confidence
level, and the MCS contains a similar number of models most of the time. For a 0.99
confidence level in Figure 2.3, the MCS retains a much larger number of models in all months. Only the HA model is consistently excluded after the early 1980s.

Figure 2.1. Inclusion in the model confidence set (MCS) for α = 0.05. A dot indicates that the model is included in the real-time MCS for this month. Loss is based on a risk aversion of γ = 5. Dashed red lines indicate start and end dates of NBER recessions. Sample period is 1966:01–2002:12.

The model confidence sets suggest that real-time model uncertainty is high and that investors cannot identify the single best model based on past performance, as the MCS always contains more than one model. However, the HA model is excluded from the MCS in the latter part of the sample for all three values of α. Thus, while there is statistical uncertainty about the best model, the evidence for return predictability is rather strong.
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
● ●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
1970
1980
1990
2000
Figure 2.2. Inclusion in model confidence set (MCS) for α = 0.10. A dot indicates that the model
is included in the real-time MCS for this month. Loss is based on risk aversion of γ = 5. Dashed
red lines indicate start- and end-dates of NBER recessions. Sample period is 1966:1–2002:12.
[Figure 2.3 plot omitted: model-inclusion dots for rows PC3, PC2, PC1, CSR5, CSR4, CSR3, CSR2, CSR1, dfr, lty, infl, tms, dfy, de, ntis, csp, svar, ltr, bm, ep10, dy, dp, HA over 1970–2000]
Figure 2.3. Inclusion in model confidence set (MCS) for α = 0.01. A dot indicates that the model
is included in the real-time MCS for this month. Loss is based on risk aversion of γ = 5. Dashed
red lines indicate start- and end-dates of NBER recessions. Sample period is 1966:1–2002:12.
48 CHAPTER 2. RETURN PREDICTABILITY, MODEL UNCERTAINTY, AND ROBUST INVESTMENT
Figure 2.4 shows the series of investments in stocks for the HA model and the
three investment strategies based on the model confidence set. The HA investment
series is the most stable. In the beginning of the sample, the upper limit on investment
is binding. The robust investment is substantially lower than the HA investment
and flat throughout most of the first half of the sample. The averaging and majority
investment rules always allocate a significant share of wealth to stocks, and both series
follow similar paths.
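The three rules can be sketched as simple combinations of the model-implied stock weights in the current model confidence set. This is a hypothetical illustration, not the chapter's exact definitions; in particular, using the minimum for the robust rule and the median as a stand-in for the majority rule are assumptions:

```python
import numpy as np

def mcs_investment_rules(theta_models):
    """Combine the stock weights implied by the models currently in the MCS.

    theta_models: optimal stock weight of each model in the confidence set.
    The three rules below are illustrative stand-ins for the chapter's
    robust, averaging, and majority investment strategies.
    """
    theta = np.asarray(theta_models, dtype=float)
    robust = theta.min()          # worst-case (most conservative) weight in the set
    averaging = theta.mean()      # equal-weighted average across models
    majority = np.median(theta)   # weight supported by at least half of the models
    return robust, averaging, majority

# Four hypothetical models' optimal stock weights:
robust, avg, maj = mcs_investment_rules([0.2, 0.6, 0.8, 0.9])
```

Because the robust rule tracks the most conservative model in the set, it lies below the other two whenever the MCS contains a defensive model, which matches the flat, low robust series described above.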
Figure 2.5 shows confidence ranges for certainty equivalent returns that are constructed from model confidence sets and investment series, i.e., the CER confidence
range is spanned by the lowest and highest certainty equivalent returns from models
in the model confidence sets. For HA investment, the width of the confidence range
varies over time. In the beginning of the sample, the lower bound is below the risk-free rate. In the second half of the sample, the confidence ranges become narrower,
and the lowest CER occasionally lies above the risk-free rate. The CER confidence
ranges for the robust investment look quite different, reflecting the behavior of the
robust investment series. In the beginning of the sample, the robust investment rule
allocates a very small share of wealth to stocks, such that the CER confidence ranges
are narrow. When the model uncertainty is reduced and the robust investment rule
leads to higher stock holdings, the width of the CER confidence range increases, but
by construction the lowest element is never below the risk-free rate. The evolution
of the CER confidence ranges for the averaging and majority investment rules is
qualitatively similar to that for HA investment, and both become narrower in the
second half of the sample.
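A minimal sketch of how such a CER confidence range can be computed: evaluate the CRRA certainty equivalent return of each model's realized portfolio returns and take the lowest and highest values over the models in the MCS. The function names and the inputs are assumptions for illustration:

```python
import numpy as np

def crra_cer(portfolio_returns, gamma=5.0):
    """Certainty equivalent return under CRRA utility u(W) = W**(1-gamma)/(1-gamma)."""
    wealth = 1.0 + np.asarray(portfolio_returns, dtype=float)
    mean_transformed_wealth = np.mean(wealth ** (1.0 - gamma))
    return mean_transformed_wealth ** (1.0 / (1.0 - gamma)) - 1.0

def cer_confidence_range(returns_by_model, gamma=5.0):
    """Range spanned by the lowest and highest CER over the models in the MCS."""
    cers = [crra_cer(r, gamma) for r in returns_by_model]
    return min(cers), max(cers)

# Two hypothetical models' realized monthly portfolio returns:
low, high = cer_confidence_range([[0.01] * 12, [0.03] * 12])
```

A constant return series has a CER equal to that constant, which makes the helper easy to sanity-check.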
Table 2.2 presents the out-of-sample performance, as measured by Sharpe ratio
and certainty equivalent return, for the investment strategies based on the model
confidence sets and the multivariate models. Over the full sample, all investment
strategies outperform the historical average model both in terms of Sharpe ratio and
CER. Thus, we find strong evidence that return predictability benefits investors even
after taking model uncertainty into account. Among our MCS-based strategies, the
majority investment performs best for all three choices of α. The CER improvements
in investment performance are present both in recessions and outside of recessions,
but are substantially larger in magnitude during recessions.
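The two performance measures can be sketched as follows. The CRRA certainty equivalent mirrors the loss function used throughout the chapter, while the return series here are made up purely for illustration:

```python
import numpy as np

def sharpe_ratio(excess_returns):
    """Monthly Sharpe ratio of a strategy's excess returns (not annualized)."""
    r = np.asarray(excess_returns, dtype=float)
    return r.mean() / r.std(ddof=1)

def delta_cer(strategy_returns, ha_returns, gamma=5.0):
    """CER of a strategy minus the CER of the HA benchmark, under CRRA utility."""
    def cer(returns):
        wealth = 1.0 + np.asarray(returns, dtype=float)
        return np.mean(wealth ** (1.0 - gamma)) ** (1.0 / (1.0 - gamma)) - 1.0
    return cer(strategy_returns) - cer(ha_returns)

# Illustrative (made-up) monthly return series:
sr = sharpe_ratio([0.01, -0.02, 0.03, 0.015, -0.005])
gain = delta_cer([0.02, 0.01, 0.03], [0.01, 0.00, 0.02])
```

A positive `gain` corresponds to the positive ∆CER entries reported for the MCS-based strategies in Table 2.2.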
To test whether the investment strategies significantly outperform the HA model,
we perform pair-wise Diebold and Mariano (1995) tests using the CRRA utility as
loss function. The Bonferroni correction is applied to conservatively account for
distortions from multiple testing. The averaging and majority investment rules significantly outperform the HA model, while the gains for the robust investment rule
are not significant at the 10% level. For the two subsamples, we only find statistically
significant outperformance of the averaging and majority investment rules during
recessions.
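The testing procedure can be sketched as below. This is a simplified one-step-ahead Diebold-Mariano statistic without a HAC variance correction, so it only approximates the test actually reported; in the chapter's setting the loss series would be the negative realized CRRA utilities:

```python
import math
import numpy as np

def diebold_mariano(loss_a, loss_b):
    """Simplified DM test for equal expected loss (no serial-correlation correction)."""
    d = np.asarray(loss_a, dtype=float) - np.asarray(loss_b, dtype=float)
    t_stat = d.mean() / math.sqrt(d.var(ddof=1) / len(d))
    # Two-sided p-value from the standard normal limit distribution.
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t_stat) / math.sqrt(2.0))))
    return t_stat, p_value

def bonferroni_reject(p_values, alpha=0.10):
    """Reject H0 only where the p-value clears the Bonferroni-adjusted level alpha/m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

t, p = diebold_mariano([1.0, 2.0, 1.5, 2.5], [0.5, 1.0, 1.0, 1.0])
rejections = bonferroni_reject([0.001, 0.5, 0.04], alpha=0.10)
```

The Bonferroni step divides the level by the number of comparisons, so a raw p-value of 0.04 no longer rejects at the 10% level when three strategies are tested jointly.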
The empirical findings can be summarized as follows. The model confidence sets
2.4. EMPIRICAL RESULTS 49
[Figure 2.4 panels omitted: (a) HA investment; (b) MCS robust investment; (c) MCS averaging investment; (d) MCS majority investment]
Figure 2.4. Optimal investments in stocks, θt, for the historical average (HA) model, robust investment, averaging investment, and majority investment. Risk aversion γ = 5 and
model confidence sets with α = 0.05. Dashed red lines indicate start- and end-dates of NBER
recessions.
Table 2.2. Sharpe ratios (SR) and certainty equivalent returns relative to the historical average
investment (∆CER). p-value of the Diebold-Mariano test (p) for expected utility equal to HA
investment. Rejections of the null hypothesis after Bonferroni correction are indicated as: * for
the 10% level, ** for the 5% level, and *** for the 1% level. Sample period 1966:01–2002:12 with
a total of 444 observations, of which 65 are during recessions.

                     Full Sample                  Recession                    No Recession
Model      α      SR     ∆CER   p            SR     ∆CER   p            SR     ∆CER   p
HA         -    0.048     -      -        −0.097     -      -         0.080     -      -
Robust   0.01   0.083   0.212  0.216      −0.024   0.917  0.085       0.107   0.089  0.617
Robust   0.05   0.076   0.232  0.156       0.072   0.988  0.057       0.077   0.100  0.554
Robust   0.10   0.060   0.213  0.187       0.036   0.946  0.064       0.065   0.086  0.609
Avg      0.01   0.115   0.291  0.000***    0.064   0.804  0.002**     0.128   0.202  0.011
Avg      0.05   0.122   0.298  0.000***    0.101   0.939  0.001**     0.127   0.186  0.031
Avg      0.10   0.119   0.291  0.001**     0.121   1.031  0.001**     0.120   0.162  0.075
Majority 0.01   0.133   0.364  0.002**     0.155   1.205  0.002**     0.129   0.219  0.063
Majority 0.05   0.135   0.369  0.002**     0.188   1.340  0.001**     0.124   0.202  0.094
Majority 0.10   0.136   0.371  0.002**     0.214   1.446  0.000***    0.119   0.187  0.129
PC1        -    0.073   0.228  0.047      −0.013   0.846  0.043       0.087   0.120  0.288
PC2        -    0.058   0.193  0.155      −0.053   0.749  0.076       0.076   0.096  0.498
PC3        -    0.087   0.190  0.185       0.061   0.827  0.068       0.092   0.079  0.596
CSR1       -    0.108   0.271  0.000***    0.057   0.854  0.005*      0.118   0.169  0.013
CSR2       -    0.147   0.402  0.000***    0.190   1.328  0.002**     0.139   0.242  0.023
CSR3       -    0.165   0.463  0.001***    0.244   1.553  0.002**     0.148   0.275  0.042
CSR4       -    0.168   0.478  0.002**     0.252   1.609  0.002**     0.150   0.284  0.064
CSR5       -    0.168   0.482  0.002**     0.254   1.645  0.002**     0.149   0.283  0.080
vary substantially over time and always contain multiple models. The uncertainty,
measured by width of the confidence range, is substantial in economic terms and
varies over time. Confidence ranges for expected utility for HA investment frequently
contain certainty equivalent returns below the risk-free rate. It is, however, possible to
devise investment strategies that reduce model uncertainty and still lead to economic
gains from return predictability.
[Figure 2.5 panels omitted: (a) CER confidence range for HA investment; (b) CER confidence range for robust investment; (c) CER confidence range for averaging investment; (d) CER confidence range for majority investment. Legend: Lowest CER, Highest CER, Risk-free Rate]
Figure 2.5. Certainty equivalent returns (CER) confidence ranges for α = 0.05. CER and risk-free
rate in percentage returns.
2.4.2 Impact of Risk Aversion
To assess the sensitivity to changes in the specification of the investor, we repeat the
empirical analysis for different values of the risk aversion parameter γ. Changing γ
affects the optimal investment, realized excess returns, and thus the realized utilities.
As a consequence, the MCS and CER confidence ranges can be affected through
changes in the loss series.
In Figure 2.6, the results for a less risk-averse investor with γ = 2 are presented. This
change of risk aversion causes dramatic changes in the results. As we see from Panel
(a), the MCS includes all the models in every month in our sample. Panel (b) gives
insight as to why this happens. The lower risk aversion increases optimal investments
to the extent that the upper bound of holding 150% of wealth in stocks is binding
throughout large parts of the sample. This is true also for the other models, and
therefore the investment decision is identical for many models for most observations.
Identical investment leads to identical loss, such that the model confidence sets
cannot distinguish between the models. The increased statistical model uncertainty
leads to CER confidence ranges that always include a CER below the risk-free rate in
Panel (c).
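The mechanism can be illustrated with the standard mean-variance approximation θ* = μ/(γσ²), clipped to the chapter's investment constraints. This is an approximation of the CRRA optimization, and the premium and variance numbers below are illustrative rather than estimates from the data:

```python
def optimal_stock_weight(mu, sigma2, gamma, lower=0.0, upper=1.5):
    """Mean-variance approximation of the optimal stock weight,
    clipped to the investment constraints [lower, upper]."""
    theta = mu / (gamma * sigma2)
    return min(max(theta, lower), upper)

# Illustrative monthly equity premium 0.7% and return variance 0.002:
w_low_ra = optimal_stock_weight(0.007, 0.002, gamma=2)    # cap binds at 1.5
w_high_ra = optimal_stock_weight(0.007, 0.002, gamma=10)  # interior solution
```

When the 150% cap binds for many models at once, their investments, and hence their realized losses, coincide, which is exactly why the MCS cannot separate the models for γ = 2.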
For a higher risk aversion of γ = 10, shown in Figure 2.7, the model confidence
sets remain similar to the ones for γ = 5. In Panel (b) we see that the investment
constraints are never binding. The dynamics of the CER confidence ranges for
HA investment do not change qualitatively.
Table 2.3 summarizes investment performance under the two alternative risk
aversion coefficients γ = 2 and γ = 10. The increased uncertainty for γ = 2 has a
strong effect on the performance of the robust investment, while the remaining
investment strategies are not affected so dramatically. Because of the high model
uncertainty for γ = 2, the robust investment strategy is not able to produce significant
economic gains from return prediction. For γ = 10, where the model confidence sets
remain similar to γ = 5, economic gains are observed for all the investment strategies
based on the MCS. Thus, being able to narrow down the set of models using the
model confidence set appears to be a crucial ingredient for the success of the robust
investment strategy.
From changing the risk aversion, we have learned that the results change if constraints are binding frequently, because frequent binding makes it impossible to distinguish different
return prediction models in terms of economic investment performance. When no
such effects are present, which is the case when we increase risk aversion to 10, the
findings remain qualitatively unchanged.
2.4. Empirical Results 53
[Figure 2.6: plot data not recoverable from text extraction. Panels: (a) model inclusion for HA, dp, dy, ep10, bm, ltr, svar, csp, ntis, de, dfy, tms, infl, lty, dfr, CSR1–CSR5, PC1–PC3 (x-axis 1970–2000); (b) HA investments (y-axis 1.2–1.5); (c) CER confidence ranges showing lowest CER, highest CER, and risk-free rate (y-axis 0.0–7.5).]
Figure 2.6. Results for lower risk aversion, γ = 2. The panels show (a) inclusion in model
confidence set for α = 0.05, (b) investment based on historical average (HA), and (c) lowest
and highest element of CER confidence set (CER confidence ranges) along with risk-free rate
in percentage returns. Dashed red lines indicate start- and end-dates of NBER recessions.
[Figure 2.7, panel (a) model inclusion: plot data not recoverable from text extraction. Models shown: HA, dp, dy, ep10, bm, ltr, svar, csp, ntis, de, dfy, tms, infl, lty, dfr, CSR1–CSR5, PC1–PC3.]
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
1970
1980
1990
2000
(b) HA investments:
0.8
0.6
0.4
1970
1980
1990
2000
(c) CER confidence ranges:
2
1
0
1970
1980
Lowest CER
1990
Highest CER
2000
Risk free Rate
Figure 2.7. Results for higher risk aversion, γ = 10. The panels show (a) inclusion in model
confidence set for α = 0.05, (b) investment based on historical average (HA), and (c) lowest
and highest element of CER confidence set (CER confidence ranges) along with risk-free rate
in percentage returns. Dashed red lines indicate start- and end-dates of NBER recessions.
Table 2.3. Sharpe ratios (SR) and certainty equivalent return relative to the historical average investment (∆CER) for different investment strategies for γ = 2 and γ = 10. Robust, Avg, and Majority investment rules are based on the MCS with α = 0.05. The p-value of the Diebold–Mariano test (p) refers to the null hypothesis that expected loss is equal to the expected loss from the HA investment. Rejections of the null hypothesis after Bonferroni correction are indicated as: * for the 10% level, ** for the 5% level, and *** for the 1% level. The Bonferroni correction is based on the number of tests conducted in each panel for the same subsample. Sample period 1966:01–2002:12. Based on 444 observations, of which 65 are during recessions.
                 Full Sample               Recession                  No Recession
            SR     ∆CER    p          SR     ∆CER    p          SR     ∆CER    p
(a) γ = 2
HA         0.080                    −0.046                     0.111
Robust     0.085   0.006  0.981     −0.024   1.057  0.238      0.110  −0.176  0.530
Avg        0.090   0.079  0.004*    −0.009   0.323  0.004**    0.116   0.037  0.158
Majority   0.120   0.272  0.045      0.086   1.295  0.014      0.129   0.094  0.467
CSR1       0.096   0.117  0.076      0.016   0.685  0.085      0.113   0.017  0.608
CSR2       0.121   0.283  0.019      0.120   1.498  0.014      0.121   0.073  0.442
CSR3       0.137   0.358  0.031      0.159   1.730  0.011      0.132   0.121  0.434
CSR4       0.145   0.386  0.047      0.163   1.754  0.013      0.141   0.149  0.439
CSR5       0.148   0.394  0.056      0.174   1.816  0.012      0.143   0.149  0.474
PC1        0.080   0.075  0.648     −0.013   0.881  0.188      0.096  −0.065  0.676
PC2        0.068   0.036  0.863     −0.074   0.576  0.390      0.098  −0.059  0.782
PC3        0.099   0.167  0.400     −0.011   0.793  0.246      0.122   0.058  0.773
(b) γ = 10
HA         0.045                    −0.097                     0.077
Robust     0.065   0.111  0.184      0.017   0.466  0.073      0.074   0.049  0.569
Avg        0.114   0.146  0.001***   0.078   0.429  0.002**    0.123   0.097  0.026
Majority   0.143   0.202  0.002**    0.197   0.692  0.001**    0.132   0.118  0.071
CSR1       0.108   0.140  0.000***   0.057   0.425  0.005*     0.118   0.091  0.009
CSR2       0.148   0.206  0.000***   0.189   0.659  0.003**    0.139   0.128  0.018
CSR3       0.166   0.239  0.001***   0.235   0.769  0.002**    0.151   0.148  0.032
CSR4       0.172   0.255  0.001**    0.247   0.826  0.003**    0.156   0.158  0.052
CSR5       0.174   0.261  0.003**    0.258   0.883  0.003**    0.155   0.156  0.084
PC1        0.073   0.119  0.040     −0.013   0.420  0.044      0.087   0.067  0.246
PC2        0.063   0.106  0.119     −0.053   0.373  0.078      0.081   0.060  0.400
PC3        0.076  −0.011  0.924      0.073   0.401  0.108      0.077  −0.082  0.533
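The Diebold–Mariano comparison behind the p-values in Table 2.3 can be sketched as follows. This is an illustrative Python version on synthetic loss series, not the thesis's own code; the Newey–West lag choice and the simulated data are our assumptions:

```python
import numpy as np
from math import erf, sqrt

def diebold_mariano(loss_a, loss_b, lags=4):
    """Diebold-Mariano test of equal expected loss for two strategies.

    Returns the DM statistic and a two-sided asymptotic normal p-value.
    The long-run variance of the loss differential is estimated with a
    Newey-West (Bartlett kernel) estimator using `lags` autocovariances.
    """
    d = np.asarray(loss_a, dtype=float) - np.asarray(loss_b, dtype=float)
    T = d.size
    d_bar = d.mean()
    dc = d - d_bar
    lrv = np.sum(dc ** 2) / T                   # gamma_0
    for k in range(1, lags + 1):
        gamma_k = np.sum(dc[k:] * dc[:-k]) / T  # k-th autocovariance
        lrv += 2.0 * (1.0 - k / (lags + 1)) * gamma_k
    dm = d_bar / sqrt(lrv / T)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(dm) / sqrt(2.0))))
    return dm, p

# Synthetic example: strategy A has systematically higher loss than B.
rng = np.random.default_rng(0)
common = rng.normal(size=444)                   # 444 periods, as in Table 2.3
dm, p = diebold_mariano(common + 0.2, common + rng.normal(scale=0.1, size=444))
print(p < 0.05)   # True: the loss difference is detected
```

In Table 2.3, each such p-value is additionally compared against α/m under the Bonferroni correction, where m is the number of tests conducted in the panel for the same subsample.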
2.5 Conclusion
We have used the model confidence set (MCS) approach of Hansen et al. (2011) to
measure and describe the model uncertainty in return predictability over the sample
period 1966:01–2002:12. For the universe of 23 models considered in this chapter,
the model uncertainty is substantial both in statistical and economic terms. Model
confidence sets change substantially over time and contain fewer models in the second half of our sample. Investors are exposed to large model uncertainty in the sense that the conditional expected utility from investment differs greatly across return models, none of which can be rejected by the data.
We have proposed three investment strategies based on model confidence sets
that account for, and reduce, the model uncertainty. All three investment strategies
lead to economic gains from using the predictor variables in the data set of Welch and
Goyal (2008). In particular, we show that a robust investment rule, which is designed to perform well under all models in the MCS, produces economic gains from return predictability. Reducing the model uncertainty with this robust investment strategy requires lower investments in stocks compared to investments based on expected return forecasts from historical averages. In the first half of the sample, the stock investment for the robust strategy is very low, but it increases as model uncertainty declines in the second half of the sample. For the robust investment, it is crucial to narrow down the set of candidate models using the MCS. When this is not possible, as is the case for investors with low risk aversion, for whom the investment constraints are binding under many models, the robust strategies perform poorly.
2.6 References
Aiolfi, M., Favero, C., 2005. Model uncertainty, thick modelling and the predictability
of stock returns. Journal of Forecasting 24 (4), 233–254.
Ang, A., Bekaert, G., 2007. Stock return predictability: Is it there? Review of Financial
Studies 20 (3), 651–707.
Avramov, D., 2002. Stock return predictability and model uncertainty. Journal of
Financial Economics 64 (3), 423–458.
Barberis, N., 2000. Investing for the long run when returns are predictable. The Journal
of Finance 55 (1), 225–264.
Campbell, J., Thompson, S., 2008. Predicting excess stock returns out of sample: Can
anything beat the historical average? Review of Financial Studies 21 (4), 1509–1531.
Cenesizoglu, T., Timmermann, A., 2012. Do return prediction models add economic
value? Journal of Banking & Finance.
Cremers, K., 2002. Stock return predictability: A Bayesian model selection perspective. Review of Financial Studies 15 (4), 1223–1249.
Dangl, T., Halling, M., 2012. Predictive regressions with time-varying coefficients.
Journal of Financial Economics 106 (1), 157–181.
Diebold, F. X., Mariano, R. S., 1995. Comparing predictive accuracy. Journal of Business & Economic Statistics 13 (3), 253–263.
Elliott, G., Gargano, A., Timmermann, A., 2013. Complete subset regressions. Journal
of Econometrics.
Epstein, L. G., Wang, T., 1994. Intertemporal asset pricing under Knightian uncertainty. Econometrica 62 (2), 283–322.
Fama, E., French, K., 1988. Dividend yields and expected stock returns. Journal of
Financial Economics 22 (1), 3–25.
Fama, E., French, K., 1989. Business conditions and expected returns on stocks and
bonds. Journal of Financial Economics 25 (1), 23–49.
Garlappi, L., Uppal, R., Wang, T., 2007. Portfolio selection with parameter and model
uncertainty: A multi-prior approach. Review of Financial Studies 20 (1), 41–81.
Gilboa, I., Schmeidler, D., 1989. Maxmin expected utility with non-unique prior.
Journal of Mathematical Economics 18 (2), 141–153.
Gonzàlez-Rivera, G., Lee, T.-H., Mishra, S., 2004. Forecasting volatility: A reality check
based on option pricing, utility function, value-at-risk, and predictive likelihood.
International Journal of Forecasting 20 (4), 629–645.
Hansen, L. P., Sargent, T. J., 2001. Robust control and model uncertainty. The American
Economic Review 91 (2), 60–66.
Hansen, P. R., Lunde, A., Nason, J. M., 2011. The model confidence set. Econometrica
79 (2), 453–497.
Hayfield, T., Racine, J. S., 2008. Nonparametric econometrics: The np package. Journal
of Statistical Software 27 (5).
URL http://www.jstatsoft.org/v27/i05/
Henkel, S., Martin, J., Nardari, F., 2011. Time-varying short-horizon predictability.
Journal of Financial Economics 99 (3), 560–580.
Kandel, S., Stambaugh, R., 1996. On the predictability of stock returns: An asset-allocation perspective. The Journal of Finance 51 (2), 385–424.
Lettau, M., Ludvigson, S., 2010. Measuring and modeling variation in the risk-return tradeoff. Handbook of Financial Econometrics, Vol. 1. Elsevier Science B.V., North Holland, Amsterdam, pp. 617–690.
Lewellen, J., 2004. Predicting returns with financial ratios. Journal of Financial Economics 74 (2), 209–235.
Maenhout, P. J., 2004. Robust portfolio rules and asset pricing. The Review of Financial
Studies 17 (4), 951–983.
Pástor, L., Stambaugh, R. F., 2012. Are stocks really less volatile in the long run? The
Journal of Finance 67 (2), 431–478.
Patton, A., Politis, D. N., White, H., 2009. Correction to "Automatic block-length selection for the dependent bootstrap" by D. Politis and H. White. Econometric Reviews 28 (4), 372–375.
Pesaran, M. H., Timmermann, A., 1995. Predictability of stock returns: Robustness and economic significance. The Journal of Finance 50 (4), 1201–1228.
Politis, D. N., White, H., 2004. Automatic block-length selection for the dependent
bootstrap. Econometric Reviews 23 (1), 53–70.
Polk, C., Thompson, S., Vuolteenaho, T., 2006. Cross-sectional forecasts of the equity
premium. Journal of Financial Economics 81 (1), 101–141.
R Core Team, 2013. R: A Language and Environment for Statistical Computing. R
Foundation for Statistical Computing, Vienna, Austria.
URL http://www.R-project.org/
Rapach, D., Strauss, J., Zhou, G., 2010. Out-of-sample equity premium prediction:
Combination forecasts and links to the real economy. Review of Financial Studies
23 (2), 821–862.
Skouras, S., 2007. Decisionmetrics: A decision-based approach to econometric modelling. Journal of Econometrics 137 (2), 414–440.
Timmermann, A., 2008. Elusive return predictability. International Journal of Forecasting 24 (1), 1–18.
Wachter, J., Warusawitharana, M., 2009. Predictable returns and asset allocation:
Should a skeptical investor time the market? Journal of Econometrics 148 (2),
162–178.
Welch, I., Goyal, A., 2008. A comprehensive look at the empirical performance of
equity premium prediction. Review of Financial Studies 21 (4), 1455–1508.
West, K. D., Edison, H. J., Cho, D., 1993. A utility-based comparison of some models
of exchange rate volatility. Journal of International Economics 35 (1-2), 23–45.
CHAPTER 3

FREQUENCY DEPENDENCE IN THE
RISK-RETURN RELATION
Bent Jesper Christensen and Manuel Lukas
Aarhus University and CREATES
Abstract
The risk-return trade-off is typically specified as a linear relation between the conditional mean and conditional variance of returns on financial assets. In this chapter
we analyze frequency dependence in the risk-return relation using a band spectral
regression approach that is robust to contemporaneous leverage and feedback effects.
For daily returns and realized variances from high-frequency data on the S&P 500
from 1995 to 2012 we strongly reject the null of no frequency dependence. Although
the risk-return relation is positive on average over all frequencies, we find a large and
statistically significant negative coefficient for periods of around one week. Subsample analysis reveals that the negative effect at the higher frequency is not statistically
significant before the financial crisis, but very strong after July 2007. Accounting for
the frequency dependence in the risk-return relation improves the forecasting of
stock returns after 2007.
Keywords: Risk-Return Relation, Band Spectral Regression, Realized Variance, Leverage
Effect.
3.1 Introduction
Financial theory predicts that investors need to be compensated for taking on greater
risks through higher expected returns, such that the conditional mean and conditional variance of stock market returns are positively related. Empirical estimates
of the risk-return relation are abundant in the literature (see Lettau and Ludvigson,
2010, for an extensive survey). The risk-return relation is typically specified as a linear relation between stock returns and some measure of the conditional variance,
motivated by the Merton (1973) intertemporal capital asset pricing model. Although
the linear specification is predominant in the literature, Rossi and Timmermann
(2011) find that the relation between conditional mean and conditional variance is
distinctly non-linear and even non-monotonic. The risk-return relation is also found
to be non-linear by Christensen, Dahl, and Iglesias (2012) who use a semi-parametric
estimation approach. The non-linearity also shows up in a frequency domain analysis.
Bollerslev, Osterrieder, Sizova, and Tauchen (2013) obtain different slope coefficients in the risk-return relation when re-estimating it at different frequencies, which would not occur under a linear relation between conditional mean and conditional variance.
In this chapter, we estimate the risk-return relation by band spectral regression in
order to explicitly allow for a frequency-dependent relation between the conditional
mean and conditional variance of stock market returns. If regression coefficients
depend on the frequencies used, the band spectral regression model is a natural
means of allowing the simultaneous presence of different frequency-dependent
slopes within the same risk-return model. Contrary to linear models of the riskreturn relation, the frequency-dependent model allows the impact on the conditional
mean return from movements in conditional variance to depend on whether these
movements are short-lived or persistent
An empirical measure of conditional variance is needed for estimation of the risk-return relation. Merton (1980) already regressed returns on sample variances of intra-period returns covering the same interval. This early work used daily returns for the
intra-period calculation and monthly variances and returns for the regression. With
the advent of high-frequency data, it has become possible to calculate daily variance
measures from intra-daily returns, and consider daily level regressions. Although
much recent research (e.g., Bollerslev and Zhou, 2006; Bollerslev et al., 2013) follows
Merton (1980) in regressing returns on variances covering the same period, it is well
motivated based on asset pricing theory to consider conditional variance measures
that are in the investor's information set at the start of the period (see, e.g., Ghysels,
Santa-Clara, and Valkanov, 2005). Following this idea, we base our work on realized
variances calculated from high-frequency returns, and use these to construct proxies
for conditional variance using different versions of the heterogeneous auto-regressive
(HAR) model of Corsi (2009). In particular, there is a need to accommodate the
leverage effect, i.e., a possible negative contemporaneous relation between variance
and return. Following Black (1976) and Christie (1982), a drop in the stock price increases the debt-equity ratio and thereby the expected risk. Empirical research on realized variances
documents strong leverage effects, see, e.g., Christensen and Nielsen (2007) and Yu
(2005). Thus, following Corsi and Renò (2012), we use a version of the HAR model
extended with lagged leverage effects, the LHAR. With this, we study the risk-return
relation at a daily horizon using band spectral regression, thus allowing the regression
coefficients to differ across the chosen frequency bands. The presence of the leverage
effect in the risk-return relation would render the classical band spectral regression
(Engle, 1974; Harvey, 1978) inconsistent. The problem is that the error term (the expectation error in returns) enters the information set underlying subsequent conditional variances. To achieve consistency, we use the one-sided filtering approach to band
spectral regression suggested by Ashley and Verbrugge (2008).
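As a loose illustration of why one-sided filtering preserves real-time usability, the sketch below splits a regressor into slow and fast components using only current and past observations, then recovers frequency-dependent slopes by OLS. This is a deliberately crude moving-average stand-in for the Ashley–Verbrugge spectral filter; the window length, function names, and simulated data are our own assumptions:

```python
import numpy as np

def one_sided_split(x, span=22):
    """Split x into a slow component (one-sided moving average over the
    last `span` observations) and the fast remainder. Only data up to
    time t enters the components at t, so there is no look-ahead."""
    x = np.asarray(x, dtype=float)
    slow = np.full(x.size, np.nan)
    for t in range(span - 1, x.size):
        slow[t] = x[t - span + 1 : t + 1].mean()
    return slow, x - slow

rng = np.random.default_rng(1)
T = 2000
x = np.abs(np.cumsum(rng.normal(size=T))) / 10.0 + 1.0   # persistent regressor
slow, fast = one_sided_split(x)

# Simulated next-period returns that load positively on slow movements
# and negatively on fast movements of the regressor.
y = 0.5 * slow[:-1] - 0.5 * fast[:-1] + rng.normal(scale=0.5, size=T - 1)

X = np.column_stack([np.ones(T - 1), slow[:-1], fast[:-1]])
keep = ~np.isnan(X).any(axis=1)
beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
print(beta[1:].round(2))   # close to [0.5, -0.5]
```

In the actual chapter, the decomposition instead targets specific period lengths (e.g., around one week and one month) with band-pass filters, and equality of the band coefficients is tested formally.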
Our empirical study uses realized variances from high-frequency data for the S&P
500 from 1995 to 2012. We implement band spectral regression using the one-sided filter, and use the estimated regression to assess the presence of frequency dependence
in the risk-return relation by testing for equal parameters across frequency bands.
Indeed, we find that the linear relation (common parameters across frequency bands)
is rejected consistently across different model specifications. These results strongly
suggest that the relation between risk and return depends on the frequency (period
length) considered. We specifically find that conditional variance fluctuations with
periods of around one month and one week have significantly positive, respectively
negative, effects on expected returns, indicating that the risk compensation effect is
at work at the lower frequency. When the sample is split at the start of the financial
crisis in 2007, we find that the negative relation at the weekly frequency becomes
much stronger following the onset of the crisis and is not statistically significant
before the crisis.
To further assess the importance of the frequency dependence in the risk-return
relation, we compare forecasting performance with and without this extension. Because we use a one-sided filter, it is possible to obtain real-time forecasts of stock
returns from the band spectral regression on conditional variances. However, due
to estimation uncertainty, a good in-sample fit of multivariate regressions for stock
returns often fails to translate into accurate real-time forecasts. To mitigate the effect
of estimation uncertainty in the construction of our return forecasts, we combine the
Ashley and Verbrugge (2008) one-sided filter with the complete subset regression approach of Elliott et al. (2013). We show that allowing for frequency dependence in the
regression relation using this combined approach helps improve return forecasting
performance after July 2007.
While our results therefore strongly support frequency dependence in the risk-return relation, and the significantly positive relation around the one-month period
is consistent with the need for risk compensation from asset pricing theory, the
significantly negative relation between return and conditional variance around the
one-week period calls for some attention. In the GARCH literature, the risk-return
relation is studied using GARCH-in-mean (or GARCH-M) models, following Engle,
Lilien, and Robins (1987). Studies in this literature frequently find a negative relation,
e.g., Nelson (1991) and Glosten, Jagannathan, and Runkle (1993). There are economic
forces that move risk and return in opposite directions over the same time period.
In case of the leverage effect, a negative realized return increases risk, thus leading
to a negative contemporaneous relation. Volatility feedback has a similar effect.
Following French, Schwert, and Stambaugh (1987) and Campbell and Hentschel
(1992), if the conditional variance increases and the risk-return relation is positive,
then discount rates increase, and therefore the stock price drops. If this effect plays
out instantaneously, then this again leads to a negative contemporaneous relation
of realized returns and risk. If this effect from conditional variance to price does not
transmit instantaneously, then it could lead to a negative relation at, for example, the
weekly period. Recent asset pricing research further examines the possibility that
stocks that move with variance pay off in bad states and therefore should command
lower risk premia. Ang, Hodrick, Xing, and Zhang (2006) document negative cross-sectional premia based on this idea, i.e., a negative price of volatility risk.
Thus, leverage, feedback, or a negative volatility risk price may confound evidence on the risk compensation effect. There are only subtle measurable differences
between the confounding effects. Causality runs from return to risk in case of the
leverage effect, and from risk to return for the feedback and negative risk price effects.
Furthermore, both the leverage effect and the cross-sectional risk price effect are
likely to be strongest at the firm level, whereas the volatility feedback effect is operational at the market level. Since a negative volatility feedback effect, first, is consistent with a positive risk compensation effect, second, should be at work at the market level, and, third, should lead to an effect of changes in conditional variance on subsequent returns, we may cautiously interpret our findings of a negative coefficient
at one week and a simultaneous positive coefficient at one month as evidence of
volatility feedback and risk compensation, respectively. However, as a caveat, the effect
through discount rates should also have some long-lived features, in addition to the
immediate price drop following an increase in risk, so the interpretation remains
delicate. At any rate, whatever the reason for the negative risk-return relation around
the weekly frequency, our work establishes its empirical importance. The negative
relation at the weekly frequency co-exists with the positive risk-return tradeoff or risk
compensation effect at the monthly frequency, and accommodating both improves
the forecasting of future stock returns.
Our findings shed some light on the empirical risk-return relation, which has
been of interest in the finance literature for a long time (see, e.g., Merton, 1973,
1980). Advances in variance estimation have provided new and more precise ways of
estimating the risk-return relation. Using high-frequency volatility measurements to
study the risk-return relation at a daily horizon, Bali and Peng (2006) find a positive
and significant relation that is robust to different model specifications and methods
of volatility measurement. Harrison and Zhang (1999) find that a positive risk-return
relation is only present for longer holding periods but not, for example, at the monthly
horizon. Ghysels et al. (2005) use a mixed-frequency approach to construct monthly
conditional variance forecasts and find a positive and significant risk-return relation.
Our findings confirm that the risk-return relation is positive on average, i.e., when we
do not allow for frequency dependence. When we allow for frequency dependence,
the relation is strongly negative at certain frequencies during the financial crisis. In
line with Rossi and Timmermann (2011) and Christensen et al. (2012), this suggests
that the risk-return relation is non-linear and that the estimated coefficient in a linear
regression model thus depends on sample period and sampling frequency.
The remainder of the chapter is organized as follows. In Section 3.2 we discuss the
specification of the risk-return relation, conditional variance modeling, and the data.
Section 3.3 presents the band spectral regression results for the risk-return relation.
Section 3.4 uses band spectral regression for real-time forecasting of stock market
returns. Concluding remarks are given in Section 3.5.
3.2 The Empirical Risk-Return Relation
Theory suggests that investors are compensated for taking on greater risks by higher
expected returns, such that the conditional mean is positively related to the conditional variance of returns. Let $r_{t+1}$ denote the return on day $t+1$. The time-$t$ conditional expectation of the variance of $r_{t+1}$ is denoted $E_t[\sigma^2_{t+1}]$. The risk-return relation is empirically often specified as the linear relation

$$r_{t+1} = \mu + \gamma E_t[\sigma^2_{t+1}] + u_{t+1}, \qquad (3.1)$$

where $u_{t+1}$ are zero-mean innovations. Equation (3.1) is in the tradition of the intertemporal capital asset pricing model (ICAPM, see Merton, 1973). Ghysels et al. (2005)
estimate a positive γ in model (3.1) with a mixed-frequency approach using monthly
returns while calculating conditional variances from daily returns. Bali and Peng
(2006) consider model (3.1) with daily returns and conditional variances measured
from intra-day data, and consistently find a positive risk-return relation across different volatility measurements and model specifications. Bollerslev and Zhou (2006)
also use high-frequency data, but specify a contemporaneous regression that is influenced by leverage and feedback effects. In a closely related paper, Corsi and Renò
(2012) use conditional variance constructed from heterogeneous autoregressive (HAR) models in the linear risk-return model (3.1) and find a positive and significant γ for
daily returns.
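To fix ideas, estimating a relation of the form (3.1) amounts to an OLS regression of next-period returns on a lagged conditional-variance proxy. The following minimal sketch uses simulated data; the parameter values and the variance process are hypothetical, and the noise is scaled down so the slope is recovered clearly:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 4426                                   # number of trading days in the sample
# Simulated conditional-variance proxy E_t[sigma^2_{t+1}] (log-normal levels)
sigma2 = np.exp(rng.normal(-9.0, 0.8, size=T))
mu, gamma = 0.0002, 2.0                    # hypothetical intercept and risk price
# r_{t+1} = mu + gamma * E_t[sigma^2_{t+1}] + u_{t+1}, as in (3.1)
r = mu + gamma * sigma2 + rng.normal(scale=0.001, size=T)

# OLS of next-period returns on the lagged variance proxy
X = np.column_stack([np.ones(T), sigma2])
mu_hat, gamma_hat = np.linalg.lstsq(X, r, rcond=None)[0]
print(round(gamma_hat, 1))
```

In the chapter itself, the right-hand-side proxy is built from HAR-type models of realized variance rather than simulated, and γ is the object of interest.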
A major challenge in estimating the risk compensation is to separate it from
leverage and volatility-feedback effects that can produce negative contemporaneous
correlation between risk and return (see, e.g., Black, 1976; Christie, 1982; Campbell
and Hentschel, 1992; Wu, 2001). Contemporaneous risk-return regressions will give a
biased estimate of the risk compensation coefficient because of this negative contemporaneous correlation from leverage and volatility-feedback. Using conditional
expectations of the variance that are only based on lagged information, as in model
(3.1), has the advantage that the estimate is not influenced by contemporaneous
leverage and volatility-feedback effects, see French et al. (1987). Lagged leverage
effects are, however, allowed in the conditional variance as we discuss in Section
3.2.2.
3.2.1 Data and Volatility Measurement
Previous literature, for example Bollerslev, Litvinova, and Tauchen (2006), has shown
that high-frequency data is crucial for precision in the estimation of the risk-return relation. We use intraday high-frequency returns on the SPDR S&P 500 exchange-traded
fund (ticker SPY) that tracks the S&P 500 index.¹ Our sample runs from 1995/01/03
to 2012/07/31, with a total of 4426 trading days.
Daily realized variances are computed with the standard approach from intra-day
returns. The intraday realized variance on day $t$, $RV_t$, is calculated as

$$RV_t = \sum_{i=1}^{I} y_i^2, \qquad (3.2)$$

where $y_i$ are the intra-day returns over some short time intervals. We choose 5-minute intra-day returns, of which there are 79 on most trading days ($I = 79$). The 5-minute
sampled realized variances avoid market microstructure noise by sparse sampling,
but this discarding of data leads to an inefficient estimator if returns are observed
at a higher frequency than 5-minute intervals. We also consider realized kernels
(RK) as an alternative, more efficient, estimator of the variance that remains robust
to market microstructure effects (see Hansen and Lunde, 2006; Barndorff-Nielsen,
Hansen, Lunde, and Shephard, 2008). Instead of sampling a fixed time interval, we
use all available returns, after appropriate data cleaning (see Barndorff-Nielsen,
Hansen, Lunde, and Shephard, 2009). Let $z_i$ be the $i$th intra-day return, of which we assume there are $J$. The time interval for these intra-day returns is not fixed, as price observations are not equally spaced. The realized kernels are computed, as described in Barndorff-Nielsen et al. (2009), by

$$RK_t = \sum_{h=1}^{H} k\!\left(\frac{h-1}{H}\right)\left(\rho_h + \rho_{-h}\right), \qquad (3.3)$$

$$\rho_h = \sum_{j=|h|+1}^{J} z_j z_{j-|h|}, \qquad (3.4)$$

¹We are grateful to Asger Lunde for providing us with the data used in this chapter.
Figure 3.1. Time series plots of daily returns on S&P 500, intraday realized variance (RV), and
log realized variance (log RV). Sample period is 1995/01/03 to 2012/07/31.
where $k(\cdot)$ is a kernel function and $H$ the bandwidth parameter. The Parzen kernel, given by

$$k(x) = \begin{cases} 1 - 6x^2 + 6x^3 & \text{for } 0 \le x < 1/2, \\ 2(1-x)^3 & \text{for } 1/2 \le x \le 1, \\ 0 & \text{for } x > 1, \end{cases} \qquad (3.5)$$

is used as kernel function for the construction of $RK_t$.
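The estimators in (3.2)–(3.5) can be transcribed almost directly into code. The sketch below is an illustrative Python transcription of the formulas as printed; a production implementation would follow Barndorff-Nielsen et al. (2009) in full, including their data cleaning and bandwidth selection for H:

```python
import numpy as np

def realized_variance(y):
    """Eq. (3.2): sum of squared intra-day returns y_i."""
    return float(np.sum(np.asarray(y, dtype=float) ** 2))

def parzen(x):
    """Parzen kernel, eq. (3.5)."""
    x = abs(x)
    if x < 0.5:
        return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realized_kernel(z, H):
    """Eqs. (3.3)-(3.4): kernel-weighted sums of return autocovariances."""
    z = np.asarray(z, dtype=float)
    J = z.size

    def rho(h):
        h = abs(h)                  # rho_{-h} = rho_h for this definition
        return float(np.sum(z[h:] * z[: J - h]))

    return sum(parzen((h - 1) / H) * (rho(h) + rho(-h))
               for h in range(1, H + 1))

# Five-minute returns for one (very short) artificial trading day
y = [0.001, -0.002, 0.0015]
print(realized_variance(y))         # approximately 7.25e-06
```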
Figure 3.1 shows the time series of daily returns, realized variances, and natural
logarithm of realized variances over the full sample period. The realized variance
series shows very pronounced spikes. After the log transformation, the series is much
more stable and no outliers are apparent. Modeling realized variances after the log
transformation has been advocated by Andersen, Bollerslev, Diebold, and Labys
(2003) and has since become the standard approach.
3.2.2 Modeling the Conditional Variance
Before we can estimate the risk-return regression (3.1), we have to construct a proxy
for conditional variance Et [σ2t +1 ]. We consider different model specifications for the
conditional variance. As a starting point we use the contemporaneous realized variance spanning the same period as the left-hand side returns in Equation (3.1). The
problem with using contemporaneous realized variance is that γ cannot be interpreted as the risk-return trade-off parameter, because contemporaneous leverage
and volatility-feedback effects will be captured by the estimate. Both effects are likely
to induce a negative relation between contemporaneous returns and changes in
variance. This problem can be mitigated by using lagged realized variance as a proxy.
Using lagged realized variance amounts to a random walk model for realized
variance. While not as persistent as a random walk, realized variance series are still
highly persistent. The heterogeneous autoregressive (HAR) model of Corsi (2009)
provides a parsimonious approximation of the variance dynamics that works well in
forecasting, and thus provides good conditional variance proxies. This conditional
variance proxy has been used by Corsi and Renò (2012) to estimate the risk-return
relation for daily returns and intra-day realized variances. They use the standard
HAR model, a HAR model with jumps, and a HAR model with jumps and lagged
leverage effects to obtain the conditional variance forecasts. The risk-return relation
is found to be positive and statistically significant for all three conditional variance
proxies. While Corsi and Renò (2012) find that incorporating jumps has almost no
effect on the estimate, they find that allowing for lagged leverage effects matters for
the risk-return relation and makes it more significant.
Let rv_t = ln(RV_t) denote the natural logarithm of the realized variance. The HAR model is given by

$$rv_t = c + \phi_1 rv_{t-1} + \phi_2 rv_{5,t-1} + \phi_3 rv_{22,t-1} + \epsilon_t, \quad (3.6)$$

where $rv_{l,t-1} = \frac{1}{l}\sum_{j=1}^{l} rv_{t-j}$ is the average of rv_t over the past l days up to day t − 1. Thus, realized variance is predicted by simple moving averages of past realized
variances. The HAR model is remarkably successful in capturing the time series
properties of realized variance in a parsimonious fashion. The inclusion of lags
5 and 22 corresponds to averages over one trading week and one trading month,
respectively. With estimated coefficients (ĉ, φ̂_1, φ̂_2, φ̂_3) the variance forecast is

$$\widehat{rv}_{t+1} = \hat{c} + \hat{\phi}_1 rv_t + \hat{\phi}_2 rv_{5,t} + \hat{\phi}_3 rv_{22,t}, \quad (3.7)$$

which is known at time t.
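The estimation step above is plain OLS on moving averages of the log-variance series. The following sketch fits the HAR regression (3.6) and forms the forecast (3.7); the helper names are ours and the data handling is deliberately simplified.

```python
import numpy as np

def har_design(rv, lags=(1, 5, 22)):
    """Regressors rv_{t-1}, rv_{5,t-1}, rv_{22,t-1} of Equation (3.6)."""
    p = max(lags)
    X = np.column_stack(
        [np.ones(len(rv) - p)]
        + [np.array([rv[t - l:t].mean() for t in range(p, len(rv))]) for l in lags])
    return X, rv[p:]

def har_fit_forecast(rv):
    """OLS estimates of (3.6) and the one-step-ahead forecast of Equation (3.7)."""
    X, y = har_design(rv)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    # forecast uses the daily, weekly, and monthly averages known at time t
    x_next = np.r_[1.0, [rv[-l:].mean() for l in (1, 5, 22)]]
    return coef, float(x_next @ coef)
```

The lag averages over 1, 5, and 22 days mirror the daily, weekly, and monthly components of Corsi (2009).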
The simple structure of the HAR model allows for additional regressors. We add
lagged returns, and absolute values of lagged returns, to account for lagged leverage
effects in the manner of Corsi and Renò (2012). The HAR model with leverage (LHAR)
is given by
$$rv_t = c + \phi_1 rv_{t-1} + \phi_2 rv_{5,t-1} + \phi_3 rv_{22,t-1} + \lambda_1 r_{t-1} + \lambda_2 |r_{t-1}| + \epsilon_t, \quad (3.8)$$
from which we construct the variance forecasts as

$$\widehat{rv}_{t+1} = \hat{c} + \hat{\phi}_1 rv_t + \hat{\phi}_2 rv_{5,t} + \hat{\phi}_3 rv_{22,t} + \hat{\lambda}_1 r_t + \hat{\lambda}_2 |r_t|. \quad (3.9)$$
Table 3.1 shows the in-sample estimation results of the HAR and LHAR models. Additionally, we include a model with only one lagged variance. Allowing for leverage does not substantially increase the adjusted R², but both lagged returns and lagged absolute returns are significant, showing that leverage effects are important. Consistent with the leverage argument, we find that the coefficient on lagged absolute returns has the opposite sign of the coefficient on lagged returns. This implies that negative returns have a positive effect on variance, while positive returns have a much smaller effect. Indeed, the hypothesis H0: λ1 = −λ2 is not rejected, leading us to the conclusion that only negative returns matter for the conditional variance. The results using the realized kernel are very similar to the results for 5-minute realized variance.
To construct the conditional variances, we do not use the full-sample parameter estimates for the HAR and LHAR models, but recursively estimate the coefficients using only information prior to the period for which the conditional variance is constructed. The parameters are estimated using a rolling estimation window of length 440, roughly one tenth of the sample. We have specified the variance models in terms of the logarithm of realized variance. The risk-return trade-off is, however, specified for the untransformed conditional variance. Thus, we transform the forecast back by $\hat{\sigma}^2_{t+1} = \exp(\widehat{rv}_{t+1} + 0.5\hat{\sigma}^2_\epsilon)$ for the analysis of the risk-return relation, where $\hat{\sigma}^2_\epsilon$ is an estimate of the error variance in the model of rv_{t+1}. For realized kernels we obtain forecasts from the HAR and LHAR models in the exact same way as we have just described for the 5-minute realized variances.
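The 0.5σ̂²_ε term is the standard log-normal mean correction: if the log-variance forecast error is Gaussian, E[exp(x)] = exp(μ + σ²/2), so exponentiating the log forecast without the correction is biased downward. A small simulation (illustrative numbers only, not estimates from the data) makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

rv_hat, s2 = -9.5, 0.25      # hypothetical log-variance forecast and error variance
draws = np.exp(rv_hat + np.sqrt(s2) * rng.standard_normal(1_000_000))

naive = np.exp(rv_hat)                 # ignores the forecast-error variance
corrected = np.exp(rv_hat + 0.5 * s2)  # the back-transformation used in the text
# The simulated mean of exp(rv) matches the corrected value, not the naive one.
```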
3.2.3 Linear Risk-Return Regression

In the linear model returns are regressed on conditional variances, i.e.,

$$r_{t+1} = \mu + \gamma \hat{\sigma}^2_{t+1} + u_{t+1}. \quad (3.10)$$

Let T be the sample size. It is convenient for the further presentation to write the regression relation in vector notation as

$$R = Z\mu + \hat{V}\gamma + U, \quad (3.11)$$

where $R = (r_1, \dots, r_T)'$, $\hat{V} = (\hat{\sigma}^2_1, \dots, \hat{\sigma}^2_T)'$, and $U = (u_1, \dots, u_T)'$. Here Z is a T × 1 vector of ones. In general it is also possible to include regressors that are not frequency-dependent.
Table 3.2 shows the estimation results for regression model (3.11) using different conditional variance models. The coefficients are estimated by least-squares
with asymptotic standard errors based on the Newey and West (1987) covariance
Table 3.1. Estimated models for log realized variance and log realized kernel for full sample.

                       Realized Variance                   Realized Kernel
              Lag RV       HAR         LHAR         Lag RK       HAR         LHAR
c             −1.533***    −0.417***   −0.875***    −1.671***    −0.441***   −0.942***
              (0.098)      (0.088)     (0.096)      (0.094)      (0.091)     (0.101)
φ1            0.839***     0.407***    0.323***     0.827***     0.401***    0.317***
              (0.010)      (0.021)     (0.020)      (0.010)      (0.024)     (0.024)
φ2                         0.364***    0.394***                  0.367***    0.396***
                           (0.033)     (0.030)                   (0.034)     (0.033)
φ3                         0.185***    0.198***                  0.186***    0.197***
                           (0.026)     (0.025)                   (0.026)     (0.025)
λ1                                     −9.931***                             −9.675***
                                       (0.734)                               (0.725)
λ2                                     9.498***                              10.147***
                                       (1.109)                               (1.094)
Adj. R²       0.704        0.753       0.754        0.684        0.743       0.744
Num. obs.     4425         4404        4404         4425         4404        4404
H0: λ1 = −λ2                           0.734                                 0.6975

Note: Significance levels with Newey-West standard errors: ***: 1%, **: 5%, and *: 10%. Last row shows p-values for testing the hypothesis H0: λ1 = −λ2.
Table 3.2. Estimation results for linear risk-return regression. Lag RV and lag RK use the first lag of the realized variance and realized kernel, respectively. For HAR and LHAR the conditional variances are obtained from recursive estimation with rolling window of length M = 440.

                       Realized Variance                     Realized Kernel
            RV        Lag RV     HAR       LHAR        RK        Lag RK     HAR        LHAR
μ           0.000     −0.001*    −0.001*   −0.001***   0.000     −0.001**   −0.001***  −0.001***
            (0.000)   (0.000)    (0.000)   (0.000)     (0.000)   (0.000)    (0.000)    (0.000)
γ           −0.182    2.708***   2.893**   4.024***    −0.727    3.014***   3.318***   4.999***
            (0.763)   (0.762)    (0.991)   (0.784)     (1.427)   (0.591)    (0.862)    (1.919)
R²          0.001%    0.316%     0.214%    0.656%      0.020%    0.349%     0.269%     0.854%
Num. obs.   3985      3985       3985      3985        3985      3985       3985       3985

Note: Significance levels with Newey-West standard errors: ***: 1%, **: 5%, and *: 10%.
estimator with lag length 44, corresponding to roughly two months of trading days.
For contemporaneous variance (RVt and RK t ) we get a negative but insignificant
coefficient γ. When we use lagged realized variance, the coefficient becomes positive
and statistically significant. For conditional variances from both HAR and LHAR the
coefficient γ increases further and remains statistically significant. The explanatory
power, as measured by R 2 , is highest for the LHAR model. The estimates are close to
the results in Corsi and Renò (2012), who analyze a different, yet overlapping, sample
period from 1982 to 2009 for the S&P 500.
The standard errors in Table 3.2 do not account for the additional uncertainty
from the use of generated regressors, i.e., the fact that the conditional variances are
obtained from estimated HAR models (see, e.g., Pagan (1984) and Pagan and Ullah
(1988)). Murphy and Topel (2002) suggest a covariance estimator that corrects for
generated regressors. When the regressors are generated by a linear model and errors
are homoskedastic, the standard errors are inflated by a factor $1 + \gamma^2 \operatorname{var}(\epsilon_t)/\operatorname{var}(u_t)$. Using an AR model selected by AIC on the original (without log transformation) realized variance series, we get an estimate $\operatorname{var}(\epsilon_t)/\operatorname{var}(u_t) = 1.58 \times 10^{-4}$, such that the correction factor is negligible for all reasonable values of γ. For example, for γ = 5
the correction factor is 1.0048. Even though our data is clearly heteroskedastic, the
minuscule correction factors indicate that correcting for generated regressors is not
important for our data. The error variance in the conditional variance models is much
smaller than in the second step, i.e., the risk-return regression. This is in line with
French et al. (1987), who also find that adjusting the standard errors in model (3.10)
for generated regressors leads to negligible adjustments.
3.3 Frequency Dependence in the Risk-Return Relation
To allow for frequency dependence in the risk-return relation we apply the band
spectral regression (see Engle (1974) and Harvey (1978)). Band spectral regression
has been applied to detect frequency dependence in macroeconomic models by Tan
and Ashley (1999) and Ashley and Verbrugge (2008).
For the real-valued band spectral regression of Harvey (1978) we define the T × T discrete Fourier transform matrix A_T with elements

$$a_{ij} = \begin{cases} T^{-1/2} & \text{for } i = 1; \\ \left(\tfrac{T}{2}\right)^{-1/2} \cos\!\left(\tfrac{\pi i (j-1)}{T}\right) & \text{for } i = 2, 4, 6, \dots, (T-2) \text{ or } (T-1); \\ \left(\tfrac{T}{2}\right)^{-1/2} \sin\!\left(\tfrac{\pi (i-1)(j-1)}{T}\right) & \text{for } i = 3, 5, 7, \dots, (T-1) \text{ or } T; \\ T^{-1/2}(-1)^{j+1} & \text{for } i = T \text{ if } T \text{ is even.} \end{cases} \quad (3.12)$$
Pre-multiplying the regression model (3.11) by A_T we get

$$R^* = Z^*\mu + V^*\gamma + U^*, \quad (3.13)$$

where $R^* = A_T R$, $Z^* = A_T Z$, $V^* = A_T \hat{V}$, and $U^* = A_T U$. The elements of R* and V* correspond to different frequency components of returns and variances, respectively. The first element corresponds to frequency 0 and the last element, element T, corresponds to frequency π. In general, element i corresponds to frequency $2\pi \lfloor i/2 \rfloor / T$, where ⌊·⌋ rounds down to the nearest integer. The constant μ and the regression coefficient γ in model (3.13) remain unaffected by this transformation.
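A direct way to check the construction in (3.12) is to build A_T and verify that it is orthonormal, which is what guarantees that pre-multiplying the regression by A_T leaves μ and γ unchanged. A minimal sketch (the function name is ours):

```python
import numpy as np

def fourier_matrix(T):
    """Real-valued discrete Fourier transform matrix A_T of Equation (3.12)."""
    A = np.zeros((T, T))
    j = np.arange(1, T + 1)
    A[0] = T ** -0.5                              # row i = 1: frequency zero
    for i in range(2, T + 1):
        if T % 2 == 0 and i == T:
            A[i - 1] = T ** -0.5 * (-1.0) ** (j + 1)   # Nyquist row for even T
        elif i % 2 == 0:
            A[i - 1] = (T / 2) ** -0.5 * np.cos(np.pi * i * (j - 1) / T)
        else:
            A[i - 1] = (T / 2) ** -0.5 * np.sin(np.pi * (i - 1) * (j - 1) / T)
    return A
```

Because A_T A_T' = I_T, the transformed regression (3.13) has the same least-squares coefficients as (3.11).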
If there is no frequency dependence in the risk-return relation, such that the
linear model is correctly specified, then the coefficient γ does not depend on which
frequencies we include in the regression. To detect frequency dependence for the
conditional variances V , we allow the coefficients to vary for different frequency
bands. Let B be the chosen number of frequency bands. Define the T × 1 vectors
D b∗ , b = 1, . . . ,B , as the dummy variables with the observations of V ∗ belonging to
frequency band b, and zeros for all other frequencies. The frequency-dependent
model in the frequency domain then becomes

$$R^* = Z^*\mu + \sum_{b=1}^{B} D_b^* \beta_b + U^*, \quad (3.14)$$

where β_b are the coefficients that can differ across frequency bands. By pre-multiplying by $A_T'$ we can transform the regression (3.14) back to the time domain,

$$R = Z\mu + \sum_{b=1}^{B} D_b \beta_b + U, \quad (3.15)$$
where $D_b = A_T' D_b^*$ is now a time series of the frequency component corresponding to frequency band b (see Ashley and Verbrugge, 2008, for details). The null hypothesis of no frequency dependence corresponds to testing

$$H_0 : \beta_1 = \beta_2 = \dots = \beta_B \quad (3.16)$$
in time domain regression (3.15). The important difference of the band spectral
approach to other frequency domain approaches is that we keep all frequencies and
do not focus exclusively on certain frequencies. This allows us to get a complete
picture of the frequency dependence in the risk-return relation.
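The band decomposition underlying (3.14)–(3.15) can be illustrated with a two-sided filter: mask the Fourier coefficients band by band and transform back, so that the band components sum exactly to the original regressor. The sketch below uses the complex FFT instead of the real-valued matrix A_T (the band split is the same), and splits [0, π] into equal-width bands for simplicity; the actual bands used in the chapter (Table 3.4) need not be equal-width.

```python
import numpy as np

def band_components(v, n_bands):
    """Split series v into n_bands frequency-band components that sum to v."""
    T = len(v)
    freqs = np.abs(np.fft.fftfreq(T))        # |frequency| in cycles per period
    edges = np.linspace(0.0, 0.5, n_bands + 1)
    edges[-1] += 1e-9                         # make the top band include Nyquist
    V = np.fft.fft(v)
    comps = [np.fft.ifft(np.where((freqs >= lo) & (freqs < hi), V, 0)).real
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.array(comps)
```

Since the bands partition all frequencies, regressing on all components keeps the full information set, in contrast to approaches that discard frequencies.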
The standard band spectral regression is based on a two-sided filter, i.e., the
time domain dummies D b in (3.15) are constructed from past and future observations. Contemporaneous correlation and feedback effects play an important role
in the risk-return regression, as we have seen in the regression results in Table 3.2.
Parameter estimates based on a two-sided filter will therefore be influenced by the
contemporaneous correlation and likely be downwards biased. We therefore adapt
74
C HAPTER 3. F REQUENCY D EPENDENCE IN THE R ISK -R ETURN R ELATION
the one-sided filtering approach to the band spectral regression proposed by Ashley
and Verbrugge (2008). In this approach we extract the frequency components by
recursively applying a one-sided filter with a moving window of size w, such that
for every time t we construct the frequency bands D b,t , for b = 1, . . . ,B using only
information up to, and including, time period t. Because we only apply the filter to w observations, the number of frequencies is reduced compared to the two-sided band
spectral regression. To avoid endpoint problems associated with one-sided filtering
we use the approach of Stock and Watson (1999), in which we use an auto-regressive
(AR) model to forward pad (in the terminology of Stock and Watson (1999)) in each
step of the recursive filtering. We choose a moving window of w = 600 observations
and pad 300 observations forward.
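The forward-padding step can be sketched as follows: fit an AR model on the current window and append its iterated forecasts before filtering. This is a simplified stand-in; the AR lag order p = 4 is an arbitrary illustrative choice, whereas Stock and Watson (1999) select the AR specification more carefully.

```python
import numpy as np

def ar_pad(window, n_pad, p=4):
    """Extend `window` with n_pad iterated AR(p) forecasts (forward padding)."""
    window = np.asarray(window, dtype=float)
    y = window[p:]
    X = np.column_stack([np.ones(len(window) - p)]
                        + [window[p - l:len(window) - l] for l in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    out = list(window)
    for _ in range(n_pad):
        lags = out[:-p - 1:-1]                 # last p values, most recent first
        out.append(coef[0] + float(np.dot(coef[1:], lags)))
    return np.asarray(out)
```

In each step of the recursive filtering, the padded series (here, a window of w observations extended by the forecasts) is what the band filter is applied to, mitigating the endpoint problem of one-sided filters.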
The number of bands B in the band spectral regression can be selected by information criteria. Table 3.3 shows AIC and BIC for 1 to 16 bands for the different
regressors in the band spectral regression. The number of bands selected by BIC is
very unstable across regressors, ranging from 1 band for lagged RV to 12 bands for
LHAR. Using AIC we select 14 bands for RV and 12 bands for the three other regressors.
This leads to the use of 12 bands (B = 12) in the following analysis, because we are
most interested in the results for lagged RV, HAR, and LHAR.
In Table 3.4 we show the periods that correspond to the 12 frequency bands. These
values are a function of our choice of number of bands B and length of window w for
the one-sided filter. The lowest frequency band contains all fluctuations with periods
higher than roughly 2 months. We will not be able to analyze frequency dependence
at lower frequencies such as business cycle frequencies. Thus, the following analysis
will detect frequency dependence at higher frequencies, such as weekly and monthly
periods.
Figure 3.2 shows four of the extracted frequency band components D b in the
time domain. The frequency components are for 12 bands and use the LHAR conditional variances estimated with realized variances. The lowest frequency component
captures the persistent component of realized variance. All frequency components
capture some of the erratic behavior of variance at the beginning of the financial
crisis around July 2007.
In contrast to the standard band spectral regression, the one-sided filtering approach does not guarantee orthogonal frequency components. Table 3.5 shows the
correlation between the extracted components in the time domain. Most entries in
the correlation matrix are low, but for neighboring frequency bands the correlation
can be substantial. The correlation of neighboring frequency bands is particularly
strong for the high frequencies. For example, the correlation of the two highest
frequency components is 0.48. The correlations between high and low frequency
components are close to zero. This means that the association of the components D b
with a certain frequency is very rough, in particular at the high frequencies.
Figure 3.3 shows the coefficients of the band spectral regression (3.15) with 12
Table 3.3. Information criteria for different numbers of frequency bands B and conditional variance proxies as regressors. Conditional variances constructed using realized variance. The lowest value of AIC and BIC for each regressor is marked with an asterisk.

            RV                       Lag RV                   HAR                      LHAR
B      AIC        BIC         AIC        BIC         AIC        BIC         AIC        BIC
1   −20749.14  −20730.76   −20759.45  −20741.07*  −20756.19  −20737.80*  −20760.05  −20741.67
2   −20792.95  −20768.44   −20765.19  −20740.68   −20754.95  −20730.44   −20767.01  −20742.00
3   −20780.87  −20750.23   −20767.49  −20736.86   −20760.45  −20729.81   −20774.46  −20743.83
4   −20805.32  −20768.55*  −20773.02  −20736.25   −20770.81  −20734.05   −20828.74  −20791.98
5   −20797.88  −20754.99   −20767.34  −20724.45   −20763.98  −20721.09   −20802.66  −20759.77
6   −20798.36  −20749.33   −20767.68  −20718.66   −20760.22  −20711.20   −20825.70  −20776.68
7   −20789.51  −20734.36   −20774.92  −20719.77   −20767.87  −20712.72   −20828.40  −20773.25
8   −20805.74  −20744.46   −20769.84  −20708.57   −20766.18  −20704.90   −20880.98  −20819.71
9   −20796.69  −20729.28   −20789.25  −20721.85   −20776.97  −20709.57   −20862.95  −20795.55
10  −20816.81  −20743.27   −20761.30  −20687.77   −20762.08  −20688.55   −20866.68  −20793.16
11  −20791.45  −20711.79   −20770.76  −20691.10   −20772.98  −20693.32   −20884.46  −20804.80
12  −20816.71  −20730.92   −20807.20* −20721.41   −20812.34* −20726.56   −20932.66* −20846.88*
13  −20810.84  −20718.92   −20795.25  −20703.34   −20796.03  −20704.12   −20884.67  −20792.75
14  −20833.40* −20735.35   −20781.53  −20683.49   −20787.39  −20689.36   −20893.36  −20795.33
15  −20830.95  −20726.78   −20793.04  −20688.87   −20787.44  −20683.27   −20899.70  −20795.53
16  −20823.09  −20712.79   −20784.82  −20674.52   −20785.16  −20674.86   −20906.75  −20796.46
Figure 3.2. Time series of frequency components D_b in time domain from band spectral regression with 12 frequency bands obtained from one-sided filter applied to LHAR with realized variances. From top to bottom the plot shows the lowest, second lowest, second highest, and highest frequency band.
Table 3.4. Periods (in trading days) included in each of the 12 frequency bands.

Band   Highest   Lowest
1      ∞         48.65
2      47.37     24.00
3      23.68     15.93
4      15.79     11.92
5      11.84     9.52
6      9.47      7.93
7      7.89      6.82
8      6.79      5.98
9      5.96      5.33
10     5.31      4.80
11     4.79      4.37
12     4.36      2.00
Table 3.5. Correlations for time series of frequency components from one-sided filter applied
to LHAR with realized variance. The frequency components are ordered from lowest to highest
frequency, i.e., D 1 corresponds to the band with the lowest, and D 12 to the band with the
highest frequencies. Correlations are calculated from 3386 observations.
      D1    D2    D3    D4    D5    D6    D7    D8    D9   D10   D11   D12
D 1 1.00 0.13 0.06 0.05 0.02 0.01 -0.02 0.00 0.02 -0.00 -0.00 0.01
D 2 0.13 1.00 -0.03 -0.03 -0.02 -0.01 0.01 0.01 0.01 -0.01 -0.00 0.01
D 3 0.06 -0.03 1.00 0.04 0.02 0.03 0.03 0.04 0.04 0.02 0.03 0.03
D 4 0.05 -0.03 0.04 1.00 0.04 0.03 0.02 0.08 0.09 0.06 0.09 0.09
D 5 0.02 -0.02 0.02 0.04 1.00 0.14 0.11 0.15 0.11 0.05 0.07 0.08
D 6 0.01 -0.01 0.03 0.03 0.14 1.00 0.16 0.17 0.11 0.03 0.06 0.07
D 7 -0.02 0.01 0.03 0.02 0.11 0.16 1.00 0.28 0.09 0.03 0.09 0.07
D 8 0.00 0.01 0.04 0.08 0.15 0.17 0.28 1.00 -0.06 -0.06 -0.01 -0.00
D 9 0.02 0.01 0.04 0.09 0.11 0.11 0.09 -0.06 1.00 0.06 0.14 0.11
D 10 -0.00 -0.01 0.02 0.06 0.05 0.03 0.03 -0.06 0.06 1.00 0.40 0.31
D 11 -0.00 -0.00 0.03 0.09 0.07 0.06 0.09 -0.01 0.14 0.40 1.00 0.38
D 12 0.01 0.01 0.03 0.09 0.08 0.07 0.07 -0.00 0.11 0.31 0.38 1.00
bands for the different regressors. For RV, where contemporaneous leverage and
volatility feedback effects affect the coefficients, the coefficients become negative for
periods shorter than 8 days. For the other three regressors, the majority of coefficients
are positive. Coefficients for lagged RV and HAR are very similar, with a pronounced
negative coefficient at frequency band 10. For LHAR conditional variance we see
the strongest negative coefficients around the weekly period. The results for realized
kernels in Figure 3.4 show no qualitative differences.
Based on the band spectral regression we test for frequency dependence by testing
Figure 3.3. Coefficients from band spectral regression with 12 bands. Regressors are RV, lagged RV, HAR, and LHAR. Red dotted lines show 95% confidence intervals.
for equal parameters. The test is implemented as a robust Wald test with Newey-West covariance matrix. The coefficient β1 associated with the lowest frequency component is excluded from the test, such that we test H02: β2 = β3 = ··· = β12. We exclude β1 for robustness, because the lowest frequency component is very persistent (see Figure 3.2). Clearly, rejection of H02 implies rejection of the more restrictive null hypothesis that all frequency bands have equal coefficients. However, by testing H02 instead of H0, we sacrifice power.
Table 3.6 shows adjusted R 2 for the band spectral regressions, Wald test statistics,
Figure 3.4. Coefficients from band spectral regression with 12 bands for realized kernels. Regressors are RK, lagged RK, HAR, and LHAR. Red dotted lines show 95% confidence intervals.
and associated p-values. The null hypothesis of equal parameters (H02 ) is strongly
rejected for all regressors. Thus, the linear regression model is not a good approximation for the risk-return relation over the full sample. Adjusted R² is higher for LHAR than for the other regressors. This is in line with Corsi and Renò (2012),
who also find that lagged leverage effects are very important in the construction of
the conditional variance when used in the risk-return relation.
Table 3.6. Test for no frequency dependence. The table shows adjusted R² from band spectral regression with one-sided filter, and the Wald statistic W, and p-value from test for equal parameters across frequency bands 2 to 12, H02: β2 = β3 = ··· = β12. The test is based on Newey-West covariance matrix with 44 lags. Sample size is 3386.

                 Realized Variance              Realized Kernel
            adj. R²    W        p-value    adj. R²    W        p-value
RV/RK       1.913%     4.101    0.000      2.993%     9.947    0.000
Lag RV/RK   1.794%     3.174    0.000      2.383%     4.871    0.000
HAR         2.147%     4.725    0.000      3.550%     6.894    0.000
LHAR        5.563%     19.759   0.000      5.079%     7.405    0.000
3.3.1 Frequency Dependence during the Financial Crisis
Stock markets have experienced a sharp rise in variance after the beginning of the
financial crisis in mid-2007. The effects are very pronounced in the frequency components of the realized variance series in Figure 3.2. Both low and high frequency
components show more erratic behavior after the start of the financial crisis. To analyze the frequency dependence during the period with increased variance we split the
sample into the two subsamples 1995/01/03–2007/07/31 and 2007/08/01–2012/07/31,
such that the first subsample stops before the financial crisis starts. The subsamples
are labeled pre-crisis and crisis, respectively.
Table 3.7 shows estimation results for the linear risk-return model on the two
subsamples. With contemporaneous variance the estimated coefficient is positive
in the first, and negative in the second subsample. This suggests that leverage and
feedback effects are more important during the crisis. For lagged RV, HAR, and LHAR,
the coefficient estimate is always positive, but much higher for the pre-crisis sample.
During the crisis the risk-return relation is still positive and statistically significant
for lagged RV, HAR, and LHAR.
The coefficient estimates of the band spectral regression for HAR and LHAR
variances for the pre-crisis and crisis sample are shown in Figure 3.5. In the pre-crisis
sample there is no statistically significant evidence of negative dependence at any
frequency. In the crisis sample, however, we get strong negative dependence around
the one week period. Two frequency bands for HAR and three frequency bands for
LHAR have statistically significant negative coefficients.
Tests for no frequency dependence are shown in Table 3.8. The null is rejected
for all regressors in both subsamples. The test statistic is, however, much larger in
the crisis than pre-crisis, i.e., the evidence against the null hypothesis is stronger in
the second subsample. The band spectral regression has a higher adjusted R 2 for the
crisis sample than for the pre-crisis sample for all regressors.
These findings suggest that during the financial crisis and its aftermath the short-lived volatility fluctuations, with periods of around one week and below, had a distinctly
Table 3.7. Subsample estimation results for linear risk-return regression.

(a) RV
                 1995/01/03–2007/07/31                   2007/08/01–2012/07/31
           RV        Lag RV     HAR        LHAR        RV        Lag RV     HAR        LHAR
μ          0.000     0.000*     −0.001***  −0.001***   0.000     0.000***   0.000      0.000
           (0.000)   (0.000)    (0.000)    (0.000)     (0.000)   (0.000)    (0.000)    (0.000)
γ          0.793     4.410***   6.163**    12.404***   −0.712    1.890***   1.860***   2.007***
           (2.589)   (1.333)    (2.828)    (3.765)     (1.486)   (0.489)    (0.141)    (0.325)
Adj. R²    −0.002%   0.380%     0.317%     2.120%      −0.019%   0.192%     0.104%     0.230%
Num. obs.  2283      2283       2283       2283        1418      1418       1418       1418

(b) RK
           RK        Lag RK     HAR        LHAR        RK        Lag RK     HAR        LHAR
μ          0.000     0.000*     −0.001***  −0.001***   0.000     0.000***   0.000      0.000
           (0.000)   (0.000)    (0.000)    (0.000)     (0.000)   (0.000)    (0.000)    (0.000)
γ          1.390     4.453**    7.086**    10.586***   −1.574    2.382***   2.378***   2.538***
           (2.793)   (1.754)    (3.231)    (3.297)     (1.592)   (0.517)    (0.233)    (0.421)
Adj. R²    −0.004%   0.296%     0.333%     1.722%      0.105%    0.309%     0.205%     0.321%
Num. obs.  2283      2283       2283       2283        1418      1418       1418       1418

Note: Significance levels with Newey-West standard errors: ***: 1%, **: 5%, and *: 10%.
Figure 3.5. Coefficients from band spectral regression with 12 bands for the two subsamples 1995/01/03–2007/07/31 (pre-crisis) and 2007/08/01–2012/07/31 (crisis). Regressor is HAR or LHAR conditional variance from realized variances. Red dotted lines show 95% confidence intervals.
negative effect on returns. The linear regression model fails to capture this feature of
the risk-return relation that is clearly present after July 2007.
Table 3.8. Test for no frequency dependence for subsamples. The table shows adjusted R² from band spectral regression with one-sided filter, and the Wald statistic W, and p-value from test for equal parameters across frequency bands 2 to 12, H02: β2 = β3 = ··· = β12. The test is based on Newey-West covariance matrix with 44 lags.

           1995/01/03–2007/07/31         2007/08/01–2012/07/31
           adj. R²   W       p-value     adj. R²    W        p-value
RV         1.367%    6.486   0.000       3.090%     9.044    0.000
Lag RV     1.445%    5.607   0.000       4.084%     4.488    0.000
HAR        1.625%    3.636   0.000       4.515%     7.364    0.000
LHAR       1.326%    2.839   0.001       11.978%    52.687   0.000
RK         1.172%    6.278   0.000       5.982%     18.878   0.000
Lag RK     1.255%    5.095   0.001       5.943%     5.146    0.000
HAR        1.596%    3.701   0.000       8.143%     10.484   0.000
LHAR       1.261%    2.671   0.002       11.278%    18.450   0.000
3.4 Frequency-Dependent Real-Time Forecasts
We have documented strong in-sample evidence of frequency dependence in the
risk-return relation. In this section we conduct an out-of-sample forecasting experiment to investigate whether real-time forecasts of returns can be improved by allowing for
frequency dependence. The one-sided filter described in Section 3.3 makes it possible
to extract the frequency components in real-time and thus allows us to construct the
forecasts from the band spectral regression.
As a benchmark, we obtain conditional mean forecasts from the linear regression model by

$$\hat{r}_{t+1} = \hat{\mu} + \hat{\gamma}\hat{\sigma}^2_{t+1}, \quad (3.17)$$

where the parameter estimates and the conditional variance $\hat{\sigma}^2_{t+1}$ are based on data up to time t. A rolling window of length R is used for parameter estimation. Forecasts from the band spectral regression with B bands are calculated as

$$\hat{r}^F_{t+1} = \hat{\mu} + \sum_{b=1}^{B} \hat{\beta}_b D_{b,t}, \quad (3.18)$$

where $D_{b,t}$, the B frequency components of $\hat{\sigma}^2_{t+1}$, are constructed using the one-sided filter based on observations up to time t as described in Section 3.3.
The forecasts from the band spectral regression in Equation (3.18) are based
on B + 1 estimated parameters. Regression models with multiple regressors do not
work well for return prediction, even with a modest number of regressors (see, e.g.,
Welch and Goyal, 2008). Rapach et al. (2010) have shown that forecast combination
can be used to improve forecasting accuracy compared to a multivariate regression
approach for stock returns.
3.4.1 Complete Subset Regression
Elliott et al. (2013) propose the complete subset regression (CSR) as a flexible forecast combination approach. Complete subset regression performs equal-weighted
forecast combination over all possible regression models that contain k out of the B
available regressors. Varying k allows the econometrician to change properties of the
forecast by trading off bias and variance. For small k the coefficients contain a strong
omitted variable bias, while for large k the predictor will suffer from high estimation
variance.
The parameter estimates from CSR are computed as

$$\hat{\beta}_{k,B} = \frac{1}{n_{k,B}} \sum_{i=1}^{n_{k,B}} \hat{\beta}_{S_i}, \quad (3.19)$$

where $n_{k,B} = B!/((B-k)!\,k!)$. The parameter vectors $\hat{\beta}_{S_i}$ ($i = 1, \dots, n_{k,B}$) contain all
possible parameter estimates based on k of the B variables, i.e., for each i a different
combination of k variables is included in the model. For the variables that are not
included in model i , the entries in β̂Si are zero. The coefficients for CSR are obtained
as the average over all n k,B possible combinations of k out of B variables. The total
number of models, n_{k,B}, can be quite large. In this application we have B = 12,
because each frequency component is a potential predictor. The number of models
is highest for k = 6, which results in n 6,12 = 924 models.
The forecasting performance of the CSR depends crucially on the choice of k. For
k = B , the complete subset regression is equivalent to multiple regression estimated
by least squares including all variables at once. For k = 1, CSR corresponds to forecast
combination of univariate regression for each predictor variable, equivalent to the
approach of Rapach et al. (2010).
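Averaging the coefficient vectors in (3.19) is equivalent to averaging the forecasts of all k-variable models, which is how the sketch below implements it. The helper is our own; each subset model here includes an intercept, matching the μ̂ in the forecasting regressions.

```python
import numpy as np
from itertools import combinations

def csr_forecast(X, y, x_new, k):
    """Equal-weighted forecast over all k-of-B subset regressions (Eq. (3.19))."""
    T, B = X.shape
    preds = []
    for subset in combinations(range(B), k):
        idx = list(subset)
        Z = np.column_stack([np.ones(T), X[:, idx]])   # intercept + k regressors
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        preds.append(coef[0] + float(x_new[idx] @ coef[1:]))
    return float(np.mean(preds))   # average over n_{k,B} = B!/((B-k)!k!) models
```

For k = B this collapses to the full multiple regression; for k = 1 it is the forecast combination of univariate regressions, as in Rapach et al. (2010).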
We follow Elliott et al. (2013) in choosing k by minimizing an estimate of the asymptotic mean squared error (AMSE). The AMSE can be derived under an IID assumption and the local model β = T^{-1/2} b σ_u, where σ_u is the standard deviation of the innovations. The parameter b controls the strength of the predictors and thus determines which choice of k is optimal. Let Σ_X be the covariance matrix of the predictor variables. Elliott et al. (2013) show (Theorem 2) that the AMSE, scaled by σ_u^{-2}, can be expressed as

\sigma_u^{-2} MSE(k) \approx \sum_{j=1}^{B} \eta_j + b' (\Lambda_{k,B} - I_B)' \Sigma_X (\Lambda_{k,B} - I_B) b, \qquad (3.20)
where η_j is the j-th eigenvalue of \Lambda_{k,B}' \Sigma_X \Lambda_{k,B} \Sigma_X^{-1}, and

\Lambda_{k,B} = \frac{1}{n_{k,B}} \sum_{i=1}^{n_{k,B}} (S_i' \Sigma_X S_i)^{-1} (S_i' \Sigma_X), \qquad (3.21)
where S_i is the selection matrix for the i-th combination, with ones in the positions of the included variables and zeros everywhere else. The AMSE, as a function of k, still depends on the parameter b.
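A minimal sketch of how the scaled AMSE in (3.20) can be evaluated for a given k, assuming estimates of Σ_X and b are available (hypothetical Python code; S_i is written here in B × k form and the subset term is embedded back into B dimensions so that the matrix products conform):

```python
import numpy as np
from itertools import combinations

def scaled_amse(Sigma_X, b, k):
    """Scaled AMSE of (3.20): eigenvalue (variance) term plus bias term,
    with Lambda_{k,B} built from all k-variable subsets as in (3.21)."""
    B = Sigma_X.shape[0]
    subsets = list(combinations(range(B), k))
    Lam = np.zeros((B, B))
    for subset in subsets:
        S_i = np.zeros((B, k))
        S_i[list(subset), np.arange(k)] = 1.0       # selection matrix S_i
        inner = np.linalg.inv(S_i.T @ Sigma_X @ S_i)
        Lam += S_i @ inner @ (S_i.T @ Sigma_X)      # embed back into B dims
    Lam /= len(subsets)
    # eta_j: eigenvalues of Lambda' Sigma_X Lambda Sigma_X^{-1}
    eta = np.linalg.eigvals(Lam.T @ Sigma_X @ Lam @ np.linalg.inv(Sigma_X))
    I_B = np.eye(B)
    bias = b @ (Lam - I_B).T @ Sigma_X @ (Lam - I_B) @ b
    return eta.real.sum() + bias
```

Choosing k then amounts to evaluating this function for k = 1, …, B and picking the minimizer; for k = B the matrix Λ_{k,B} equals the identity, the bias term vanishes, and the function returns exactly B.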
3.4. Frequency-Dependent Real-Time Forecasts
[Figure 3.6: four panels plotting AMSE against k = 0, …, 6: (a) AMSE for HAR Realized Variance, (b) AMSE for LHAR Realized Variance, (c) AMSE for HAR Realized Kernel, (d) AMSE for LHAR Realized Kernel.]

Figure 3.6. Asymptotic mean squared error (AMSE) curves for the complete subset regression with B = 12, based on the first 440 observations. The curves correspond to b'b = 1, 2, 3, in this order from lowest to highest. In each plot the lowest asymptotic MSE on each curve is marked with a circle.
Chapter 3. Frequency Dependence in the Risk-Return Relation
Figure 3.6 plots the AMSE as a function of k for different values of b. The first R = 440 observations are used to estimate Σ_X and σ_u in (3.20), i.e., we base the selection of k on the first estimation window and not on observations in the out-of-sample period. For the different b, the optimal k varies from k = 0 to k = 4 for all regressors, with the optima most frequently being at k = 1, k = 2, and k = 3. The curves are all very flat around their minima, such that the forecasting performance should not be very sensitive to the choice of k. This leads us to consider k = 1, k = 2, and k = 3 for the forecasting.
Besides providing guidance for choosing k, some further insights into the expected forecasting performance can be gained from Figure 3.6. For k larger than 4 the AMSE monotonically increases in all cases. Using the unrestricted band spectral regression corresponds to k = 12, which is far from the optimum for all regressors. Therefore, we expect the MSE for the unrestricted band spectral regression to be much higher than for smaller k.
3.4.2 Results
We evaluate the out-of-sample performance by root mean squared error (RMSE), out-of-sample R², and model confidence set p-values. The out-of-sample R² (OOS-R²) for model i is given by

\text{OOS-}R_i^2 = 1 - \frac{\sum_{t=1}^{P} (\hat{r}_{t,i} - r_t)^2}{\sum_{t=1}^{P} (\hat{a}_t - r_t)^2}, \qquad (3.22)

where \hat{r}_{t,i} is the forecast from model i, and \hat{a}_t is the forecast from the constant model, i.e., the historical average (see Campbell and Thompson, 2008). To test whether differences in RMSE among the models are statistically significant we report p-values obtained by the model confidence set approach of Hansen et al. (2011). We apply the model confidence set to each group of models that use the same regressors and are based on the same sample period.²
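The evaluation metric in (3.22) is straightforward to compute; a minimal sketch (hypothetical Python code, with the historical-average benchmark passed in explicitly):

```python
import numpy as np

def oos_r2(forecasts, benchmark, realized):
    """Out-of-sample R^2 of (3.22): one minus the ratio of the model's sum
    of squared forecast errors to that of the benchmark forecasts."""
    forecasts, benchmark, realized = map(np.asarray, (forecasts, benchmark, realized))
    sse_model = np.sum((forecasts - realized) ** 2)
    sse_bench = np.sum((benchmark - realized) ** 2)
    return 1.0 - sse_model / sse_bench
```

A positive value means the model beats the historical average in mean squared error over the out-of-sample period; the benchmark series \hat{a}_t would here be, e.g., the rolling-window mean of past returns.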
The forecasting results in Table 3.9 show that not all in-sample results are confirmed by the out-of-sample results. The linear model has a negative OOS-R² for all regressors and subsamples. The unrestricted band spectral regression leads to dismal forecasting performance, particularly in the crisis sample. This poor performance was, however, already expected from the AMSE estimates in Figure 3.6 and can be explained by the overwhelming estimation variance. Additionally, our subsample analysis indicates that the band spectral coefficients change over time, which further deteriorates the forecasting performance.
² The MulCom package version 3.00 for the Ox programming language (see Doornik, 2007) is used to construct the model confidence sets. The MulCom package is available from http://mit.econ.au.dk/vip_htm/alunde/MULCOM/MULCOM.HTM. The following settings are used for the model confidence set construction in MulCom: 9999 bootstrap replications with block bootstrapping, block length 44, the range test for equal predictive ability δ_{R,M}, and the range elimination rule e_{R,M}.
Table 3.9. Results from out-of-sample forecasting for realized variances (RV) and realized kernels (RK). All models are estimated using a rolling window with R = 440 observations. Out-of-sample period 2000/11/09–2012/07/31 (2947 observations). We consider the linear model (Linear), band spectral regression with 12 bands (BSR), and complete subset regression (CSR) with k = 1, 2, 3. Root mean squared errors (RMSE) are multiplied by 100. OOS-R² is the out-of-sample R². MCS-p are p-values from the model confidence set. The model confidence set is calculated for each group of 5 models that use the same regressors and sample.
                  Full Sample                 Pre-Crisis                  Crisis
            RMSE   OOS-R²   MCS-p      RMSE   OOS-R²   MCS-p      RMSE    OOS-R²   MCS-p

(a) HAR RV
Linear      1.154   -4.852%  0.307     0.969   -0.100%  0.951     1.363    -8.328%  0.314
BSR         1.272  -27.322%  0.090     0.984   -3.281%  0.021     1.576   -44.908%  0.109
CSR k = 1   1.131   -0.786%  1.000     0.968    0.009%  1.000     1.319    -1.368%  1.000
CSR k = 2   1.136   -1.693%  0.183     0.968   -0.031%  0.951     1.328    -2.908%  0.191
CSR k = 3   1.142   -2.701%  0.111     0.791   -0.123%  0.765     1.339    -4.587%  0.121

(b) LHAR RV
Linear      1.140   -2.304%  0.063     0.973   -0.946%  0.471     1.331    -3.297%  0.095
BSR         1.222  -17.508%  0.033     0.986   -3.802%  0.001     1.479   -27.534%  0.060
CSR k = 1   1.125    0.288%  0.835     0.968   -0.030%  1.000     1.306     0.520%  0.740
CSR k = 2   1.125    0.390%  1.000     0.969   -0.122%  0.471     1.305     0.764%  1.000
CSR k = 3   1.125    0.307%  0.835     0.970   -0.273%  0.350     1.305     0.731%  0.954

(c) HAR RK
Linear      1.155   -5.031%  0.316     0.969   -0.149%  0.961     1.365    -8.601%  0.328
BSR         1.390  -52.094%  0.061     0.982   -2.764%  0.016     1.796   -88.177%  0.065
CSR k = 1   1.132   -0.985%  1.000     0.968    0.034%  1.000     1.321    -1.731%  1.000
CSR k = 2   1.138   -2.053%  0.187     0.968    0.019%  0.961     1.333    -3.568%  0.185
CSR k = 3   1.146   -3.444%  0.095     0.968   -0.048%  0.833     1.348    -5.927%  0.099

(d) LHAR RK
Linear      1.140   -2.280%  0.046     0.972   -0.860%  0.481     1.331    -3.319%  0.075
BSR         1.197  -12.858%  0.010     0.982   -2.961%  0.003     1.435   -20.098%  0.019
CSR k = 1   1.126    0.211%  0.887     0.968    0.023%  1.000     1.307     0.347%  0.891
CSR k = 2   1.126    0.258%  1.000     0.968   -0.016%  0.716     1.307     0.457%  1.000
CSR k = 3   1.126    0.142%  0.797     0.969   -0.114%  0.480     1.307     0.328%  0.891
The forecasting performance is substantially improved when complete subset regression is used to produce forecasts from the band spectral regression. Even though the performance is much better than for the unrestricted BSR, for the full sample CSR only gives a positive OOS-R² for the LHAR conditional variance. Complete subset regression with the HAR model, which does not include leverage effects in the conditional variance, does not produce a positive OOS-R² over the full sample. The LHAR model provides similar performance using realized variances or realized kernels. For LHAR using realized kernels, CSR significantly outperforms the linear model and BSR according to the model confidence set p-values, as both the linear model and the band spectral regression are excluded from the 5% model confidence set for the full sample.
The performance differences before and after July 2007 are striking. While there is little evidence of forecast improvements from frequency dependence in the pre-crisis subsample, the CSR forecasts based on the LHAR model work very well in the crisis subsample. Thus, taking into account the different effects of variance movements at different frequencies does improve forecasting during the crisis. These findings confirm our in-sample results that the linear model is not well specified to describe the risk-return relation in the second subsample.
3.5 Conclusion
In this chapter we document strong evidence of frequency dependence in the relation between the conditional mean and conditional variance of daily returns on the S&P 500. Our analysis is based on a band spectral regression approach with one-sided filtering, which is robust to contemporaneous leverage and feedback effects and allows us to obtain real-time forecasts. The findings provide further evidence against a linear risk-return relation. After July 2007 there is a distinct negative relation at high frequencies with periods of around one week and less, which is not statistically significant before the financial crisis. Taking this frequency dependence into account can improve forecasting performance. Our results suggest that estimates of the risk-return relation from linear models are sensitive both to the sampling frequency of the data and to the state of the financial market.
As a consequence of the data sample used in this chapter, our analysis focuses on fluctuations with monthly and weekly periods. In order to analyze frequency dependence at lower frequencies, such as business cycle frequencies, different data must be used, for example, monthly returns with variances constructed from daily returns, for which time series with a much longer time span are available. Due to the focus on high frequencies, our results are complementary to the literature on asset pricing with different risk components, such as Adrian and Rosenberg (2008), which uses monthly returns.
We have largely refrained from structural interpretations of the negative risk-return relation found at certain frequencies. Volatility feedback effects are typically assumed to have an instantaneous impact, i.e., when expected variance increases the price drops immediately. If this is not true, and such adjustments instead take time in the market, then the volatility feedback effect can explain the negative risk-return relation that we find. Our findings are also consistent with the empirical evidence from the literature on bear and bull markets, e.g., Maheu, McCurdy, and Song (2012), where high volatility is typically associated with bear markets, i.e., periods with declining prices. However, such bull and bear markets are typically identified as market regimes with long duration, lasting several months or years, while we have documented non-linear effects with much shorter duration.
3.6 References
Adrian, T., Rosenberg, J., 2008. Stock returns and volatility: Pricing the short-run and
long-run components of market risk. The Journal of Finance 63 (6), 2997–3030.
Andersen, T. G., Bollerslev, T., Diebold, F. X., Labys, P., 2003. Modeling and forecasting
realized volatility. Econometrica 71 (2), 579–625.
Ang, A., Hodrick, R. J., Xing, Y., Zhang, X., 2006. The cross-section of volatility and
expected returns. The Journal of Finance 61 (1), 259–299.
Ashley, R., Verbrugge, R. J., 2008. Frequency dependence in regression model coefficients: An alternative approach for modeling nonlinear dynamic relationships in
time series. Econometric Reviews 28 (1-3), 4–20.
Bali, T. G., Peng, L., 2006. Is there a risk-return trade-off? Evidence from high-frequency data. Journal of Applied Econometrics 21 (8), 1169–1198.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., Shephard, N., 2008. Designing
realized kernels to measure the ex post variation of equity prices in the presence of
noise. Econometrica 76 (6), 1481–1536.
Barndorff-Nielsen, O. E., Hansen, P. R., Lunde, A., Shephard, N., 2009. Realized kernels
in practice: Trades and quotes. The Econometrics Journal 12 (3), C1–C32.
Black, F., 1976. Studies of stock price volatility changes. In: Proceedings of the 1976
Meetings of the American Statistical Association, Business and Economics Statistics
Section. pp. 177–181.
Bollerslev, T., Litvinova, J., Tauchen, G., 2006. Leverage and volatility feedback effects
in high-frequency data. Journal of Financial Econometrics 4 (3), 353–384.
Bollerslev, T., Osterrieder, D., Sizova, N., Tauchen, G., 2013. Risk and return: Long-run
relations, fractional cointegration, and return predictability. Journal of Financial
Economics 108 (2), 409–424.
Bollerslev, T., Zhou, H., 2006. Volatility puzzles: a simple framework for gauging
return-volatility regressions. Journal of Econometrics 131 (1), 123–150.
Campbell, J. Y., Hentschel, L., 1992. No news is good news: An asymmetric model of
changing volatility in stock returns. Journal of Financial Economics 31 (3), 281–318.
Campbell, J. Y., Thompson, S. B., 2008. Predicting excess stock returns out of sample:
Can anything beat the historical average? Review of Financial Studies 21 (4), 1509–
1531.
Christensen, B. J., Dahl, C. M., Iglesias, E. M., 2012. Semiparametric inference in a GARCH-in-mean model. Journal of Econometrics 167 (2), 458–472.
Christensen, B. J., Nielsen, M. Ø., 2007. The effect of long memory in volatility on
stock market fluctuations. The Review of Economics and Statistics 89 (4), 684–700.
Christie, A. A., 1982. The stochastic behavior of common stock variances: Value,
leverage and interest rate effects. Journal of Financial Economics 10 (4), 407–432.
Corsi, F., 2009. A simple approximate long-memory model of realized volatility. Journal of Financial Econometrics 7 (2), 174–196.
Corsi, F., Renò, R., 2012. Discrete-time volatility forecasting with persistent leverage
effect and the link with continuous-time volatility modeling. Journal of Business &
Economic Statistics 30 (3), 368–380.
Doornik, J. A., 2007. Object-Oriented Matrix Programming Using Ox, 3rd ed. Timberlake Consultants Press, London, and Oxford: www.doornik.com.
Elliott, G., Gargano, A., Timmermann, A., 2013. Complete subset regressions. Journal
of Econometrics 177 (2), 357–373.
Engle, R. F., 1974. Band spectrum regression. International Economic Review 15 (1),
1–11.
Engle, R. F., Lilien, D. M., Robins, R. P., 1987. Estimating time varying risk premia in the term structure: The ARCH-M model. Econometrica 55 (2), 391–407.
French, K. R., Schwert, G. W., Stambaugh, R. F., 1987. Expected stock returns and
volatility. Journal of Financial Economics 19 (1), 3–29.
Ghysels, E., Santa-Clara, P., Valkanov, R., 2005. There is a risk-return trade-off after all.
Journal of Financial Economics 76 (3), 509–548.
Glosten, L. R., Jagannathan, R., Runkle, D. E., 1993. On the relation between the expected value and the volatility of the nominal excess return on stocks. The Journal of Finance 48 (5), 1779–1801.
Hansen, P. R., Lunde, A., 2006. Realized variance and market microstructure noise.
Journal of Business & Economic Statistics 24 (2), 127–161.
Hansen, P. R., Lunde, A., Nason, J. M., 2011. The model confidence set. Econometrica 79 (2), 453–497.
Harrison, P., Zhang, H. H., 1999. An investigation of the risk and return relation at
long horizons. Review of Economics and Statistics 81 (3), 399–408.
Harvey, A. C., 1978. Linear regression in the frequency domain. International Economic Review 19 (2), 507–512.
Lettau, M., Ludvigson, S., 2010. Measuring and modeling variation in the risk-return tradeoff. In: Ait-Sahalia, Y., Hansen, L.-P. (Eds.), Handbook of Financial Econometrics. Vol. 1. Elsevier Science B.V., North Holland, Amsterdam, pp. 617–690.
Maheu, J. M., McCurdy, T. H., Song, Y., 2012. Components of bull and bear markets:
bull corrections and bear rallies. Journal of Business & Economic Statistics 30 (3),
391–403.
Merton, R. C., 1973. An intertemporal capital asset pricing model. Econometrica,
867–887.
Merton, R. C., 1980. On estimating the expected return on the market: An exploratory
investigation. Journal of Financial Economics 8 (4), 323–361.
Murphy, K. M., Topel, R. H., 2002. Estimation and inference in two-step econometric
models. Journal of Business & Economic Statistics 20 (1), 88–97.
Nelson, D. B., 1991. Conditional heteroskedasticity in asset returns: A new approach.
Econometrica, 347–370.
Newey, W. K., West, K. D., 1987. A simple, positive semi-definite, heteroskedasticity
and autocorrelation consistent covariance matrix. Econometrica 55 (3), 703–708.
Pagan, A., 1984. Econometric issues in the analysis of regressions with generated
regressors. International Economic Review, 221–247.
Pagan, A., Ullah, A., 1988. The econometric analysis of models with risk terms. Journal
of Applied Econometrics 3 (2), 87–105.
Rapach, D. E., Strauss, J. K., Zhou, G., 2010. Out-of-sample equity premium prediction:
Combination forecasts and links to the real economy. Review of Financial Studies
23 (2), 821–862.
Rossi, A., Timmermann, A., 2011. What is the shape of the risk-return relation? Working paper, UCSD.
Stock, J. H., Watson, M. W., 1999. Business cycle fluctuations in US macroeconomic time series. In: Taylor, J., Woodford, M. (Eds.), Handbook of Macroeconomics. Vol. 1. Amsterdam: Elsevier, pp. 3–64.
Tan, H. B., Ashley, R., 1999. Detection and modeling of regression parameter variation across frequencies. Macroeconomic Dynamics 3 (1), 69–83.
Welch, I., Goyal, A., 2008. A comprehensive look at the empirical performance of
equity premium prediction. Review of Financial Studies 21 (4), 1455–1508.
Wu, G., 2001. The determinants of asymmetric volatility. Review of Financial Studies
14 (3), 837–859.
Yu, J., 2005. On leverage in a stochastic volatility model. Journal of Econometrics
127 (2), 165–178.
DEPARTMENT OF ECONOMICS AND BUSINESS
AARHUS UNIVERSITY
SCHOOL OF BUSINESS AND SOCIAL SCIENCES
www.econ.au.dk
PhD Theses since 1 July 2011
2011-4  Anders Bredahl Kock: Forecasting and Oracle Efficient Econometrics
2011-5  Christian Bach: The Game of Risk
2011-6  Stefan Holst Bache: Quantile Regression: Three Econometric Studies
2011:12 Bisheng Du: Essays on Advance Demand Information, Prioritization and Real Options in Inventory Management
2011:13 Christian Gormsen Schmidt: Exploring the Barriers to Globalization
2011:16 Dewi Fitriasari: Analyses of Social and Environmental Reporting as a Practice of Accountability to Stakeholders
2011:22 Sanne Hiller: Essays on International Trade and Migration: Firm Behavior, Networks and Barriers to Trade
2012-1  Johannes Tang Kristensen: From Determinants of Low Birthweight to Factor-Based Macroeconomic Forecasting
2012-2  Karina Hjortshøj Kjeldsen: Routing and Scheduling in Liner Shipping
2012-3  Soheil Abginehchi: Essays on Inventory Control in Presence of Multiple Sourcing
2012-4  Zhenjiang Qin: Essays on Heterogeneous Beliefs, Public Information, and Asset Pricing
2012-5  Lasse Frisgaard Gunnersen: Income Redistribution Policies
2012-6  Miriam Wüst: Essays on early investments in child health
2012-7  Yukai Yang: Modelling Nonlinear Vector Economic Time Series
2012-8  Lene Kjærsgaard: Empirical Essays of Active Labor Market Policy on Employment
2012-9  Henrik Nørholm: Structured Retail Products and Return Predictability
2012-10 Signe Frederiksen: Empirical Essays on Placements in Outside Home Care
2012-11 Mateusz P. Dziubinski: Essays on Financial Econometrics and Derivatives Pricing
2012-12 Jens Riis Andersen: Option Games under Incomplete Information
2012-13 Margit Malmmose: The Role of Management Accounting in New Public Management Reforms: Implications in a Socio-Political Health Care Context
2012-14 Laurent Callot: Large Panels and High-dimensional VAR
2012-15 Christian Rix-Nielsen: Strategic Investment
2013-1  Kenneth Lykke Sørensen: Essays on Wage Determination
2013-2  Tue Rauff Lind Christensen: Network Design Problems with Piecewise Linear Cost Functions
2013-3  Dominyka Sakalauskaite: A Challenge for Experts: Auditors, Forensic Specialists and the Detection of Fraud
2013-4  Rune Bysted: Essays on Innovative Work Behavior
2013-5  Mikkel Nørlem Hermansen: Longer Human Lifespan and the Retirement Decision
2013-6  Jannie H.G. Kristoffersen: Empirical Essays on Economics of Education
2013-7  Mark Strøm Kristoffersen: Essays on Economic Policies over the Business Cycle
2013-8  Philipp Meinen: Essays on Firms in International Trade
2013-9  Cédric Gorinas: Essays on Marginalization and Integration of Immigrants and Young Criminals – A Labour Economics Perspective
2013-10 Ina Charlotte Jäkel: Product Quality, Trade Policy, and Voter Preferences: Essays on International Trade
2013-11 Anna Gerstrøm: World Disruption - How Bankers Reconstruct the Financial Crisis: Essays on Interpretation
2013-12 Paola Andrea Barrientos Quiroga: Essays on Development Economics
2013-13 Peter Bodnar: Essays on Warehouse Operations
2013-14 Rune Vammen Lesner: Essays on Determinants of Inequality
2013-15 Peter Arendorf Bache: Firms and International Trade
2013-16 Anders Laugesen: On Complementarities, Heterogeneous Firms, and International Trade
2013-17 Anders Bruun Jonassen: Regression Discontinuity Analyses of the Disincentive Effects of Increasing Social Assistance
2014-1  David Sloth Pedersen: A Journey into the Dark Arts of Quantitative Finance
2014-2  Martin Schultz-Nielsen: Optimal Corporate Investments and Capital Structure
2014-3  Lukas Bach: Routing and Scheduling Problems - Optimization using Exact and Heuristic Methods
2014-4  Tanja Groth: Regulatory impacts in relation to a renewable fuel CHP technology: A financial and socioeconomic analysis
2014-5  Niels Strange Hansen: Forecasting Based on Unobserved Variables
2014-6  Ritwik Banerjee: Economics of Misbehavior
2014-7  Christina Annette Gravert: Giving and Taking – Essays in Experimental Economics
2014-8  Astrid Hanghøj: Papers in purchasing and supply management: A capability-based perspective
2014-9  Nima Nonejad: Essays in Applied Bayesian Particle and Markov Chain Monte Carlo Techniques in Time Series Econometrics
2014-10 Tine L. Mundbjerg Eriksen: Essays on Bullying: an Economist's Perspective
2014-11 Sashka Dimova: Essays on Job Search Assistance
2014-12 Rasmus Tangsgaard Varneskov: Econometric Analysis of Volatility in Financial Additive Noise Models
2015-1  Anne Floor Brix: Estimation of Continuous Time Models Driven by Lévy Processes
2015-2  Kasper Vinther Olesen: Realizing Conditional Distributions and Coherence Across Financial Asset Classes
2015-3  Manuel Sebastian Lukas: Estimation and Model Specification for Econometric Forecasting
ISBN: 9788793195110