This post is the first part of a series of posts related to the last chapter of my Ph.D. thesis. It draws on a paper that you can find here (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2556747). It aims to explain the broad context in which I wrote this article, the underlying motivations of my research, as well as its main findings.
This paper addresses an important and fundamental question for investors: the allocation of total wealth among the different possible investment choices. This issue has preoccupied investors for centuries. Already in about the fourth century, a man named Rabbi Isaac bar Aha proposed the following asset allocation rule: “One should always divide his wealth into three parts: a third in land, a third in merchandise, and a third ready to hand” (DeMiguel et al., 2009a). Portfolio optimization aims to develop and implement strategies to make tactical asset allocations. At the centre of portfolio optimization, the “idea of diversification is strongly connected with portfolio theory” (Scherer, 2010). The objective of diversification is to subdivide risks so that all your eggs are not in the same basket. Diversification is a risk management technique that mixes a wide variety of asset classes within a portfolio (through stocks, bonds, commodities, real estate, but also via international diversification, for example) in order to minimize portfolio risk.
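To make the diversification effect concrete, here is a toy numerical sketch (my own illustration, not taken from the paper): with an equally weighted portfolio of n uncorrelated assets that each have the same volatility, portfolio volatility shrinks with the square root of n, because the cross-correlation terms in the portfolio variance vanish.

```python
# Toy illustration of diversification (not from the paper):
# the volatility of an equally weighted mix of n uncorrelated
# assets, each with identical volatility, shrinks as 1/sqrt(n).

def equal_weight_vol(sigma, n):
    """Volatility of a 1/n portfolio of n uncorrelated assets,
    each with individual volatility `sigma`."""
    weight = 1.0 / n
    # Cross terms vanish because the correlations are zero.
    variance = n * (weight ** 2) * (sigma ** 2)
    return variance ** 0.5

single = equal_weight_vol(0.20, 1)   # one asset: 20% volatility
basket = equal_weight_vol(0.20, 16)  # sixteen assets: 20% / 4 = 5%
```

With correlated assets the reduction is smaller but the same mechanism applies, which is why mixing asset classes with low cross-correlations is the goal.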
The first major paper on portfolio optimization and diversification is Markowitz (1952)’s seminal paper, one of the most important developments in finance. The theory popularised in his paper is known as modern portfolio theory and is based on the concept of mean-variance. Markowitz shows that the allocation of funds among the different possible investment choices depends on a trade-off between the expected return and the level of risk. A mean-variance optimal portfolio either maximizes the expected return for a given level of risk or minimizes the risk for a given expected return. This trade-off between risk and return was, in a sense, revolutionary for two main reasons (Kolm et al., 2014). First, Markowitz considered risk and return jointly based on a major principle, portfolio diversification. Portfolio risk indeed depends not only on the risk of its components but also on their cross-correlations. Prior to his work, classical financial analysis focused only on the returns of single investments, with the prevailing belief that one should invest in the assets offering the highest future returns. The second revolutionary aspect of Markowitz’s work is the formulation of an optimization problem. Among the large number of portfolios reaching a particular return (risk) objective, the investor should choose the portfolio with the smallest variance (highest return).
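The risk/return trade-off can be sketched with a minimal two-asset example (all numbers are hypothetical, chosen for illustration): with full investment, a target expected return pins down the weights, and the resulting portfolio variance measures the risk accepted to reach that return.

```python
# Sketch of the Markowitz risk/return trade-off for two assets
# (illustrative numbers, not from the paper). With full investment,
# a target return pins down the weights; the variance that results
# measures the risk accepted to reach that return.

def two_asset_portfolio(mu1, mu2, s1, s2, rho, target):
    """Weights and volatility of the two-asset portfolio with
    expected return `target` (short positions allowed)."""
    w1 = (target - mu2) / (mu1 - mu2)  # solves w1*mu1 + (1-w1)*mu2 = target
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return w1, w2, var ** 0.5

# Asking for more expected return forces the investor to accept more risk:
_, _, low_risk = two_asset_portfolio(0.05, 0.10, 0.10, 0.20, 0.2, target=0.06)
_, _, high_risk = two_asset_portfolio(0.05, 0.10, 0.10, 0.20, 0.2, target=0.09)
```

Varying the target return and plotting the resulting volatilities traces out the familiar efficient frontier.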
Nevertheless, the initial mean-variance framework of Markowitz (1952) is very challenging and presents several important shortcomings, one of which is the estimation of expected returns. They are indeed extremely hard to predict, and they are unstable because they depend on both idiosyncratic and systematic factors. A small example illustrates this statement. Regarding idiosyncratic factors, the success of a product (or service) of a company (and therefore its potential positive or negative effect on financial markets) is usually very difficult to predict. It depends on different factors such as the appropriateness, the quality and the usefulness of the product, or its adoption by customers, but also on competition, among others. Nevertheless, even if a product or service of a company is a great success, its share price does not necessarily evolve in line with the fundamentals of the company, due to systematic factors such as the burst of a market bubble, a global slowdown or monetary policy actions taken by central banks, for example. In the same vein, expected returns are difficult to predict, and several authors point out the high sensitivity of portfolio weights to the mean return estimates (among others, see Best and Grauer, 1991, 1992; Chan et al., 1999; Chopra and Ziemba, 1993; Clarke et al., 2006; Merton, 1980; Michaud, 1989 and Jagannathan and Ma, 2003). Errors in the estimation of expected returns create a significant deviation from the optimal portfolio. Even small reductions in the mean return of a given asset may shift the asset from a large long position to a large short one (Levy and Levy, 2014). In other words, the true expected values and the forecast ones differ in practice (i.e., estimation errors exist), which leads to a large deviation between the optimal investment strategy and the one recommended based on the estimated sample parameters.
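This sensitivity can be seen in the same toy two-asset setting (hypothetical numbers, my own illustration): when the two estimated means are close, the denominator of the weight formula is tiny, so a small estimation error swings a balanced allocation into an extreme levered long/short position.

```python
# Illustration (toy numbers): mean-variance weights are extremely
# sensitive to the estimated means. With full investment and a
# target return, the first asset's weight is
#   w1 = (target - mu2) / (mu1 - mu2),
# so a small error in mu1 can flip a balanced allocation into a
# large levered long/short position.

def weight_asset1(mu1, mu2, target):
    return (target - mu2) / (mu1 - mu2)

# Estimated means of 6% and 5%, targeting 5.5%: a 50/50 split.
balanced = weight_asset1(0.060, 0.050, target=0.055)
# Revise mu1 down by a mere 0.8 percentage point: 250% long / -150% short.
tilted = weight_asset1(0.052, 0.050, target=0.055)
```

A forecasting error of less than one percentage point is entirely plausible in practice, which is exactly why the literature cited above worries about this instability.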
Moreover, classical mean-variance portfolios have shown poor empirical performance (Among others, see DeMiguel et al., 2009a; Jorion, 1985; Qian, 2006 and Kolm et al., 2014).
There have therefore been important refinements and extensions of the Markowitz approach in the portfolio optimization literature to decrease estimation errors. A large part of the literature has focused on Bayesian analysis, with the “set up of prior knowledge of the parameters of the distribution of future security returns” (Barry, 1974); other authors, such as Jobson and Korkie (1980) and Jorion (1985), use Bayesian methods to shrink the estimate of the mean, as well as the estimate of the covariance matrix (Ledoit and Wolf, 2003, 2004). Some authors, such as Michaud (1989), propose to use resampling methods, while others propose robust portfolio optimization in terms of utility (Goldfarb and Iyengar, 2003; Tütüncü and Koenig, 2004), or robust portfolio optimization based on additional portfolio risk measures such as the Value-at-Risk or the conditional Value-at-Risk (Garlappi et al., 2007). Another important set of methods deals with decreasing the estimation errors of the variance-covariance matrix (among others, see Best and Grauer, 1992; Chan et al., 1999 and Ledoit and Wolf, 2004). Other methods propose to combine different portfolios, such as Kan and Zhou (2007) and Tu and Zhou (2011), among others.
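The shrinkage idea behind the covariance estimators of Ledoit and Wolf can be sketched in a few lines (a simplified sketch with toy numbers: the shrinkage intensity is fixed here by hand, whereas Ledoit and Wolf derive an optimal value from the data):

```python
# Sketch of the shrinkage idea behind Ledoit and Wolf (2003, 2004):
# replace the noisy sample covariance S with a convex combination of
# S and a highly structured target F. The intensity `delta` is fixed
# here for illustration; Ledoit and Wolf derive an optimal value.

def shrink_covariance(sample, target, delta):
    """Element-wise convex combination of two covariance matrices
    (given as nested lists)."""
    return [[delta * f + (1.0 - delta) * s
             for s, f in zip(s_row, f_row)]
            for s_row, f_row in zip(sample, target)]

S = [[0.040, 0.018],
     [0.018, 0.090]]   # sample covariance (toy numbers)
F = [[0.040, 0.000],
     [0.000, 0.090]]   # structured target: same variances, zero correlations
shrunk = shrink_covariance(S, F, delta=0.5)  # off-diagonals halved
```

Pulling the noisy off-diagonal entries toward a structured target trades a little bias for a large reduction in estimation variance.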
Even though the portfolio optimization literature is rich in refinements and extensions of the Markowitz approach, three main shortcomings need to be highlighted. First, these improvements keep the same framework, i.e. they try to minimize risk for a given targeted expected return (or to maximize the expected return for a given targeted risk). By keeping the same framework, estimation errors remain present through two main sources: return and risk estimates. Second, these different extensions may add computational burden because, in some cases, investors need to compute solutions across a large set of scenarios, which does not simplify their task (Scherer, 2010). Third, as described by Maillard et al. (2010), investors prefer more heuristic solutions, which are computationally simple to develop and implement and also more robust because they do not depend on expected returns.
In this context, and due to their “return-agnostic and risk management features” (Jurczenko et al., 2013), risk-based portfolio strategies have gained in popularity in the asset management industry. The three most famous risk-based strategies are Minimum Variance (MV), Equal Risk Contribution (ERC) and Maximum Diversification (MD); none of these strategies depends on forecasts of asset returns, and they are based on a single criterion: risk. The interest in estimation procedures relying on a risk measure can be explained by three major factors. First, in recent years, asset managers have been reconsidering the importance and the relevance of portfolio risk management. The recent crises have indeed shown that asset managers exposed to stock market indices could be hit hard if they did not implement effective risk management practices. Asset managers have had to face much more pressure for transparency and performance from their clients, which has renewed the interest in using risk management tools. Moreover, as pointed out by Bruder and Roncalli (2012), “the job of a fund manager is first of all to manage risk”. Second, security variance and covariance risks are persistent and much more predictable than expected returns. Return variances and covariances are indeed easier to estimate from historical data than stock returns (among others, see Chan et al., 1999; Merton, 1980 and Nelson, 1992). The third major factor is the “low-volatility anomaly”, in which the relationship between risk and return is flat or even inverted (among others, see Baker et al., 2011; Haugen and Heins, 1975 and Baker and Haugen, 2012). Risk-based strategies have been shown to have preferences for low-volatility and low-beta assets (Jurczenko et al., 2013), which captures one of the most important features in finance, the low-volatility anomaly (Baker et al., 2011).
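For intuition on how such strategies allocate, two of them have simple closed forms in the two-asset case (a toy sketch with made-up volatilities; the paper itself works with 18 MSCI country indices): Minimum Variance uses the well-known global minimum-variance weight, and for two assets the ERC condition w1·s1 = w2·s2 reduces to inverse-volatility weighting.

```python
# Toy two-asset closed forms for two risk-based strategies
# (illustrative only; the paper applies them to 18 MSCI indices).
# Minimum Variance: weight of asset 1 in the global minimum-variance
# portfolio. Equal Risk Contribution: for two assets, equal risk
# contributions require w1*s1 = w2*s2, i.e. inverse-volatility weights.

def min_variance_w1(s1, s2, rho):
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)

def erc_w1(s1, s2):
    return s2 / (s1 + s2)

w_mv = min_variance_w1(0.10, 0.20, 0.3)  # tilts strongly to the low-vol asset
w_erc = erc_w1(0.10, 0.20)               # 2/3 in the low-vol asset
```

Note that neither function takes an expected return as input: risk estimates alone determine the allocation, which is exactly the "return-agnostic" property discussed above.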
Under the Efficient Market Hypothesis (EMH), investors can only earn above-average returns by taking on higher risk. Nevertheless, this theory has not been supported by empirical facts, because low-risk assets usually outperform high-risk ones over a long time horizon. This low-volatility anomaly is consistent over time and across different markets (among others, see Ang et al., 2006, 2009; Baker et al., 2011; Black, 1972; Haugen and Heins, 1975 and Baker and Haugen, 2012). The low-volatility anomaly is usually explained by several behavioural biases: (i) preference for lotteries, i.e., investors prefer investing in high-volatility stocks with a small chance of a high gain and a large probability of a small loss; (ii) overconfidence, i.e., investors tend to be overconfident regarding their own beliefs; and (iii) representativeness bias, i.e., a bias in favour of historically successful investments. In addition to these behavioural biases, Baker and Haugen (2012) argue that the low-volatility anomaly is also explained by the nature of manager compensation and by agency issues (i) between professional investment managers within an organization and (ii) between investment professionals and their clients. More recently, Buffa et al. (2014) examine the nature of fund managers’ contracts and whether these contracts may lead fund managers to become less willing to deviate from certain benchmarks. Their compensation is usually highly sensitive to their performance relative to a benchmark, which creates a situation in which managers do not wish to deviate from it. Due to these behavioural biases, the nature of manager compensation and agency issues, among other factors, investors strongly favour high-volatility stocks without having undertaken a sufficiently deep analysis of their fundamentals, leading to over-pricing of these stocks and inferior future returns (Baker and Haugen, 2012).
In other words, undervalued (overvalued) assets become cheaper (more expensive), which continues to bias the aggregate market upwards and its expected return downwards (Buffa et al., 2014).
Added to this low-volatility anomaly, a large number of authors have also highlighted another major anomaly calling into question the Efficient Market Hypothesis: the “momentum anomaly”. The momentum effect was emphasized by Jegadeesh and Titman (1993) and is usually considered one of the most important financial anomalies. Jegadeesh and Titman (1993) indeed find that trading strategies buying past winners and selling past losers realized “significant abnormal returns”, while the inverse strategy (i.e. buying past losers and selling past winners, a contrarian strategy) showed the worst performance. Momentum is the phenomenon that securities that have performed well relative to their peers continue, on average, to outperform, while securities that have performed poorly go on underperforming. This momentum effect reflects the relationship between the return of an asset and its recent relative performance history (Asness et al., 2013). Momentum strategies are profitable in most major stock markets worldwide, and this outperformance is consistent over time (among others, see Jegadeesh and Titman, 1993, 2001; Rouwenhorst, 1998 and Chui et al., 2010). Linked to this concept of momentum, trend following strategies consist of applying indicators (e.g. moving averages) to detect trading signals, which determine the trend of an asset (Clare et al., 2014). These trend following models have slowly gained recognition in the academic community, even though the first major paper on trend following is Brock et al. (1992)’s paper. They show that moving average trading rules have predictive power for future returns, and trend following strategies with moving averages are effective in practice (among others, see Clare et al., 2014; Faber, 2007, 2013; Hurst et al., 2010 and ap Gwilym et al., 2010).
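A minimal moving-average rule in the spirit of Faber (2007) can be sketched as follows (the window length and the price series are illustrative choices of mine): the asset is held when its price sits above its simple moving average, and the position is closed otherwise.

```python
# Sketch of a simple trend-following rule in the spirit of
# Faber (2007): hold the asset when its price is above its
# 10-period simple moving average, otherwise stay in cash.
# The price series below are made up for illustration.

def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def trend_signal(prices, window=10):
    """True = invested (price above its moving average)."""
    return prices[-1] > sma(prices, window)

uptrend = [100 + i for i in range(12)]    # steadily rising prices
downtrend = [111 - i for i in range(12)]  # steadily falling prices
long_signal = trend_signal(uptrend)       # invested
flat_signal = trend_signal(downtrend)     # out of the market
```

A cross-sectional momentum rule is equally simple in spirit: rank assets by their trailing returns and keep the top performers.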
To explain the predictive power of such timing strategies (i.e. momentum and trend following) on the future behaviour of stock markets, some behavioural effects, such as anchoring, herding, and disposition effects, among others, have been featured in the literature. The concept of anchoring simply means that investors are very slow to react to new information, so there is an underreaction on their part (see Barberis et al., 1998 and Hong and Stein, 1999, among others). “Investors underweight new information when they update their priors” (Jegadeesh and Titman, 2011). Thus, momentum occurs because investors are slow to revise their priors when new information arrives. Then, when they react, other investors simply follow the first movers and adopt herding behaviour – e.g., if people buy a stock, others will follow by buying the same stock, causing the stock price to move above its fundamental value (see, among others, Grinblatt et al., 1995). The disposition effect means that loss-averse investors tend to sell stocks too quickly when they are winners and, conversely, hold stocks too long when they are losers, which reinforces the tendency to anchor (Frazzini, 2006; Shefrin and Statman, 1985).
In this context, in which risk-based strategies and timing strategies have been developed in the literature, the purpose of this paper is to combine the two. This two-step approach consists of applying a timing strategy (either a moving average or a momentum strategy in the first step) followed by risk-based portfolio optimization procedures (second step). In other words, the objective of this paper is to use the predictive power of timing and risk-based strategies to deliver portfolios with better risk-adjusted returns than traditional risk-based portfolios. We compute risk-based and equally weighted (as a benchmark) portfolios, with and without timing strategies in the first step, for an empirical dataset composed of 18 country MSCI stock market indices. The estimation period ranges from January 1975 to December 2014.
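The two-step idea can be sketched schematically (this is NOT the paper's exact implementation: the filter window, the inverse-volatility weighting standing in for a full risk-based optimization, and all numbers are my own illustrative choices):

```python
# Schematic sketch of the two-step approach. Step 1: keep only the
# indices trading above their moving average (the timing filter).
# Step 2: weight the survivors with a risk-based rule (here a simple
# inverse-volatility scheme as a stand-in for MV/ERC/MD).

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def two_step_weights(price_histories, vols, window=10):
    """price_histories: dict name -> list of prices.
    vols: dict name -> volatility estimate.
    Returns inverse-volatility weights over the trend survivors."""
    survivors = [name for name, p in price_histories.items()
                 if p[-1] > moving_average(p, window)]
    inv_vol = {name: 1.0 / vols[name] for name in survivors}
    total = sum(inv_vol.values())
    return {name: v / total for name, v in inv_vol.items()}

prices = {"A": [100 + i for i in range(12)],      # uptrend -> kept
          "B": [111 - i for i in range(12)],      # downtrend -> filtered out
          "C": [100 + 2 * i for i in range(12)]}  # uptrend -> kept
weights = two_step_weights(prices, vols={"A": 0.10, "B": 0.25, "C": 0.20})
```

The filter implements the pro-cyclical timing step, and the weighting of the survivors implements the counter-cyclical risk-based step discussed below.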
This two-step approach is motivated by several factors. First, the first step allows us to make a prior selection of stock market indices that are likely to continue to outperform compared with stock market indices that exhibit negative trends or low momentum. Second, mixing the two approaches aims to benefit from the pro-cyclical behaviour of the timing strategies (because momentum and trend following strategies largely invest in assets with positive trends) as well as from the counter-cyclical behaviour of risk-based strategies (i.e., these risk-based strategies focus on assets whose returns are more stable over time to protect against the negative impact of short-term volatility). Third, coupling these strategies makes sense because each portfolio strategy works individually (see Section 2). With our two-step approach, we therefore seek to associate different sources of excess returns. Finally, a large part of the investment literature in recent years has focused on combining multiple advanced investment strategies that have proved their worth individually. Among others, Asness et al. (2013) and Blitz and Van Vliet (2008) use value and momentum strategies as complements. DeMiguel et al. (2009a), in one of their portfolios of interest, mix 1/N and Minimum Variance strategies, similarly to other authors, who also combine sophisticated strategies with “naive” ones (see, among others, Tu and Zhou, 2011 and Kan and Zhou, 2007). In the same vein, some authors couple timing strategies. For example, Antonacci (2013) combines relative and absolute momentum to form a dual momentum strategy, while ap Gwilym et al. (2010) investigate whether the use of momentum and trend following can be combined to deliver higher risk-adjusted returns than those of individual strategies. To the best of our knowledge, this paper is the first to shed light on the combination of timing and risk-based strategies.
This paper provides several contributions. First, risk-based strategies have higher returns when a relevant timing strategy (i.e., a moving average or high momentum) is applied than when such a strategy is not applied. The second important contribution of our analysis lies in the significantly lower standard deviations of risk-based portfolios that use a moving average in the first step compared with initial risk-based portfolios. With higher returns and lower volatility, risk-based portfolios have higher risk-adjusted returns when we apply a moving average in the first step of portfolio optimization. High momentum risk-based portfolios have higher volatility than initial risk-based portfolios, but this higher volatility is compensated for by much higher returns; high momentum risk-based portfolios therefore have larger Sharpe ratios than initial risk-based portfolios. Third, risk-based portfolios coupled with a moving average strategy are characterized by much lower Value-at-Risk (VaR) and Expected Shortfall (ES) levels than initial risk-based portfolios. If we compare risk-based strategies with the 1/N benchmark portfolio within a framework in which a timing strategy is applied in the first step, risk-based portfolios appear to have greater risk-adjusted returns and lower VaR and ES than 1/N portfolios. This finding supports the effectiveness and relevance of such an approach and suggests outperformance of risk-based portfolios using relevant timing strategies relative to traditional 1/N portfolios. Among these risk-based portfolios, the MD and MV allocation principles usually exhibit the best performance statistics in terms of risk-adjusted returns.
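For readers less familiar with the tail-risk measures mentioned above, here is a sketch of historical VaR and Expected Shortfall on a made-up return sample (the paper computes them on the simulated portfolio returns; the estimator below is one common textbook variant among several):

```python
# Sketch of historical Value-at-Risk and Expected Shortfall
# (toy return sample; one common textbook estimator among several).

def historical_var(returns, alpha=0.05):
    """Loss threshold exceeded in the worst `alpha` share of cases
    (reported as a positive number)."""
    ordered = sorted(returns)
    k = max(int(alpha * len(ordered)), 1)
    return -ordered[k - 1]

def expected_shortfall(returns, alpha=0.05):
    """Average loss over the worst `alpha` share of cases."""
    ordered = sorted(returns)
    k = max(int(alpha * len(ordered)), 1)
    return -sum(ordered[:k]) / k

sample = [-0.08, -0.05, -0.02, -0.01, 0.00,
          0.01, 0.01, 0.02, 0.03, 0.04,
          0.00, 0.01, 0.02, -0.03, 0.05,
          0.02, 0.03, -0.01, 0.01, 0.02]  # 20 made-up monthly returns
var_95 = historical_var(sample)
es_95 = expected_shortfall(sample)
```

By construction ES is at least as large as VaR at the same level, since it averages over the losses beyond the VaR threshold; lower values of both indicate milder tail risk.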
If the reader is interested in the literature review on risk-based strategies as well as on timing strategies, please do not hesitate to download the paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2556747. The next posts will relate to the general explanation of the methodology used for portfolio simulations (here), the results of portfolio simulations based on country MSCI stock market indices, and the results of portfolio simulations based on Belgian stocks, respectively.