“It took a long while. Kewei, Chen, and I first documented some of the evidence when we were working on our q-factor paper back in 2014.1 At the time, we coded up about 80 anomaly variables, but only 35 were significant. In particular, 12 out of 13 liquidity variables failed to hold up. The editor of our article, Professor Geert Bekaert, deserves a huge amount of credit for guiding our q-factor paper and letting it see the light of day. While editing our work, Geert told us that he found our evidence that so many well-known anomalies are insignificant very important, and wanted us to highlight it more. We did. But since the objective of that article was to establish a new workhorse factor model, we did not make the evidence the centerpiece of the article.”
“Back in 2015, Eugene Fama and Kenneth French responded to our q-factor paper by adding two factors that resemble the investment and return on equity factors in our q-factor model to their three-factor model, forming a five-factor model.2 And the Factors War was on. We quickly fired back with the working paper ‘A comparison of new factor models’, which compares our q-factor model with their five-factor model on both conceptual and empirical grounds.3 Our key evidence is that, in factor spanning tests, the q-factors subsume their CMA and RMW factors, but their factors cannot subsume ours.”
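The spanning tests mentioned above boil down to a time-series regression: one model's factor is regressed on the other model's factors, and a statistically insignificant intercept (alpha) means the candidate factor is "spanned", i.e. redundant. The following is a minimal sketch of that idea using synthetic data; the variable names and the classical (homoskedastic) standard errors are simplifying assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def spanning_test(candidate, factors):
    """Regress a candidate factor's returns on a set of explanatory
    factors. Returns (alpha, t-statistic of alpha); an insignificant
    alpha suggests the explanatory factors span the candidate."""
    T = len(candidate)
    X = np.column_stack([np.ones(T), factors])   # intercept + factors
    beta, *_ = np.linalg.lstsq(X, candidate, rcond=None)
    resid = candidate - X @ beta
    sigma2 = resid @ resid / (T - X.shape[1])    # residual variance
    # Classical standard errors for simplicity; the literature
    # typically uses heteroskedasticity-robust ones.
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], beta[0] / np.sqrt(cov[0, 0])

# Synthetic example: a candidate factor that is a noisy combination of
# two explanatory factors, so it should be spanned (alpha near zero).
rng = np.random.default_rng(0)
f = rng.normal(0.4, 2.0, size=(600, 2))          # monthly returns, in %
candidate = 0.8 * f[:, 0] + 0.3 * f[:, 1] + rng.normal(0.0, 1.0, 600)
alpha, t_alpha = spanning_test(candidate, f)
print(alpha, t_alpha)
```

By construction the candidate here carries no return beyond its factor loadings, so the estimated alpha is economically tiny; flipping the roles of the two factor sets is how one tests spanning in both directions.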
“Alas, that paper met with considerable resistance in the editorial process. Knowing full well what it takes to debate Fama and French on their home turf, we set out to clear a higher hurdle with respect to incremental contribution, by replicating virtually all of the published anomalies literature. Our initial thought was to compile the largest set of testing portfolios with which to test factor models, and to hold our work up against the competitive pressure from Fama and French.”
“The tremendous amount of respect we have for Fama and French is borne out in the massive effort we put into ‘Replicating anomalies.’ It is probably worthwhile pointing out that we did not set out to beat down the literature on anomalies. We were focusing on the right-hand, not the left-hand side of factor regressions. After three years of coding, it finally dawned on us that most anomalies fail to hold up, 64% to be precise. The evidence is undeniable.”
“Professor Chen Xue at the University of Cincinnati is the real hero behind our ‘Replicating anomalies’.4 I went through the published anomalies literature, and wrote a first draft of our data appendix. I knew a lot of the classic anomalies, but needed a refresher course on those documented in the past ten years, so it was quite time-consuming for me. It was Chen who painstakingly coded up all 447 anomalies, one by one, making sure that we followed the variable definitions in the original studies, and when our replication results differed from those originally reported, making sure we understood why. Professor Kewei Hou went through Chen’s SAS programs to ensure that our empirical execution was of the highest possible quality.”
“In our replication, we emphasized a reliable set of empirical procedures that use NYSE breakpoints and value-weighted portfolio returns. This set of procedures is more reliable because it better captures the economic importance of an anomaly. For comparison, in our June 2017 draft, we also reported results from NYSE-Amex-NASDAQ breakpoints and equal-weighted returns, a procedure that gives microcaps excessive weights. We are currently compiling results from a variety of additional procedures, including cross-sectional regressions.”
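The procedure described above has two moving parts: breakpoints computed from NYSE stocks only (so that tiny Amex/NASDAQ stocks do not distort the bin edges), and portfolio returns weighted by market equity. A minimal one-period sketch, assuming hypothetical column names (`signal`, `ret`, `me`, `exchange`) and synthetic data, might look like this; the actual study rebalances monthly or annually per the original papers.

```python
import numpy as np
import pandas as pd

def decile_portfolios(df):
    """One-period decile sort on a signal using NYSE breakpoints and
    value-weighted returns. Expects columns 'signal', 'ret' (percent
    return), 'me' (market equity), 'exchange'."""
    # Breakpoints come from NYSE stocks only...
    nyse = df.loc[df['exchange'] == 'NYSE', 'signal']
    edges = np.quantile(nyse, np.linspace(0.0, 1.0, 11))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range stocks
    df = df.assign(decile=pd.cut(df['signal'], edges, labels=False))
    # ...but ALL stocks are assigned to portfolios, each weighted by
    # its market equity (value-weighting).
    vw = lambda g: np.average(g['ret'], weights=g['me'])
    return df.groupby('decile').apply(vw)

# Synthetic cross-section for illustration only.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    'signal':   rng.normal(size=n),
    'ret':      rng.normal(1.0, 5.0, size=n),
    'me':       rng.lognormal(5.0, 2.0, size=n),
    'exchange': rng.choice(['NYSE', 'NASDAQ', 'Amex'], size=n),
})
port_ret = decile_portfolios(df)
print(len(port_ret))
```

Swapping the NYSE-only breakpoints for all-stock breakpoints and the value weights for equal weights reproduces the less reliable procedure the interview contrasts, in which microcaps dominate the results.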
“The main conclusion is that most anomalies fail to replicate. To be precise, only 36% of the anomalies in our large universe withstood the replication tests. The survival rate is largely in line with the rates reported in other scientific disciplines such as psychology and oncology.”
“Not at all. First, the line between active and passive strategies has blurred substantially in the past decade. In the old days, ‘passive’ literally meant holding the market portfolio, and ‘active’ meant everything else. Nowadays, ‘passive’ refers to predetermined algorithm-based strategies, and ‘active’ means there is more human involvement, I think. One may argue that factor investing built on the cross-sectional predictability in the finance literature is passive in nature, according to the new definition.”
“Regardless of the passive-active dichotomy, our work does not discredit factor investing at all. On the contrary, we document reliable cross-sectional predictability in a universe in which frictions seem to play a negligible role. When you take 36% of 447, you still get 161 significant anomalies even in value-weighted returns. We show that our latest factor models still leave as many as 46 anomalies unexplained. In short, the future of factor investing is bright! The challenge is to figure out which factors are the most relevant to forecast returns, and that’s the essence of the new ‘active’.”
1 K. Hou, C. Xue, and L. Zhang, 2015, ‘Digesting anomalies: An investment approach’, Review of Financial Studies 28, 650-705.
2 E. F. Fama and K. R. French, 2015, ‘A five-factor asset pricing model’, Journal of Financial Economics 116, 1-22.
3 K. Hou, C. Xue, and L. Zhang, 2014, ‘A comparison of new factor models’, NBER Working Paper No. 20682.
4 K. Hou, C. Xue, and L. Zhang, 2017, ‘Replicating anomalies’, NBER Working Paper No. 23394.