02-16-2022 · Interview

‘The main goal was to test for equity factors using an out-of-sample dataset’

Creating novel databases for out-of-sample testing adds real value as people seldom take the time to perform such a task. We discuss this and other topics with quant investment specialist Bart van Vliet.

    Authors

  • Bart van Vliet - Investment Specialist

  • Lusanele Magwa - Investment Writer

What did you set out to achieve with your groundbreaking research and did you encounter any surprises?

“The main goal was to test for equity factors using an out-of-sample dataset that covered a period that no one had looked at before. More specifically, we were curious to see if the documented factor patterns in the post-1926 era also held up in the preceding 61 years. At the outset and throughout the duration of the project, we expected to face a number of data problems given the nature of the exercise, and this turned out to be the case. That is why the research paper1 took us over five years to wrap up.”

“To give you some context, even before I got involved in the project during my internship, two students from Erasmus University were working on the data. Luckily for me, they did a lot of the heavy lifting. So by the time I looked at it, I already knew which stock exchanges to ignore and which stocks to exclude from our sample data. But the most difficult aspect in my opinion was accounting for liquidity.”

“For example, we had to think about how to treat stocks with small market capitalizations that traded infrequently. If we took the equally-weighted sorting approach, then a bank with a market cap of USD 1 million would have the same weight as a railroad business with a market cap of USD 500 million. To address this bias, we manually captured company market cap data that we sourced from digitized old newspapers. Overall, our biggest challenge was making sense of the data and this definitely brought about a few surprises along the way.”
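To make the weighting issue concrete, here is a minimal sketch of how equal and value weighting diverge for the two hypothetical companies mentioned above; all names and figures are illustrative.

```python
# Illustration of the weighting bias described above. Under equal
# weighting, a tiny bank and a large railroad contribute identically
# to the portfolio return; under value weighting they do not.
# All names and figures are hypothetical.

stocks = [
    {"name": "Small Bank Co.", "market_cap_usd": 1_000_000, "ret": 0.10},
    {"name": "Big Railroad Co.", "market_cap_usd": 500_000_000, "ret": 0.02},
]

# Equal weighting: every stock gets 1/N regardless of size.
ew_ret = sum(s["ret"] for s in stocks) / len(stocks)

# Value weighting: weights proportional to market capitalization.
total_cap = sum(s["market_cap_usd"] for s in stocks)
vw_ret = sum(s["ret"] * s["market_cap_usd"] / total_cap for s in stocks)

print(f"Equal-weighted return: {ew_ret:.4f}")  # 0.0600, dominated by the tiny bank
print(f"Value-weighted return: {vw_ret:.4f}")  # 0.0202, reflects investable exposure
```

Under equal weighting the illiquid bank drives more than half of the portfolio return, which is exactly the bias that collecting market cap data allows one to correct.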


You just touched on it, but can you give us more insight into how much work this project entailed?

“The two students from Erasmus University and I clocked up countless hours on the project. They were instrumental in creating the dataset and ensuring that it was clean, which made my task of adding market cap data a lot easier. I remember spending weeks just looking at newspapers and capturing values in numerous Excel sheets. At some points, I really felt like throwing my keyboard away given how time-consuming the task was.”

“The FRASER digital library has an archive of all the Commercial & Financial Chronicle newspapers from the 1860s to the 1960s, which I became very familiar with. Our research starts from 1866 as we located the first market cap datapoints in December 1865. These newspapers contain historical market data such as shares outstanding and par values. So as part of my role in the project, I manually captured over 60 years’ worth of data. And this effort really adds value because hardly anyone takes the time to collect data manually.”
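As a rough illustration of that transcription step, the sketch below computes market capitalization as shares outstanding times a quoted price; the company names and figures are made up, not actual Chronicle entries.

```python
import pandas as pd

# Hypothetical rows transcribed from a digitized newspaper page; the
# company names and figures are illustrative, not actual Chronicle data.
records = pd.DataFrame(
    {
        "company": ["Example Railroad", "Example Bank"],
        "shares_outstanding": [200_000, 10_000],
        "price_usd": [55.0, 102.5],
    }
)

# Market capitalization = shares outstanding x quoted price.
records["market_cap_usd"] = records["shares_outstanding"] * records["price_usd"]
print(records)
```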

What were your main findings once you tested the data for equity factors?

“The key takeaway was that the results validate the research that has been done over the 1926 to 2020 period. There is a lot of evidence in the academic literature that attributes the existence of established equity factors to behavioral biases. In our analysis, we found similar patterns relating to factor premiums in the pre-1926 era. In our view, it was not strange to come across these results as human behavior does not change overnight. In fact, it probably doesn’t over decades or even centuries.”

“Another interesting observation was that markets were quite efficient back in the 19th century. Based on our own analysis and other academic studies, we saw that the transaction costs were not as high as we initially expected. It is normal to assume that markets are much more efficient nowadays given that we have daily trading and market makers. But in reality, this was not necessarily the case, at least not to the extent that is assumed. On a lighthearted sidenote, I wasn’t really fond of history while I was in high school. But I soon found out that 30% of this project was based on history and the other 70% on economics. That being said, I really enjoyed the whole process — even the history. In fact, I found myself speaking about 19th century railroad companies in conversations with my friends.”

Was there any reason why you only tested for the beta, momentum, short-term reversal, size and value factors?

“To begin with, we took into account the nature of our database to determine which equity factors we could actually test for given the information on hand. Thereafter, we set out to limit our degrees of freedom, refining our list to focus only on established academic factors, and this led us to our final selection. For value, we used dividend yield as a proxy as there are no book-to-market values for that era. This is because companies were not obligated to report such information before the 1930s. While a few did in the 1920s, there are not enough cross-sectional observations for testing purposes.”
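As an illustration of how a dividend-yield-based value factor can be formed, the function below goes long high-yield stocks and short low-yield stocks within a single cross-section. The column names, 30% breakpoints and value weighting are illustrative assumptions rather than the paper's exact methodology.

```python
import pandas as pd

def value_factor_premium(cross_section: pd.DataFrame, cutoff: float = 0.3) -> float:
    """Long high dividend-yield stocks, short low-yield stocks.

    `cross_section` is assumed to hold columns 'div_yield', 'next_ret'
    and 'market_cap' for one formation date. The 30% breakpoints and
    value weighting are illustrative choices, not the paper's spec.
    """
    lo, hi = cross_section["div_yield"].quantile([cutoff, 1 - cutoff])
    longs = cross_section[cross_section["div_yield"] >= hi]
    shorts = cross_section[cross_section["div_yield"] <= lo]

    def vw_ret(leg: pd.DataFrame) -> float:
        weights = leg["market_cap"] / leg["market_cap"].sum()
        return float((weights * leg["next_ret"]).sum())

    # The premium is the return spread between the two legs.
    return vw_ret(longs) - vw_ret(shorts)
```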

“The common element across all the factors we tested is that they are return-driven. So you can assess them by just looking at total returns, price returns or dividend yields. If you look at quality, for instance, the data on the characteristics that define it are only readily available from 1963 onwards, never mind for the pre-1926 period. So a combination of focusing on key established factors and taking these data constraints into account led us to our choice of factors.”
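Because these factors need nothing beyond returns and dividends, their signals can be computed from a price series alone. A minimal sketch, assuming a monthly total-return index per stock and conventional academic lookback windows rather than the paper's exact definitions:

```python
import pandas as pd

# `prices` is assumed to be a monthly (date x stock) DataFrame of
# total-return index levels; the lookback windows follow common
# academic conventions, not necessarily the paper's definitions.

def momentum_signal(prices: pd.DataFrame) -> pd.Series:
    # 12-1 momentum: the 11-month return ending one month before the
    # formation date, skipping the most recent month.
    return prices.shift(1).pct_change(11).iloc[-1]

def reversal_signal(prices: pd.DataFrame) -> pd.Series:
    # Short-term reversal: just last month's return, sorted ascending.
    return prices.pct_change(1).iloc[-1]

def dividend_yield_signal(dividends_12m: pd.Series, prices: pd.DataFrame) -> pd.Series:
    # Trailing 12-month dividends divided by the latest price level.
    return dividends_12m / prices.iloc[-1]
```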



How do you view your results as investors, especially since they refer to a very different economic era?

“For one, it reaffirms our existing beliefs as long-term quant investors. In our view, behavioral biases are largely behind the existence and persistence of established equity factors. Seeing the same patterns in the 19th century therefore provides strong evidence for this. The results also underline that factor premiums are not very dependent on specific market regimes or market structures. Moreover, the era is not as different as we think. It was characterized by technological disruption, and the stock market played an important role in financing innovation. This is somewhat similar to what we have seen in recent times.”

You also used machine learning techniques to test for equity factors in your research. What were the main learnings from this?

“Machine learning techniques are typically used on broad datasets with lots of variables. A prominent academic paper2 on machine learning demonstrates that they can take into account 100 or so predictive variables to construct portfolios with good risk-return characteristics. But what we also see is that when we apply these techniques to our pre-1926 database, which has a smaller cross-section than what we have become accustomed to nowadays, they also produce good results.”

“This outcome is interesting given that it is based on out-of-sample data that was not available beforehand. This really signals the potential these methods have. Another interesting observation was that these techniques picked the same predictive variables for the pre-1926 era as they did for the 1926 to 2020 period. Indeed, the same academic study2 shows that the random forest technique allocates the highest weighting to dividend yields, while the neural network approach doesn’t. And this is the same result we got when we analyzed our dataset. This is quite remarkable in our view.”
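For intuition, here is a toy example of reading variable importances out of a random forest return model, in the spirit of the approach described above. The data is synthetic and deliberately constructed so that the first predictor dominates; it mirrors the pattern the interview mentions rather than reproducing the actual studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a (stock-month x predictor) panel. Real studies
# use dozens of characteristics; three suffice here. The return-generating
# process is an assumption built so that dividend yield dominates.
n_obs = 5_000
features = ["div_yield", "momentum", "size"]
X = rng.normal(size=(n_obs, len(features)))
y = 0.05 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(scale=0.1, size=n_obs)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")  # div_yield should rank highest
```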


And to close off, are there any other important takeaways from your research that you would like to highlight?

“The most important takeaway is that, aside from creating a high-quality dataset, we wanted to ensure that it was of high economic quality. We achieved this by adding market cap data and applying liquidity screens, as there are a lot of small companies that trade infrequently. One of the key objectives was to put together a dataset that resembles an investable universe in a practical sense. So when we applied these filters, we saw that some factor premiums became smaller, most notably for the size factor. This is intuitive as our screening process excludes a lot of small companies from our dataset.”
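A minimal sketch of such a liquidity screen, assuming hypothetical column names and thresholds; the actual filters used in the paper may differ:

```python
import pandas as pd

def apply_liquidity_screens(
    df: pd.DataFrame,
    min_market_cap_usd: float = 1_000_000,
    min_months_traded: int = 6,
) -> pd.DataFrame:
    """Drop tiny, infrequently traded stocks from the sample.

    Column names and thresholds are hypothetical placeholders;
    the paper's actual screens may well differ.
    """
    liquid = (df["market_cap_usd"] >= min_market_cap_usd) & (
        df["months_traded_last_year"] >= min_months_traded
    )
    return df[liquid].copy()
```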

“Although this shrinks our sample, we believe it is important to take this approach as the results also consider the typical liquidity constraints that investors face. We think this is even more appropriate to take into account for the pre-1926 era, given that there were other limitations affecting trading activity back then. So, our results are based on stocks that had a fair amount of liquidity, which makes the outcomes more meaningful in our opinion.”