Deep learning and stock trading

A study undertaken by researchers at the School of Business and Economics at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has shown that computer programs whose algorithms are based on artificial intelligence are able to make profitable investment decisions. When applied to the S&P 500 constituents from 1992 to 2015, their stock selections generated double-digit annual returns, with the highest profits made at times of financial turmoil.

In March 2016, South Korean Lee Sedol, one of the best Go players in the world, lost to the AlphaGo computer program. It was a milestone in the history of artificial intelligence, because up to that point the Asian board game had been considered too complex for computers. Behind successes such as this are programs modelled on biological systems and structured as artificial neural networks, which can independently extract relationships from millions of data points. 'Artificial neural networks are primarily applied to problems where solutions cannot be formulated with explicit rules,' explains Dr. Christopher Krauss of the Chair for Statistics and Econometrics at FAU. 'Typical fields of application are image and speech recognition, for example in Apple's Siri. But deep learning is also becoming increasingly relevant in other domains, such as weather forecasting or the prediction of economic developments.'

Analysing capital market data

The international team headed by Christopher Krauss, which also included Xuan Anh Do (FAU) and Nicolas Huck (ICN Business School, France), was the first to apply a selection of state-of-the-art artificial intelligence techniques to a large-scale set of capital market data. 'Equity markets exhibit complex, often non-linear dependencies,' says Krauss. 'However, when it comes to selecting stocks, established methods mainly model simple relationships. For example, the momentum effect focuses only on a stock's return over the past months and assumes that this performance will continue in the months to come. We saw potential for improvements.' To find out whether automated learning processes perform better than a naïve buy-and-hold strategy, the researchers studied the constituents of the S&P 500 Index, which comprises the 500 leading US stocks. For the period from 1992 to 2015, they generated predictions for each individual stock for every single trading day, leveraging deep learning, gradient boosting, and random forests.
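As a rough illustration of this kind of setup, the sketch below builds price-based features from past returns and trains a random forest to predict whether a stock will beat the cross-sectional median on the next trading day. The synthetic price data, the choice of lag windows, the outperform-the-median label, and the model hyperparameters are illustrative assumptions, not the study's actual specification.

```python
# Minimal sketch (not the authors' exact pipeline): for each stock and trading day,
# predict whether it will outperform the cross-sectional median on the next day,
# using only its own past returns as features. Prices here are synthetic random walks.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_days, n_stocks = 500, 50
lags = [1, 2, 3, 5, 10, 20]          # hypothetical lookback windows in trading days

# Synthetic daily prices: 50 stocks over 500 days
prices = pd.DataFrame(100 * np.exp(np.cumsum(rng.normal(0, 0.01, (n_days, n_stocks)), axis=0)))
returns = prices.pct_change()

rows, labels = [], []
for t in range(max(lags), n_days - 1):
    next_ret = returns.iloc[t + 1]           # next-day returns of all stocks
    median = next_ret.median()
    for s in range(n_stocks):
        # Multi-period returns over each lookback window ending at day t
        rows.append([returns.iloc[t - k + 1:t + 1, s].add(1).prod() - 1 for k in lags])
        labels.append(int(next_ret[s] > median))   # 1 = outperforms the cross-section

X, y = np.array(rows), np.array(labels)
split = int(0.8 * len(y))                    # simple chronological train/test split
model = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
model.fit(X[:split], y[:split])
print("out-of-sample accuracy:", model.score(X[split:], y[split:]))
```

On real data one would of course expect far more features, a careful walk-forward training scheme, and trading-cost adjustments; the point here is only the shape of the learning problem: price-based inputs, a binary next-day outperformance target, and a non-linear model.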

Outperformance with machine learning

Each of these methods was trained with approximately 180 million data points. In the course of this training, the models learned a complex function describing the relationship between price-based features and a stock's future performance. The results were astonishing: 'Since the year 2000, we have observed statistically and economically significant returns of more than 30% per annum. In the nineties, results were even higher, reflecting a time when our machine learning approaches had not yet been invented,' adds Krauss. These results pose a serious challenge to the efficient-market hypothesis. Returns were particularly high during times of financial turmoil, for example the collapse of the dot-com bubble around the year 2000 or the global financial crisis of 2008/2009. Dr Krauss: 'Our quantitative algorithms have turned out to be particularly effective at such times of high volatility, when emotions dominate the markets.'
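For context on how daily per-stock forecasts could translate into portfolio returns, one common statistical-arbitrage reading of such an approach is to go long the stocks with the highest predicted probability of outperforming and short those with the lowest each day. The function below is a hedged sketch of that idea only; the portfolio size k, the equal weighting, and whether this matches the study's exact trading rule are assumptions.

```python
# Illustrative only: one way to turn daily per-stock predictions into a long-short
# portfolio -- long the k stocks with the highest predicted probability of
# outperforming, short the k lowest. Not confirmed to be the study's trading rule.
import numpy as np

def long_short_return(prob_up: np.ndarray, next_day_return: np.ndarray, k: int = 10) -> float:
    """Equal-weighted return of a long-top-k / short-bottom-k portfolio for one day."""
    order = np.argsort(prob_up)              # ascending by predicted probability
    longs, shorts = order[-k:], order[:k]
    return next_day_return[longs].mean() - next_day_return[shorts].mean()

# Toy usage with random inputs for 500 stocks on a single day
rng = np.random.default_rng(1)
print(long_short_return(rng.uniform(size=500), rng.normal(0, 0.02, size=500)))
```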

Deep learning still has greater potential

Christopher Krauss urges caution, however, insisting that this is not necessarily the Holy Grail of capital market trading: 'In the latter years of the study, profitability fell and even dipped into negative territory at times. We assume that this decline was driven by the rising influence of artificial intelligence in modern trading, facilitated by increasing computing power and the popularisation of machine learning.' Nevertheless, the researchers agree that deep learning still has significant potential: 'We are currently working on very promising follow-up projects with far larger data sets and very deep network architectures which have been specifically designed for identifying temporal dependencies,' explains Krauss. 'Initial results have already shown significant improvements in forecasting quality, including in recent years.'


More information: Christopher Krauss et al, Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500, European Journal of Operational Research (2017). DOI: 10.1016/j.ejor.2016.10.031