March 31 - April 1, 2026 | New York Marriott at the Brooklyn Bridge

Decode the Market. 
Build the Future.
Capture the Alpha.

The Limits of Learning: The Past, Present, and Future of ML in Financial Markets

Speaker Q&A with Prof Bryan Kelly, Head of Machine Learning at AQR Capital Management.

Has machine learning fundamentally changed asset pricing, or has it just made us faster at rediscovering the same risk premia?

Asset pricing has long been one of the most sophisticated empirical sub-fields of economics. Machine learning raises it to a new level of empirical effectiveness. We now have high-dimensional machine learning asset pricing models that achieve significantly better out-of-sample performance, measured both in terms of portfolio performance and in terms of pricing errors, than the previous generation of low-dimensional models. These ML models are primarily "reduced form" in nature - they are statistical constructs with relatively little economic structure. The next step forward is to marry the empirical progress of ML asset pricing models with structural economic foundations, to better understand the mechanisms that underpin the behaviour of investors and market prices.
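To make the out-of-sample comparison concrete, here is a minimal, purely illustrative Python sketch (simulated data, invented parameters - not AQR's models): it pits a three-signal linear model against a ridge regression drawing on 200 weak signals, the kind of high-dimensional setup that shrinkage makes feasible.

```python
# Illustrative only: simulated returns, assumed signal counts and penalties.
# The point: with shrinkage, a high-dimensional model can exploit many weak
# signals and beat a low-dimensional model out of sample.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n_months, n_signals = 600, 200                      # 50 years of monthly data
X = rng.standard_normal((n_months, n_signals))
true_beta = 0.05 * rng.standard_normal(n_signals)   # many small true effects
y = X @ true_beta + rng.standard_normal(n_months)   # returns = weak signal + noise

split = 480                                         # train 40 years, test 10
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

low_dim = LinearRegression().fit(X_tr[:, :3], y_tr)  # e.g. three classic factors
high_dim = Ridge(alpha=50.0).fit(X_tr, y_tr)         # shrinkage tames 200 signals

print("low-dim  OOS R^2:", round(low_dim.score(X_te[:, :3], y_te), 3))
print("high-dim OOS R^2:", round(high_dim.score(X_te, y_te), 3))
```

In this simulated world the three-signal model captures almost none of the predictable variation, while the shrunk high-dimensional model captures a meaningful share of it; real pricing-error comparisons are analogous but far noisier.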

Traditional econometrics prizes interpretability. Machine learning often prioritizes prediction. In asset management, which ultimately wins: explanation or accuracy?

Practical efficacy and scientific understanding are distinct yet related goals of economic models. If we can build highly effective prediction models, we arm a wide variety of economic actors (investors, consumers, firm managers, policymakers, ...) with better tools for making profitable allocation decisions in the face of uncertainty. Economics is an applied science, and even pure prediction has great value to the applied side of the discipline. But we also want to understand the deep driving forces behind economic phenomena, and for this we need interpretable models that have something to say about economic mechanisms and causality. That not only improves our intellectual understanding of the world around us, but further enhances the ability of the aforementioned actors to make sound economic decisions.

If every major fund is now running similar machine learning architectures on similar datasets, where does true edge come from in 2027?

Investors need to be both good statisticians and good economists. To have an edge, an investor needs to find the right marriage between structure (economic priors) and machine learning, and this is a balancing act that can only be achieved with careful model design and disciplined research processes. In other words, I think it is far from given that all funds use similar ML architectures. There is likely to be a lot of heterogeneity in design choices, due to differences in economic beliefs among investors and differences in skill at designing effective models.
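One concrete way to picture that marriage of economic priors and machine learning is shrinkage toward an economic prior: rather than shrinking coefficients to zero, a ridge-style estimator can shrink them toward a prior vector b0 encoding an investor's beliefs (say, that value and momentum carry positive premia). The sketch below is a hypothetical illustration; the prior values and penalty are assumptions, not anything from the interview.

```python
# Hypothetical sketch: ridge regression that shrinks toward an economic
# prior b0 instead of toward zero. Solves, in closed form,
#   min_b ||y - X b||^2 + lam * ||b - b0||^2
# whose solution is b = (X'X + lam*I)^{-1} (X'y + lam*b0).
import numpy as np

def ridge_with_prior(X, y, b0, lam):
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y + lam * b0)

rng = np.random.default_rng(1)
X = rng.standard_normal((240, 5))                   # 20 years, 5 signals
y = X @ np.array([0.3, 0.2, 0.0, 0.0, 0.0]) + rng.standard_normal(240)

b0 = np.array([0.25, 0.25, 0.0, 0.0, 0.0])          # prior: first two signals priced
print(ridge_with_prior(X, y, b0, lam=100.0))
```

Two investors fitting the same data with different b0 (or different lam) land on different models, which is one mechanical source of the design heterogeneity described above.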

Has machine learning reduced market risk by improving forecasting, or increased systemic risk by synchronising models?

Asset management is a time series problem, and as such it faces hard constraints on the amount of data available to train models. The longer the investment horizon, the more severe this "small data" problem becomes. Data scarcity brings with it what my coauthors and I refer to as "limits to learning." The meaning of this phrase is that even the most sophisticated machine learning models cannot learn the true data-generating process, due to the scarcity of training data; instead they learn a noisy shadow of the true model. In data-scarce conditions, it is possible for many different investors to devise models that are all useful for prediction, but imperfectly so. And as long as researchers have heterogeneity in their model designs (think of this as different Bayesian priors), we can arrive at an equilibrium in which many imperfect, partially overlapping, yet complementary models survive. It is the limitation on the amount of time series data that limits the extent of convergence among competing models. Different machine learning models will naturally be correlated with one another, since they are each picking up different shadows of the same truth, but they need not be fully synchronized or systemically risky.
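The "noisy shadow" intuition is easy to see in a toy simulation: give several hypothetical investors the same scarce training sample but different priors (crudely modeled here as different random subsets of the signals), and their out-of-sample forecasts come out correlated yet far from identical. Everything below is an illustrative assumption, not an estimate from market data.

```python
# Toy "limits to learning" simulation: five investors, one true model,
# scarce data, heterogeneous priors -> correlated but distinct forecasts.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, k = 120, 5000, 50        # 10 years of monthly data, 50 signals
beta = 0.1 * rng.standard_normal(k)       # the true (unknowable) model
X_tr = rng.standard_normal((n_train, k))
y_tr = X_tr @ beta + rng.standard_normal(n_train)
X_te = rng.standard_normal((n_test, k))

forecasts = []
for investor in range(5):
    # each investor's "prior": attend to only 20 of the 50 signals
    use = np.random.default_rng(100 + investor).choice(k, size=20, replace=False)
    b_hat, *_ = np.linalg.lstsq(X_tr[:, use], y_tr, rcond=None)
    forecasts.append(X_te[:, use] @ b_hat)

# pairwise forecast correlations: typically positive but well below 1
print(np.round(np.corrcoef(np.array(forecasts)), 2))
```

Each fitted model is a different shadow of the same underlying beta, so the forecasts share a common component without converging, which is the sense in which limited time series data caps how synchronized competing models can become.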

What’s the biggest misconception institutional investors still have about applying machine learning to markets?

I believe that the benefits of ML for investing are unambiguous, but they are evolutionary rather than revolutionary. This is exactly because of the "small data" reality I referenced above. I often hear the misconceived view that ML will “solve” markets and dramatically alter the industrial organization of asset management. The root of this misconception is the belief that ML models can ascertain the true data-generating process. This is not possible, because of the limits to learning discussed a moment ago.

Don’t miss Bryan’s opening keynote on Day 1 of Future Alpha 2026 at 9 AM: Machine Learning, Market Risk, and the Future of Asset Pricing.

Secure your pass here!