ML in Motion: How Kathryn Zhao is Redefining Global Electronic Trading Execution
Spotlight Interview with Kathryn Zhao, Global Head of Electronic Trading.
As someone who has been leading global electronic trading for almost two decades, where are you seeing advanced ML-driven models most materially outperform traditional signals across equities, FX, and digital assets, and how is this influencing how institutional clients structure their execution strategies?
Across equities, FX, and digital assets, ML-driven models are most clearly outperforming traditional rule-based signals in short-horizon execution decisions. The strongest gains are coming from dynamic liquidity sourcing, adaptive market impact estimation, and microstructure-aware routing logic. Unlike traditional signals that rely on static historical averages, advanced ML models can dynamically identify transient liquidity across different venues.
Instead of relying on static participation rates or fixed schedules, institutional clients are increasingly adopting adaptive execution frameworks that continuously recalibrate urgency, venue selection, and order routing based on evolving order-book conditions. This is leading to more outcome-oriented execution mandates, where success is measured by realized implementation shortfall and consistency across regimes rather than by adherence to predefined trading styles.
Institutional clients are increasingly structuring mandates that allow algorithms greater discretion to deviate from volume curves when the model detects favorable, albeit fleeting, liquidity conditions, effectively treating execution as an alpha-generating activity rather than a utility.
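The shift from static volume curves to adaptive participation can be sketched in miniature. The function below is a hypothetical illustration, not any firm's production logic: the liquidity score, the urgency scaling, and the participation bounds are all assumptions made for the example.

```python
def adaptive_participation(base_rate, liquidity_score, urgency, lo=0.02, hi=0.25):
    """Scale a baseline participation rate when a (hypothetical) liquidity
    signal flags favorable, possibly fleeting, conditions.

    liquidity_score: assumed model estimate in [0, 1] of transient liquidity
    relative to recent norms; urgency: client-set urgency in [0, 1].
    """
    # Deviate from the static volume curve: trade faster into deep books,
    # slower into thin ones, with urgency shifting the whole range upward.
    rate = base_rate * (0.5 + liquidity_score) * (0.75 + 0.5 * urgency)
    # Clamp to risk-mandated bounds so discretion stays inside the mandate.
    return max(lo, min(hi, rate))
```

In this stylized form, the "outcome-oriented mandate" shows up as the clamp: the model may flex the rate, but only inside limits the client has agreed to.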
You’ve spoken previously about the challenges of fragmented liquidity and rapidly shifting market regimes. In your view, what types of machine-learning approaches such as reinforcement learning, adaptive models, or deep feature extraction are proving most effective in high-volatility, low-structure environments?
Reinforcement Learning (RL) is proving exceptionally effective in these low-structure, high-volatility regimes. Traditional regression models often fail when market correlations break down, but RL agents are designed to learn optimal policies through continuous interaction with the market state. In practice, this means an algorithm can "learn" to reduce participation rates when it detects the microstructure precursors of a volatility spike, rather than waiting for a trailing indicator. RL techniques are increasingly being applied to execution control problems, such as pacing and routing, but are typically deployed within bounded, risk-controlled environments.
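A toy version of this pacing problem can be written as tabular learning over stylized regimes. Everything here is an assumption for illustration: the two regimes, the two-rate action set, and the cost function are invented, and real systems operate over far richer state spaces inside bounded, risk-controlled environments as described above.

```python
import random

random.seed(0)

STATES = ["calm", "stressed"]   # stylized volatility regimes
ACTIONS = [0.05, 0.15]          # candidate participation rates (toy action set)

def execution_cost(state, action):
    # Stylized cost: trading slowly incurs timing risk; trading fast in a
    # stressed regime incurs heavy market impact. Numbers are illustrative.
    timing_risk = (0.20 - action) * 5.0
    impact = action * 40.0 if state == "stressed" else 0.0
    return timing_risk + impact + random.gauss(0.0, 0.1)

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, eps = 0.1, 0.2
for _ in range(5000):
    s = random.choice(STATES)
    if random.random() < eps:
        a = random.choice(ACTIONS)                 # explore
    else:
        a = min(ACTIONS, key=lambda x: Q[(s, x)])  # exploit (minimize cost)
    # One-step update: nudge the estimate toward the observed cost.
    Q[(s, a)] += alpha * (execution_cost(s, a) - Q[(s, a)])

policy = {s: min(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The learned policy slows participation in the stressed regime and speeds it up when calm, which is the qualitative behavior the interview describes, learned from interaction rather than from a trailing indicator.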
Deep feature extraction is also critical. It allows us to filter out the "noise" of high-frequency quote stuffing to isolate genuine institutional interest, which is vital when liquidity is scarce. In fast-changing and fragmented markets, models that combine deep feature extraction with adaptive learning have proven most resilient. Deep representation learning helps uncover non-linear relationships in sparse and noisy order-book data, while adaptive and ensemble-based approaches allow models to recalibrate as regimes shift.
The most effective systems are not single “black box” models, but layered frameworks that blend multiple signals and incorporate continuous validation against changing liquidity and volatility regimes.
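One simple, well-known way to realize such a layered, continuously validated blend is a multiplicative-weights ("Hedge"-style) update: each component model's weight shrinks in proportion to its recent validation loss. The sketch below uses assumed inputs and is only a miniature of the idea.

```python
import math

def blend(preds, weights):
    """Blend per-model predictions with the current weights."""
    return sum(w * p for w, p in zip(weights, preds))

def reweight(weights, losses, eta=0.5):
    """Multiplicative-weights update: shrink the weight of models with
    larger recent validation loss, then renormalize. This is the
    'continuous validation against changing regimes' loop in miniature."""
    w = [wi * math.exp(-eta * li) for wi, li in zip(weights, losses)]
    z = sum(w)
    return [wi / z for wi in w]
```

As a regime shifts and one signal starts losing accuracy, its influence on the blended output decays automatically rather than requiring a full model swap.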
In electronic execution, ML-based decisions are increasingly moving from signal generation to real-time trade automation. From your perspective, what are the realistic boundaries of algorithmic autonomy today, and where does human oversight remain indispensable in maintaining risk, control, and market integrity?
The realistic boundary today lies at regime recognition. While algorithms are superior at tactical execution, such as intraday execution control, order slicing, venue routing, and spread capture, they still struggle to identify fundamental regime shifts caused by "black swan" events or geopolitical shocks, where historical training data is irrelevant.
Human oversight remains indispensable for regime classification, parameter governance, model risk management, setting strategic intent, and acting as a circuit breaker during these anomalies. Decisions involving large notional exposure, unusual market dislocations, and policy-driven constraints still require experienced human judgment.
In practice, the most robust operating models treat ML as a high-performance decision engine within a broader governance framework that preserves accountability, transparency, and market integrity.
Electronic trading operates across global markets with distinct microstructure characteristics. How is machine learning reshaping execution quality, liquidity discovery, and market-impact modeling in these diverse environments and which innovations are proving most transformative for clients?
The most transformative innovation is real-time impact modeling. Historically, market impact models were static curves estimated from past data. Today, we utilize ML to predict impact dynamically based on the current state of the order book and immediate order flow toxicity. This allows us to tailor execution to the specific microstructure of a venue, whether it’s an equity lit exchange or a crypto liquidity pool. By customizing these parameters to a client's specific trading characteristics, we can significantly reduce signaling risk and information leakage.
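The move from static impact curves to state-dependent estimates can be illustrated with a minimal online regression that re-fits after every realized fill. The features, coefficients, and synthetic tape below are assumptions for the sketch, not a calibrated impact model.

```python
import random

random.seed(42)

def predict_impact(w, x):
    """Linear impact estimate from live order-book features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sgd_step(w, x, y, lr=0.05):
    """One online least-squares update after each realized fill, so the
    model tracks current book state rather than a static historical curve."""
    err = predict_impact(w, x) - y
    return [wi - lr * err * xi for wi, xi in zip(w, x)]

# Synthetic tape: assumed 'true' impact = 0.5*spread + 0.3*book imbalance.
w = [0.0, 0.0]
for _ in range(5000):
    x = [random.random(), random.random()]   # [spread, book imbalance]
    y = 0.5 * x[0] + 0.3 * x[1]
    w = sgd_step(w, x, y)
```

Because the update runs continuously, the same mechanism lets the estimate drift with the venue's microstructure instead of being re-estimated offline from past data.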
Machine learning also improves execution quality by enabling more precise detection of latent liquidity and finer control of trading trajectories across diverse market structures. In markets with fragmented liquidity, ML-driven venue selection and timing models help uncover liquidity that may not be visible through traditional aggregated feeds.
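Venue selection under uncertainty is often framed as a multi-armed bandit problem. The epsilon-greedy sketch below uses invented venue names and fill probabilities purely for illustration; real routers weigh far more than fill rate.

```python
import random

random.seed(1)

# Hidden 'true' fill probabilities per venue (illustrative assumptions).
TRUE_FILL = {"lit_A": 0.60, "dark_B": 0.80, "lit_C": 0.40}

sends = {v: 0 for v in TRUE_FILL}
fills = {v: 0 for v in TRUE_FILL}

def estimate(v):
    """Observed fill rate at a venue so far."""
    return fills[v] / sends[v] if sends[v] else 0.0

def choose_venue(eps=0.1):
    if random.random() < eps:
        return random.choice(list(TRUE_FILL))   # explore fragmented venues
    return max(TRUE_FILL, key=estimate)         # exploit best estimate

for _ in range(3000):
    v = choose_venue()
    sends[v] += 1
    fills[v] += random.random() < TRUE_FILL[v]  # simulated fill outcome
```

The exploration term is what lets the router keep discovering liquidity that an aggregated feed, or a purely greedy rule, would never revisit.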
These advances are proving particularly transformative for clients seeking consistency of execution outcomes across regions, asset classes, and volatility regimes.
As you prepare to moderate this panel, what do you think are the most misunderstood aspects of “machine learning in action” within the trading ecosystem, and what critical questions should asset managers be asking to separate real alpha-generating capability from marketing hype?
The most misunderstood aspect is the belief that ML is a "crystal ball" for price direction; in reality, its greatest value in execution is as a probability engine for liquidity. ML systems are not turnkey alpha engines but rather continuously governed decision frameworks.
Asset managers should be asking how models behave on out-of-sample data, how models are validated across regimes, how explainability and auditability are addressed, and how model risk is monitored over time. They should also focus on whether performance improvements are persistent and measurable in live trading, not just in backtests. If a provider cannot explain why an algorithm accelerated execution during a specific timeframe, that opacity creates an unacceptable compliance risk.
True alpha capability is demonstrated not just by backtested performance, but by the transparency and robustness of the model's decision-making logic in live, stressed markets.
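One concrete question hiding inside "how do models behave out of sample" is whether validation windows ever look ahead. A minimal walk-forward splitter, with window lengths assumed for the example, shows the idea:

```python
def walk_forward_splits(n_obs, train_len, test_len):
    """Yield (train, test) index windows that never look ahead,
    stepping forward by one test window each time."""
    start = 0
    while start + train_len + test_len <= n_obs:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len

# Example: 10 observations, train on 4, test on the next 2.
splits = list(walk_forward_splits(10, 4, 2))
```

Every test window sits strictly after its training window, which is the minimum bar any claimed out-of-sample result should clear before it is compared with live performance.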
Don’t miss Kathryn on Day 1 of Future Alpha 2026, speaking on the panel: Machine Learning in Action: The Next Frontier of Asset Intelligence.