March 31 - April 1, 2026 | New York Marriott at the Brooklyn Bridge

Decode the Market. 
Build the Future.
Capture the Alpha.

Beyond the Backtest: Vince Chen on Building Durable Multi-Factor Portfolios

Spotlight Interview: Vince (Qijun) Chen, CFA, VP, Public Equity & Portfolio Management at Abacus Global Management

When constructing multi-factor portfolios, what practical methods do you find most effective for ensuring true signal de-correlation, and how do you determine whether two signals are genuinely orthogonal versus merely uncorrelated in-sample?

Statistical tests matter, but the more durable filter is economic. Before asking whether two signals are correlated in the data, ask whether they're answering the same investment question. Many "multi-factor" portfolios are really one bet expressed multiple ways - five flavors of quality, or momentum measured over slightly different lookback windows.

The practical test I find useful: if Signal B pushes you away from a name that Signal A likes, can you articulate why in fundamental terms? If high-momentum names get screened out by a mean-reversion signal, that's genuine tension between two views of how markets work. If high-ROE names get screened out by high-ROIC, you're probably measuring the same thing with different accounting line items.

Genuine diversification comes from combining signals with different economic drivers and different failure modes. Value and momentum work well together not because they're statistically uncorrelated in every sample, but because they fail at different times and for different reasons.
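As a rough illustration of that redundancy check - not Chen's own tooling, and on synthetic data - the sketch below builds two profitability-style scores from a shared driver plus an independent momentum score, then compares cross-sectional rank correlations and top-decile overlap. All names and numbers are assumptions for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stocks = 500

# Synthetic cross-sectional scores. ROE and ROIC share a common
# profitability driver, so they rank names similarly; momentum is
# generated independently.
profitability = rng.normal(size=n_stocks)
roe = profitability + 0.2 * rng.normal(size=n_stocks)
roic = profitability + 0.2 * rng.normal(size=n_stocks)
momentum = rng.normal(size=n_stocks)

# Rank (Spearman) correlation of cross-sectional scores: a high value
# suggests two signals are placing the same bet under different labels.
rho_redundant, _ = spearmanr(roe, roic)
rho_distinct, _ = spearmanr(roe, momentum)
print(f"ROE vs ROIC rank correlation:     {rho_redundant:+.2f}")
print(f"ROE vs momentum rank correlation: {rho_distinct:+.2f}")

# A complementary check: how much do the two buy lists overlap?
top = n_stocks // 10
top_roe = set(np.argsort(roe)[-top:])
top_roic = set(np.argsort(roic)[-top:])
print(f"Top-decile overlap, ROE vs ROIC:  {len(top_roe & top_roic) / top:.0%}")
```

A high rank correlation plus heavy buy-list overlap is the quantitative echo of the fundamental question above: two signals that never disagree are one signal wearing two labels.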

Overfitting risk rises as we scale signals across asset classes. What validation frameworks, data regimes, or cross-asset testing techniques have proven most reliable in distinguishing durable edge from statistical noise?

The best defense against overfitting is a strong prior. If you can't explain why a signal should work - in terms of fundamental economics or investor behavior - you should be skeptical regardless of how impressive the backtest looks.

This is why I gravitate toward factors with clear economic rationale. Quality works because well-managed businesses with durable competitive advantages compound value over time. Value works because investors systematically overpay for excitement and underpay for boring. Momentum works because information diffuses slowly and trends persist longer than efficient-market theory suggests. These aren't guaranteed to work every period, but the mechanisms are plausible and persistent.

The validation I trust most is less about statistical sophistication and more about intellectual honesty. Out-of-sample testing is table stakes, but the real discipline is resisting the temptation to "fix" a strategy when it underperforms. Every iteration on the same dataset is a step toward overfitting, even when each change seems reasonable.
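To make "table stakes" concrete, a minimal walk-forward evaluation might look like the following sketch. The window lengths and synthetic return series are assumptions for illustration, not a framework from the interview:

```python
import numpy as np

rng = np.random.default_rng(1)
# ~10 years of synthetic daily strategy returns, for illustration only.
returns = rng.normal(0.0005, 0.01, size=2520)

# Walk-forward evaluation: tune only on the training window, then score
# the untouched window that follows. Never revisit a test window after
# changing the strategy - each revisit is another in-sample iteration.
train_len, test_len = 756, 252  # 3 years train, 1 year test
oos_sharpes = []
start = 0
while start + train_len + test_len <= len(returns):
    train = returns[start : start + train_len]        # tuning happens here (omitted)
    test = returns[start + train_len : start + train_len + test_len]
    sharpe = test.mean() / test.std() * np.sqrt(252)  # annualized, scored once
    oos_sharpes.append(sharpe)
    start += test_len                                 # roll the window forward

print("Out-of-sample Sharpe by window:", np.round(oos_sharpes, 2))
```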

Long-short factor portfolios can behave unexpectedly when signals interact. How do you evaluate factor interaction effects - both amplifying and offsetting - and decide whether to combine, constrain, or separate them?

Factor interactions are where live portfolios diverge from backtests, often because interaction effects are regime-dependent in ways that are difficult to model in advance.

Consider value and quality. Cheap, high-quality names are the intersection everyone wants. But cheap, low-quality names behave very differently depending on context - in a recovery, they can be the biggest winners as distressed assets re-rate; in a recession, they're the ones that go to zero. The right way to handle that interaction depends on where you are in the cycle, which makes it a judgment call rather than a parameter to optimize.

My bias is toward transparency over optimization. I'd rather understand how each factor contributes to the portfolio - even if that means accepting some redundancy - than engineer for an "optimal" combination that becomes a black box. When factor interactions are dynamically weighted, you often end up fitting to the last regime rather than preparing for the next one.
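A minimal sketch of that transparency-first approach - fixed equal weights over z-scored factors, with per-factor attribution - might look like this. The factor names and data are illustrative, not a description of Abacus's process:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 300

# Synthetic cross-sectional factor scores for illustration.
factors = pd.DataFrame({
    "value": rng.normal(size=n),
    "quality": rng.normal(size=n),
    "momentum": rng.normal(size=n),
})

# Transparent combination: z-score each factor, apply fixed equal weights.
# No dynamic weighting, so each factor's contribution to a name's final
# score is trivial to attribute after the fact.
z = (factors - factors.mean()) / factors.std()
weights = pd.Series(1 / 3, index=factors.columns)
composite = z.mul(weights).sum(axis=1)

# Attribution for the single highest-scoring name: which factors drove it?
best = composite.idxmax()
print("Top name:", best)
print(z.mul(weights).loc[best].round(3))
```

The design choice is the point: a fixed-weight composite may leave some optimization on the table, but every position can be explained in one line, which is exactly the property a dynamically fitted black box gives up.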

As signal libraries evolve beyond momentum and mean reversion, what characteristics define a "portfolio-worthy" signal today - stability, interpretability, turnover efficiency, macro sensitivity, or something else?

Interpretability is underrated. A signal you can explain is a signal you can stick with during drawdowns - and every signal has drawdowns. If a complex ensemble underperforms for eighteen months, how do you know whether to stay the course or cut your losses? Understanding the economic mechanism lets you distinguish between a broken thesis and temporarily mispriced assets.

Beyond interpretability, turnover efficiency matters enormously. A signal that requires 200% annual turnover is a signal whose edge gets consumed by transaction costs and market impact. The academic factor literature is full of strategies that exist only in frictionless backtests.
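As a back-of-envelope check on that point, the arithmetic below nets an assumed gross factor premium against an assumed all-in trading cost at 200% turnover. Every number here is hypothetical:

```python
# Illustrative turnover-drag arithmetic; all inputs are assumptions.
gross_annual_return = 0.04   # 4% gross factor premium, assumed
annual_turnover = 2.0        # 200% one-way turnover
cost_per_unit_traded = 0.0030  # 30 bps all-in (commissions + impact), assumed

drag = annual_turnover * cost_per_unit_traded
net = gross_annual_return - drag
print(f"Cost drag: {drag:.2%}  ->  net return: {net:.2%}")
```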

Macro sensitivity cuts both ways - some investors want factors that are macro-neutral; others want known macro exposures they can tilt through the cycle. Neither approach is inherently superior, but understanding how your signals behave across different environments is essential regardless of which path you choose.
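One simple first look at that behavior - a sketch on synthetic monthly data, not a method endorsed in the interview - is to regress factor returns on a macro proxy and compare average returns across up and down environments:

```python
import numpy as np

rng = np.random.default_rng(3)
n_months = 240

# Synthetic monthly data: a factor return series and a macro indicator
# (say, a growth proxy). All numbers illustrative.
macro = rng.normal(size=n_months)
factor_ret = 0.002 + 0.004 * macro + rng.normal(0, 0.02, size=n_months)

# OLS beta of factor returns on the macro series: a crude measure of
# whether the factor carries a macro tilt you would need to manage or hedge.
beta, alpha = np.polyfit(macro, factor_ret, 1)
print(f"macro beta: {beta:+.4f}  alpha (monthly): {alpha:+.4f}")

# Average factor return in 'up' vs 'down' macro environments.
print(f"mean return | macro up:   {factor_ret[macro > 0].mean():+.4f}")
print(f"mean return | macro down: {factor_ret[macro <= 0].mean():+.4f}")
```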

Looking forward, do you see the next generation of multi-factor models being driven more by better signal engineering, improved portfolio construction techniques, or by smarter fusion approaches that dynamically weight and adapt signals?

I'm skeptical of the "smarter fusion" path as typically implemented. Dynamic signal weighting sounds appealing - overweight what's working, underweight what isn't - but in practice it often devolves into chasing recent performance and overfitting the last regime. Markets don't announce regime changes in advance.

My view is that durable progress comes from less glamorous places: better understanding of transaction costs and market microstructure, more thoughtful tax management, and intellectual honesty about what's genuinely alpha versus beta in disguise. Much of what gets labeled "factor alpha" is really compensation for bearing risk or providing liquidity - valuable, but different from how it's often marketed.

On signal engineering, the frontier is less about discovering entirely new factors and more about measuring existing ones with greater precision. Quality, for instance, encompasses accounting stability, competitive moats, and capital allocation discipline - these overlap but aren't identical. Being precise about which dimension you're targeting probably matters more than adding a fifteenth factor to the model.

The enduring edge in this business is temperament as much as technique: the discipline to stick with a sensible process when it's out of favor, and to resist the temptation to "improve" your model every time it underperforms.

Don’t miss Vince on Day 2 at Future Alpha 2026, 2 PM on the AlphaX Stage for the panel: ‘Signal Fusion in Multi-Factor Models – Beyond Momentum and Mean Reversion.’

Book your pass now!