Friday, March 6, 2026

The question that reveals weak quant models

What institutional investors should ask before investing in systematic strategies

Your due diligence process for quantitative managers likely concentrates on performance: backtests, Sharpe ratios, drawdowns, and attribution. It almost certainly does not check whether the variables are structured correctly relative to the economic forces they are intended to capture.

This gap is not small. It may be the biggest undiagnosed source of risk in systematic strategy assessment today. This piece gives you one question that closes it. It requires no technical background and can be used at your next manager meeting.

The pattern

Three allocators at three different institutions described the identical scenario to me within a single week. A systematic equity manager added a "quality" overlay to a value strategy. The backtest improved: higher Sharpe ratio, lower drawdowns, cleaner attribution. The allocation went ahead. Twelve months later, the strategy was underperforming the simpler value-only version it had replaced.

All three allocators concluded that their managers had overfitted the model to historical data. But that diagnosis could not fully explain what went wrong.

The quality score was not an independent variable. It was a consequence of the same forces that drive returns. Including it added no new information. Instead, it introduced a distortion that made the backtest look better precisely because it made the model structurally worse.
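To make the mechanism concrete, here is a minimal simulation sketch. All names, coefficients, and data are invented for illustration; this is not any manager's actual model. A "quality" score that is a downstream consequence of realized returns inflates in-sample fit, then degrades live performance because, at decision time, quality reflects past returns that say nothing about the next period.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

def fit_r2(X, y):
    """OLS fit; return (in-sample R^2, coefficients)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var(), beta

# Backtest period: a genuine value signal drives returns...
value = rng.normal(size=n)
ret = 0.5 * value + rng.normal(size=n)
# ...while "quality" is a *consequence* of the same realized returns
# (a descendant of the outcome), not a cause of them.
quality = 0.8 * ret + rng.normal(scale=0.5, size=n)

X_val = value.reshape(-1, 1)
X_both = np.column_stack([value, quality])
r2_val, beta_val = fit_r2(X_val, ret)
r2_both, beta_both = fit_r2(X_both, ret)
print(f"in-sample R^2, value only:      {r2_val:.2f}")   # ~0.20
print(f"in-sample R^2, value + quality: {r2_both:.2f}")  # ~0.78, looks great

# Live period: the quality score observed at decision time embeds *past*
# returns, which are independent of the next period's return.
value_new = rng.normal(size=n)
ret_new = 0.5 * value_new + rng.normal(size=n)
quality_new = 0.8 * rng.normal(size=n) + rng.normal(scale=0.5, size=n)

mse_val = np.mean((ret_new - value_new.reshape(-1, 1) @ beta_val) ** 2)
X_new = np.column_stack([value_new, quality_new])
mse_both = np.mean((ret_new - X_new @ beta_both) ** 2)
print(f"out-of-sample MSE, value only:      {mse_val:.2f}")
print(f"out-of-sample MSE, value + quality: {mse_both:.2f}")  # worse
```

The regression loads heavily on quality because it literally contains the return's noise term, shrinking the weight on the genuine value signal. The backtest improves and the live model deteriorates, without any classic overfitting (no excess parameters, no data snooping) being involved.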

Researchers call this a "factor mirage." López de Prado later translated these findings into a blog post for practitioners.

Where current frameworks stop

Even the best existing frameworks concentrate on what a model does and how it was created. They do not ask why the variables are structured the way they are. Industry-standard due diligence questionnaires (DDQs) ask which factors a manager uses and how they are defined. They do not ask why those variables were included and others deliberately excluded. This gap is where specification errors hide.

A question that changes the conversation

The value of the question lies in what it reveals. You are not asking for a list of variables. You are asking whether the inclusion and exclusion decisions were based on economic reasoning rather than statistical fit alone.

In my conversations with allocators and managers, the answers fall into three categories.

A strong answer: The manager explains the economic mechanism behind the inclusion of each variable. Crucially, they discuss variables they excluded and why, showing that the specification was a deliberate design decision. They distinguish between variables that drive their target factor and variables that result from it. The strongest managers trace a chain of economic causality: how macroeconomic forces propagate to stock-level signals, and why the model reflects those causal chains rather than searching for correlations.

A standard answer: The manager cites statistical criteria: information ratio, R-squared improvement, significance tests. This is common industry practice. It is not wrong, but it is incomplete. Statistical fit alone cannot distinguish a variable that belongs in the model from one that improves the fit metrics while introducing bias. That is exactly the trap in the opening story.

A worrisome answer takes one of two forms. "We use all available variables and let the model choose" signals a structural vulnerability to factor mirages. "Our variable selection process is proprietary," on the other hand, may reflect legitimate intellectual-property protection. But a manager who cannot explain the reasoning behind the specification, even without disclosing specific variables, cannot demonstrate that the reasoning exists.

Why this matters now

The total portfolio approach (TPA) centralizes factor transparency. The largest pension funds now require every mandate to be expressed in a common factor language. When the entire portfolio must be understood at the factor level, the causal validity of these models directly affects capital allocation and risk budgeting.

Factor returns decay. McLean and Pontiff (2016) document a 50-58% decline in factor returns following academic publication. As more capital chases published factors, the difference between a well-specified model and a mirage becomes the difference between residual alpha and expensive noise.

The most sophisticated allocators already account for this. ADIA Lab has committed dedicated funding to causal inference in investing: a $100,000 annual research award and a global challenge that attracted nearly 2,000 researchers.

Before your next meeting

Ask one question about why the variables are there and why others are not. The quality of the answer tells you more about the structural soundness of a quantitative process than any backtest.

This is the first of four specification-risk dimensions I examine: how managers diagnose performance deficiencies, whether they can explain specific trades, and how sensitive their models are to structural changes. But specification comes first, because if the variables are wrong, nothing downstream can fix the problem.