Friday, March 6, 2026

Attention bias in AI-driven investing

The advantages of using artificial intelligence (AI) in investment management are obvious: faster processing, broader information coverage and lower research costs. However, there may be a growing blind spot that investment professionals shouldn’t ignore.

Large language models (LLMs) are increasingly influencing the way in which portfolio managers, analysts, researchers, quants and even chief investment officers synthesize information, generate ideas and make trading decisions. However, these tools learn from the same financial information ecosystem, which is itself highly skewed. Stocks that attract more media coverage, analyst attention, trading volume and online discussion dominate the information on which AI is trained.

As a result, LLMs may systematically favor large, popular, highly liquid firms, not because the fundamentals warrant it, but because attention does. This introduces a new and largely unrecognized source of behavioral bias in modern investing: bias embedded in the technology itself.

AI Predictions: A Mirror of Our Own Bias

LLMs collect information and learn from texts: news articles, analyst commentary, online discussions, and financial reports. But the financial world doesn’t generate texts evenly across all stocks. Some firms are talked about continuously, from different perspectives and by many voices, while others only appear occasionally. Large firms dominate analyst reports and media coverage, while technology firms grab headlines. Highly traded stocks generate ongoing commentary and meme stocks attract lots of attention on social media. As AI models learn from this environment, they absorb these asymmetries in reporting and discussion, which may then be reflected in forecasts and investment recommendations.

Current research suggests exactly that. When asked to predict stock prices or make buy/hold/sell recommendations, LLMs exhibit systematic preferences in their outputs, including latent biases related to firm size and industry exposure (Choi et al., 2025). For investors using AI as an input to trading decisions, this presents a subtle but real risk: portfolios may inadvertently tilt toward what is already crowded.

In fact, Aghbabali, Chung, and Huh (2025) find evidence that this crowding is already underway: following the release of ChatGPT, investors are increasingly trading in the same direction, suggesting that AI-assisted interpretation is driving belief convergence rather than view diversity.

Four biases that could be hiding in your AI tool

Other recent work documents systematic biases in LLM-based financial analysis, including foreign biases in cross-border forecasts (Cao, Wang, and Xiang, 2025) and sector and size biases in investment recommendations (Choi, Lopez-Lira, and Lee, 2025). Building on this emerging literature, four potential channels are particularly relevant for investment professionals:

1. Size bias: Large firms receive more analyst coverage and media attention, so LLMs have more textual detail about them, which may result in more confident and often optimistic forecasts. In contrast, smaller firms may be treated conservatively simply because there is less information about them in the training data.

2. Industry bias: Technology and financial stocks dominate business news and online discussions. When AI models internalize this optimism, they can systematically assign higher expected returns or more favorable recommendations to these sectors, regardless of valuation or cycle risk.

3. Volume bias: Highly liquid stocks generate more trading commentary, news flow and price discussion. AI models may implicitly prefer these names because they appear more frequently in training data.

4. Attention bias: Stocks with a strong social media presence or high search activity tend to attract a disproportionate amount of investor attention. AI models trained on web content can adopt this hype effect, reinforcing popularity rather than fundamentals.
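A simple diagnostic for any of the four channels above is to tabulate an AI tool's buy/hold/sell calls by a visibility bucket (size, coverage, volume, or social-media attention) and check whether the buy rate rises with visibility. The sketch below uses hypothetical recommendations and bucket labels purely for illustration:

```python
from collections import defaultdict

def buy_rate_by_bucket(recommendations):
    """Fraction of 'buy' calls per visibility bucket.

    recommendations: list of (bucket, call) pairs, where bucket is a
    visibility label such as 'large-cap'/'small-cap' and call is
    'buy', 'hold', or 'sell'.
    """
    counts = defaultdict(lambda: [0, 0])  # bucket -> [buys, total]
    for bucket, call in recommendations:
        counts[bucket][1] += 1
        if call == "buy":
            counts[bucket][0] += 1
    return {b: buys / total for b, (buys, total) in counts.items()}

# Hypothetical AI output: the tool leans toward large, visible names.
recs = [
    ("large-cap", "buy"), ("large-cap", "buy"), ("large-cap", "hold"),
    ("large-cap", "buy"), ("small-cap", "hold"), ("small-cap", "sell"),
    ("small-cap", "buy"), ("small-cap", "hold"),
]
rates = buy_rate_by_bucket(recs)
print(rates)  # {'large-cap': 0.75, 'small-cap': 0.25}
```

A persistent gap between buckets that survives controlling for fundamentals is a red flag for attention-driven bias rather than genuine opportunity.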

These biases matter because they can distort both idea generation and risk allocation. When AI tools overweight well-known names, investors may unwittingly reduce diversification and miss under-researched opportunities.

How this shows up in real investment workflows

Many professionals are already integrating AI into their daily workflows. Models summarize filings, extract key metrics, compare competitors, and suggest preliminary recommendations. These efficiency gains are real. However, if AI consistently highlights large, liquid, or popular stocks, portfolios may begin to lean toward crowded segments without anyone consciously making that call.

Imagine a small industrial company with improving margins and little analyst coverage. An AI tool trained on sparse online discussion may produce cautious language or weaker recommendations despite the improving fundamentals. Meanwhile, a high-profile technology stock with a strong media presence might be presented with continued optimism even as valuation risk increases. Over time, idea pipelines shaped by such outputs can narrow rather than expand the range of opportunities.

Related evidence suggests that AI-generated investment advice can increase portfolio concentration and risk by overweighting dominant sectors and popular assets (Winder et al., 2024). What appears efficient on the surface can quietly reinforce herding behavior beneath the surface.

Accuracy is only half the story

Debates about AI in finance often focus on whether models can accurately predict prices. But bias raises another concern. Even if average forecast accuracy appears reasonable, the errors may not be evenly distributed across the cross-section of stocks.

If AI systematically underestimates smaller or lesser-known firms, it may continually miss potential alpha. Overestimating highly visible firms can lead to crowded trades or momentum traps.

The risk isn’t simply that the AI gets some predictions wrong. The risk is that the errors will be wrong in a predictable and concentrated way, precisely the kind of exposure that professional investors need to manage.
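One way to see whether errors are concentrated rather than random is to split signed forecast errors by a visibility proxy (say, analyst coverage) and compare the mean error per group. The numbers below are made up for illustration; the point is that two offsetting biases can hide behind a reasonable-looking average:

```python
def mean_signed_error(pairs):
    """Mean of (forecast - realized); positive means systematic over-estimation."""
    return sum(f - r for f, r in pairs) / len(pairs)

# Hypothetical (forecast, realized) return pairs, in percent.
high_visibility = [(8.0, 5.0), (6.0, 4.0), (7.0, 6.0)]  # over-optimistic
low_visibility = [(2.0, 5.0), (1.0, 3.0), (3.0, 6.0)]   # over-cautious

bias_high = mean_signed_error(high_visibility)  # positive: visible names inflated
bias_low = mean_signed_error(low_visibility)    # negative: quiet names discounted
print(bias_high, bias_low)  # 2.0 and roughly -2.67
```

Here the pooled average error is close to a third of a point, yet each group carries a bias of over two points in opposite directions, exactly the predictable, concentrated exposure described above.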

As AI tools move ever closer to frontline decision-making, this distributional risk becomes increasingly relevant. Screening models that tacitly encode attention biases can shape portfolio construction long before human judgment intervenes.

What practitioners can do about it

When used thoughtfully, AI tools can significantly improve productivity and the scope of analysis. The key is to treat them as inputs rather than authorities. AI is best suited as a starting point, generating ideas, organizing information and speeding up routine tasks, while final judgment, valuation discipline and risk management remain firmly human-driven.

In practice, this means paying attention not only to what AI produces, but also to patterns in its outputs. When AI-generated ideas consistently cluster around large-cap names, dominant sectors, or highly visible stocks, that clustering itself may be a signal of embedded bias rather than opportunity.

Regularly stress testing AI outputs, by expanding screens to less-covered firms, less-followed sectors, or low-attention segments, can help ensure that efficiency gains don’t come at the expense of diversification or more granular insight.
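The clustering described above can be quantified with a simple concentration measure. A minimal sketch (sector labels are hypothetical) compares the Herfindahl index of an AI-generated idea list against that of the investable universe; a much higher value for the idea list suggests the tool is funneling attention into a few crowded sectors:

```python
from collections import Counter

def herfindahl(labels):
    """Herfindahl concentration of a list of sector labels.

    Ranges from 1/N (evenly spread over N sectors) up to 1.0
    (everything in a single sector).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# Hypothetical sector labels for an AI idea list vs. the broader universe.
ai_ideas = ["tech", "tech", "tech", "financials", "tech", "financials"]
universe = ["tech", "financials", "industrials", "utilities", "energy", "health"]

print(herfindahl(ai_ideas), herfindahl(universe))  # ~0.56 vs ~0.17
```

Tracking this ratio over time turns the vague worry about "clustering" into a number that can trigger a deliberate widening of the screen.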

The real edge lies not with the investment professionals who use AI most aggressively, but with those who understand how AI-driven beliefs arise and where they reflect attention rather than economic reality.
