Friday, March 6, 2026

AI Strategy After the LLM Boom: Preserve Sovereignty, Avoid Capture

It’s time to rethink AI’s presence, deployment and strategy

This week, at an evidence session of the UK Parliament’s All-Party Parliamentary Group on Artificial Intelligence (APPG AI), Yann LeCun, Meta’s recently departed chief AI scientist and one of the fathers of modern AI, presented a technically grounded view of the evolving AI risk and opportunity landscape. This post draws on LeCun’s testimony to the group and includes quotes taken directly from his remarks.

His comments are relevant to investment managers because they address three areas that capital markets often consider in isolation but shouldn’t: AI capability, AI control and AI economics.

The prevailing AI risks no longer center on who trains the biggest model or secures the most advanced accelerators. Increasingly, they concern who controls the interfaces to AI systems, where information flows, and whether the current wave of LLM-centric capital spending will generate acceptable returns.

The biggest AI risk

“That’s the biggest risk I see in the future of AI: the collection of information by a small number of companies through proprietary systems.”

For states, this is a national security problem. For investment managers and corporations, it is a dependency risk. When research and decision-support processes are mediated through a limited number of proprietary platforms, trust, resilience, data confidentiality and bargaining power all weaken over time.

LeCun identified “federated learning” as a partial mitigation. In such systems, the central model does not need to see the underlying data during training; instead, the parties exchange only model parameters.

In principle, this allows the resulting model to “…perform as if it had been trained on the entire data set…without the data ever leaving (your domain).”

However, this is not a simple solution. Federated learning requires a new kind of setup, with trusted orchestration between the parties and the central model, as well as secure cloud infrastructure at a national or regional level. It reduces data-sovereignty risk, but it does not eliminate the need for sovereign cloud capacity, reliable energy supplies, or sustained capital investment.
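
To make the mechanism concrete, here is a minimal federated-averaging sketch in Python, assuming a toy linear model; `local_update`, `federated_round` and the three-institution setup are illustrative inventions, not any particular framework’s API. Each party computes an update on its own data and shares only parameters.

```python
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """One step of local training; the raw data never leaves its owner."""
    X, y = local_data
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)      # mean-squared-error gradient
    return weights - lr * grad

def federated_round(global_weights, parties):
    """Each party trains locally; only updated parameters are shared and
    averaged, so the orchestrator never sees any party's data."""
    updates = [local_update(global_weights.copy(), data) for data in parties]
    return np.mean(updates, axis=0)

# Three institutions, each holding private data on its own infrastructure.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    parties.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, parties)
print(w)  # close to true_w, as if trained on the pooled data
```

The trusted-orchestration requirement shows up even in this toy: someone has to run `federated_round`, distribute the global weights and be believed not to tamper with them.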

AI assistants as a strategic vulnerability

“We cannot afford to have these AI assistants under the proprietary control of a handful of companies in the US or China.”

AI assistants are unlikely to remain simple productivity tools. They will increasingly mediate everyday information flows and shape what users see, ask and conclude. LeCun argued that concentration risk at this layer is structural:

“We will need a wide variety of AI assistants, for the same reason we need a wide variety of news media.”

These risks sit primarily at the government level, but they matter for investment professionals too. Beyond the obvious abuse scenarios, there is a risk that funneling information through a small number of assistants amplifies behavioral distortions and homogenizes analysis.

Edge compute doesn’t remove cloud dependency

“Some run on your local device, but most of it has to run somewhere in the cloud.”

From a sovereignty perspective, edge deployment can move some workloads onto local devices, but it does not eliminate the ownership and control issues:

“There is a real question here about jurisdiction, privacy and security.”

LLM capability is overrated

“We are deceived into believing that these systems are intelligent because they are good at language.”

The problem is not that large language models are useless. It is that fluency is often confused with reasoning or with understanding the world, a crucial distinction for agentic systems that rely on LLMs for planning and execution.

“Language is simple. The real world is chaotic, noisy, high-dimensional, continuous.”

For investors, this raises a familiar question: how much of current AI investment is building durable intelligence, and how much is user-experience optimization around statistical pattern matching?

World models and the post-LLM horizon

“Despite the achievements of current language-oriented systems, we are still very far from the kind of intelligence we see in animals or humans.”

LeCun’s concept of world models centers on learning how the world behaves, not just how language correlates. Where LLMs are optimized to predict the next token, world models aim to predict consequences. That is the distinction between surface-level pattern replication and models that are more causally grounded.
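
The contrast can be written down schematically. In the sketch below, `predict_next` and `predict_next_state` are hypothetical stand-ins for an LLM and a world model respectively, not any real API; the point is only where each objective gets its training signal.

```python
import numpy as np

def next_token_loss(predict_next, tokens, t):
    """LLM objective: predict the next token from the tokens so far."""
    probs = predict_next(tokens[:t])       # distribution over the vocabulary
    return -np.log(probs[tokens[t]])       # cross-entropy on the true next token

def consequence_loss(predict_next_state, state, action, next_state):
    """World-model objective: predict how the world changes if an action
    is taken, before the action is taken."""
    predicted = predict_next_state(state, action)
    return float(np.sum((predicted - next_state) ** 2))
```

The first loss is computed entirely inside language; the second is scored against what the world actually did next, which is the quantity an agent needs before it acts.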

This does not mean that today’s architectures will disappear, but rather that they may not be the ones that ultimately deliver sustainable productivity gains or investment returns.

Meta and the risk to open platforms

LeCun acknowledged that Meta’s position has changed:

“Meta used to be a leader in providing open source systems.”

“Last year we lost ground.”

This reflects broader industry dynamics rather than a simple strategic reversal. While Meta continues to release models under open-weight licenses, competitive pressures and the rapid proliferation of model architectures, highlighted by the emergence of Chinese research groups such as DeepSeek, have shortened the shelf life of purely architectural advantages.

LeCun’s concerns were framed not as criticism of a single company, but as a systemic risk:

“Neither the US nor China should dominate this space.”

As value shifts from model weights to distribution, platforms increasingly favor proprietary systems. From a sovereignty and dependency perspective, this trend deserves the attention of investors and policymakers alike.

Agentic AI: ahead of governance maturity

“Today, agent systems have no way of predicting the consequences of their actions before they act.”

“This is a very bad way to design systems.”

For investment managers experimenting with agents, this is a clear warning. Deployed prematurely, agents risk propagating hallucinations through decision chains and poorly governed action loops. While the technology is advancing quickly, governance frameworks for agentic AI remain underdeveloped compared with professional standards in regulated investment environments.
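
One way that warning translates into practice is a pre-action gate: the agent must declare what it intends to do, and anything low-confidence or irreversible is escalated to a human rather than executed. A minimal sketch follows, with all names (`ProposedAction`, `guarded_step`, the 0.9 threshold) hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str      # what the agent intends to do
    reversible: bool      # can the action be undone after the fact?
    confidence: float     # agent's self-reported confidence, 0..1

def guarded_step(propose: Callable[[], ProposedAction],
                 execute: Callable[[ProposedAction], None],
                 escalate: Callable[[ProposedAction], bool],
                 min_confidence: float = 0.9) -> None:
    """One iteration of an agent loop with a pre-action gate: anything
    low-confidence or irreversible goes to a human before execution."""
    action = propose()
    if action.confidence < min_confidence or not action.reversible:
        if not escalate(action):   # human-in-the-loop approval
            return                 # rejected: nothing propagates downstream
    execute(action)
```

The point of the gate is that a hallucinated step fails closed: it stops at the review boundary instead of feeding the next stage of the decision chain.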

Regulation: applications, not research

“Don’t regulate research and development.”

“They ensure that Big Tech takes over the regulatory measures.”

LeCun argued that poorly targeted regulation entrenches incumbents and raises barriers to entry. The focus of regulation should instead be on deployed applications and their outcomes:

“Anytime AI is used and can have a major impact on people’s rights, there needs to be regulation.”

Conclusion: Preserve sovereignty, avoid capture

The immediate AI risk is not general intelligence running out of control. It is the capture of information and economic value inside proprietary, cross-border systems. Sovereignty matters at both the government and the corporate level, which means that when deploying LLMs in your organization, security comes first: a low-trust approach.

LeCun’s testimony shifts attention away from headline model releases and toward who controls data, interfaces, and compute. At the same time, much current AI investment remains anchored in an LLM-centric paradigm, even though the next phase of AI will likely look significantly different. That combination creates a familiar environment for investors: an elevated risk of capital misallocation.

In times of rapid technological change, the greatest danger is not what the technology can do, but where dependency and rents ultimately accrue.
