Friday, June 13, 2025

The EU law on artificial intelligence and financial services

Is artificial intelligence (AI) currently regulated within the financial services industry? “No” is often the intuitive answer.

But a more in-depth look reveals parts of existing financial regulations that implicitly or explicitly apply to AI – for instance, the treatment of automated decisions in GDPR, algorithmic trading in MiFID II, algorithm governance in RTS 6, and many provisions of various cloud regulations.

While some of these laws are very forward-looking and future-proof – notably GDPR and RTS 6 – they were all written before the recent explosion in AI capabilities and adoption. Therefore, they’re what I call “pre-AI.” Additionally, AI-specific regulations have been discussed for at least a few years, and various regulatory and industry bodies have issued high-profile white papers and guidance, but no official regulations per se.

But that changed in April 2021, when the European Commission issued its Artificial Intelligence Act (AI Act) proposal. The current text applies to all sectors, but as a proposal it is non-binding, and its final version may differ from the 2021 draft. While the act aims for a horizontal and universal structure, specific industries and applications are explicitly listed.

The act takes a risk-based “pyramid” approach to AI regulation. At the top of the pyramid are prohibited applications of AI, such as subliminal manipulation (deepfakes, for example), exploitation of vulnerable individuals and groups, social credit scoring, and real-time biometric identification in public spaces (with certain exceptions for law enforcement purposes). Next come high-risk AI systems, which affect fundamental rights, safety, and well-being, for instance in aviation, critical infrastructure, law enforcement, and healthcare. Then there are several kinds of AI applications on which the AI Act imposes certain transparency requirements. Finally, there is the unregulated “everything else” category, which by default covers more mundane AI solutions such as chatbots, banking systems, social media, and web search.
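The pyramid can be pictured as a simple tiered lookup. The sketch below is purely illustrative: the tiers and examples come from the proposal as summarized above, but the enum, mapping, and function names are my own invention, not anything defined by the act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"                # top of the pyramid
    HIGH_RISK = "high-risk"                  # Annex-listed sensitive uses
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "everything else"              # unregulated by default

# Hypothetical mapping of example applications to tiers, per the proposal.
EXAMPLES = {
    "social credit scoring": RiskTier.PROHIBITED,
    "real-time biometric identification in public spaces": RiskTier.PROHIBITED,
    "critical infrastructure": RiskTier.HIGH_RISK,
    "law enforcement": RiskTier.HIGH_RISK,
    "chatbots": RiskTier.MINIMAL,
    "web search": RiskTier.MINIMAL,
}

def classify(application: str) -> RiskTier:
    """Look up an application's tier; unlisted uses fall to MINIMAL by default."""
    return EXAMPLES.get(application, RiskTier.MINIMAL)

print(classify("chatbots").value)  # prints "everything else"
```

The default-to-minimal lookup mirrors the act’s structure: anything not explicitly listed lands in the unregulated category.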

Although everyone understands the importance of regulating AI in areas that are fundamental to our lives, such regulations can hardly be universal. Fortunately, the regulators in Brussels included an umbrella provision, Article 69, which encourages providers and users of lower-risk AI systems to voluntarily and proportionally adhere to the same standards as their counterparts using high-risk systems.

Liability is not part of the AI Act, but the European Commission has indicated that future initiatives will address liability and complement the act.


The AI Act and financial services

The financial services sector occupies a gray area in the act’s list of sensitive sectors. This should be clarified in a future draft.

  • The explanatory memorandum describes financial services as a “high impact” sector, rather than a “high risk” sector like aviation or healthcare. Whether this is merely a matter of semantics remains unclear.
  • Finance is not among the high-risk systems listed in Annexes II and III.
  • Various sections refer to “credit institutions,” or banks.
  • Credit scoring is listed as a high-risk use case. However, the explanatory text places it in the context of access to essential services such as housing and electricity, as well as fundamental rights such as non-discrimination. Overall, this has more to do with the prohibited practice of social credit scoring than with financial services per se. The final draft of the act should clarify this.

The act’s position on financial services leaves room for interpretation. As it stands, financial services would fall under Article 69 by default. The AI Act explicitly emphasizes proportionality, which strengthens the case for applying Article 69 to financial services.

The main stakeholder roles mentioned in the act are “providers” and “users.” This terminology is consistent with the AI-related soft law published in recent years, whether guidelines or best practices. “Operator” is a common term in AI parlance, and the act provides its own definition that includes providers, distributors, and all other actors in the AI supply chain. Of course, the real-world AI supply chain is much more complex: third parties act as providers of AI systems to financial firms, and financial firms act as providers of the same systems to their clients.

The European Commission estimates the cost of complying with the AI Act at €6,000 to €7,000 for providers, presumably once per system, and at €5,000 to €8,000 per year for users. Of course, given the variety of these systems, a single set of figures can hardly apply to all industries, so these estimates are of limited value. Still, they can serve as an anchor against which to compare the actual cost of compliance across sectors. Inevitably, some AI systems require such close oversight by both the provider and the user that the cost will be far higher, creating an unnecessary dissonance with these estimates.


Governance and Compliance

The AI Act introduces a detailed, comprehensive, and novel governance framework: the proposed European Artificial Intelligence Board would oversee individual national authorities. Each EU member state can either designate an existing national body to oversee AI or, as Spain recently decided to do, create a new one. Either way, this is a huge undertaking. AI providers are required to report incidents to their national authority.

The act establishes many regulatory compliance requirements applicable to financial services, including:

  • Ongoing risk management processes
  • Data and data governance requirements
  • Technical documentation and records
  • Transparency and provision of data to users
  • Knowledge and competence
  • Accuracy, robustness and cybersecurity

With its detailed and strict penalty system for violations, the AI Act follows in the footsteps of GDPR and MiFID II. Depending on the severity of the violation, the penalty can be as much as 6% of the company in question’s annual global turnover. For a multinational technology or financial company, that could amount to billions of dollars. The AI Act’s sanctions actually sit in the middle between those of GDPR and MiFID II, where the maximum fines are 4% and 10%, respectively.
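To put those percentage caps in perspective, here is a quick back-of-the-envelope calculation. The €50 billion turnover figure is purely illustrative, not drawn from any real company:

```python
def max_fine(annual_global_turnover_eur: float, cap_pct: float) -> float:
    """Upper bound of a turnover-based fine under a given percentage cap."""
    return annual_global_turnover_eur * cap_pct / 100

# Hypothetical multinational with €50 billion in annual global turnover.
turnover = 50e9

# Caps as stated above: GDPR 4%, AI Act 6%, MiFID II 10%.
for regime, cap_pct in [("GDPR", 4), ("AI Act", 6), ("MiFID II", 10)]:
    print(f"{regime}: up to €{max_fine(turnover, cap_pct) / 1e9:.0f} billion")
```

At that scale, the AI Act’s 6% cap alone implies a €3 billion exposure, which is why the “billions of dollars” characterization above is no exaggeration.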


What’s next?

Just as GDPR became the benchmark for data protection regulations, the EU AI Act is expected to become the model for similar AI regulations worldwide.

Because there are no regulatory precedents to build on, the AI Act suffers from a certain first-mover disadvantage. However, it was preceded by thorough consultation, and its publication sparked vigorous debate in legal and financial circles, which will hopefully be reflected in the final version.

An immediate challenge is the act’s overly broad definition of AI: the definition proposed by the European Commission includes statistical approaches, Bayesian estimation, and possibly even Excel calculations. As law firm Clifford Chance commented: “This definition could encompass almost any enterprise software, even if it does not involve any recognizable form of artificial intelligence.”

Another challenge is the act’s proposed regulatory framework. A single national regulatory authority would need to cover all sectors. This could create a splintering effect, whereby a sectoral regulator oversees all aspects of a specific industry except AI-related matters, which would fall under the separate regulator mandated by the AI Act. Such an approach would hardly be optimal.

In AI, one size may not fit all.

Furthermore, the interpretation of the law at the individual industry level is almost as important as the wording of the law itself. Either existing financial regulators or newly created and designated AI regulators should provide guidance to the financial services sector on interpreting and implementing the act. These interpretations should be consistent across all EU member states.

When passed, the AI Act will become legally binding hard law; but unless Article 69 is significantly modified, its provisions will remain soft law, or recommended best practice, for all industries and applications except those specifically listed. This seems a sensible and flexible approach.


With the publication of the AI Act, the EU has taken bold steps that no other regulator has taken before. Now we have to wait, hopefully not for long, to see what regulatory proposals emerge in other technologically advanced jurisdictions.

Will they recommend industry-specific AI regulations? Will their regulations promote democratic values or strengthen government control? Could some jurisdictions opt for little or no regulation? Will AI regulations coalesce into a universal set of global rules, or will they be “balkanized” by region or industry? Only time will tell. But I believe AI regulation will benefit financial services: it will clarify the current regulatory landscape and hopefully help solve some of the sector’s most pressing challenges.

If you enjoyed this post, do not forget to subscribe.


Photo credit: ©Getty Images / mixmagic

