AI Act: A new compliance agenda with major impact on the financial sector

The EU’s AI Act constitutes a new chapter in the history of regulation of financial services enterprises.


For the first time, the EU is introducing a comprehensive set of rules for artificial intelligence, which has the potential to become a global standard in the same manner as the GDPR. You may think this is primarily about chatbots, ChatGPT, etc., but the regulation actually has more far-reaching and profound effects. It affects the core of the business models of banks, pension funds, and insurance companies, where AI is already an integral part of the decision-making process, for example with respect to customers, prices, and risks.

Strict rules with considerable implications


The AI Regulation has been adopted and its requirements apply in phases from 2025; as early as 2026, many of the requirements for so-called high-risk AI systems will take effect. This includes models used for credit assessment, pricing, underwriting, recruitment, or money laundering monitoring.

AI models classified as ‘high risk’ under the rules of the AI Regulation — which in the financial sector will cover a large share of models — will be subject to a number of mandatory obligations, including:

  • Risk management throughout the lifetime of the AI system
  • Data management and technical documentation
  • Registration in the central EU database
  • Human oversight with the possibility of intervention
  • Automated logging requirements
  • Fairness and transparency requirements


The penalties for non-compliance are severe. An organization may incur liability for the use of prohibited AI or for breach of the rules relating to high-risk models: fines can reach EUR 35 million or 7 percent of global annual turnover, whichever is higher, for prohibited AI practices, and EUR 15 million or 3 percent for breaches of the high-risk requirements. This is on a par with, and in some cases higher than, GDPR penalties.

AI no longer just an IT matter


Many financial services enterprises already use AI and machine learning in their processes and decision-making systems, and many are not fully aware of the extent of that use. Platforms such as Earnix, SAS, Databricks, Domino Data Lab, and Actico are common in the financial sector, but are often perceived as technical tools rather than artificial intelligence. The AI Act redefines that notion. The rules are less about the technology itself and more about how these systems affect people: they are designed to protect fundamental rights and address the risks associated with AI, while leaving room for innovation and the uptake of artificial intelligence.

That is why a pricing system that provides two customers with very different offers based on opaque scoring practices is covered by the rules. An underwriting model that rejects applicants based on statistical probabilities is covered. A model for money laundering monitoring that automatically flags a customer as suspicious is covered. And an AI model that filters job applicants is undoubtedly also covered.

It is crucial that the second and third lines of defense understand, navigate, and monitor the system landscape from this new risk perspective, so that they are capable of addressing the relevant risks. Many of the systems in a typical landscape have almost certainly never been assessed and controlled from this regulatory angle. Traditionally, control and audit plans have focused on systems that feed the external financial statements and the company’s financial control environment, but the AI Act reaches further. Risk monitoring, compliance controls, and audit plans therefore need to be extended significantly to cover all systems incorporating AI components and machine learning.
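To make such a mapping concrete, an inventory entry can be kept per AI system in a structured form. The sketch below is a minimal, hypothetical illustration in Python: the record fields and the hard-coded set of high-risk use cases are assumptions for illustration only, not a legal classification under the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical set of use cases treated as high risk for this sketch.
# A real mapping must follow the AI Act's annexes and legal advice,
# not a hard-coded list.
HIGH_RISK_USE_CASES = {
    "credit_assessment",
    "life_health_insurance_pricing",
    "recruitment_screening",
}

@dataclass
class AISystemRecord:
    name: str
    platform: str          # e.g. "SAS" or "Earnix" -- the tool itself is not decisive
    use_case: str          # what the system decides about people
    risk_class: RiskClass = RiskClass.MINIMAL
    documented_controls: list = field(default_factory=list)

    def classify(self) -> "AISystemRecord":
        # Classification follows the use case, not the underlying technology.
        if self.use_case in HIGH_RISK_USE_CASES:
            self.risk_class = RiskClass.HIGH
        return self

inventory = [
    AISystemRecord("Consumer credit scoring", "SAS", "credit_assessment").classify(),
    AISystemRecord("CV screening model", "Actico", "recruitment_screening").classify(),
]
high_risk = [r.name for r in inventory if r.risk_class is RiskClass.HIGH]
```

The point of such a register is that the second and third lines can see at a glance which systems attract the high-risk obligations, regardless of which vendor platform they run on.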


How do you actually secure compliance?


Meeting the requirements of the AI Act is not a matter of simple ‘tick the box’ compliance; a structured and thorough approach is required. Organizations with high-risk AI systems must be able to document every element in detail, from data sources, assumptions, and model choices to training methods, monitoring, controls, governance, and the avoidance of bias. It is about transparency and control, documented across all three lines of defense.

The supervisory authorities do not expect the technology to be perfect. But they do expect organizations to be in control of the systems they use and the accompanying documentation, and to be aware of their ethical responsibilities. Many organizations are used to documenting policies, processes, and user access rights, but not models, training data, or the fairness of algorithms. As a result, the AI Act is complex and presents significant challenges for many companies.

Companies that are not fully ready when the rules take effect are, as a minimum, expected to have clear plans in place for achieving a complete overview and full compliance within a limited period. As always, there is no guarantee that the supervisory authorities will show understanding or forgiveness toward organizations that have not met the deadlines.


A complex task requiring precision rather than overdesign


At Atlab FS and Conformance, we combine deep expertise in the financial sector with technical, legal, and control and audit understanding, and insight into the rules. We know the statutory requirements, the models, and the practical challenges facing operations, which enables us to find the right level. We also have several years’ experience in designing technical platforms and providing services to the second and third lines of defense, allowing us to contribute additional competences and resources to these functions when control and audit plans are drafted and implemented in the organization.

Compliance requires precision, not over-implementation. The measures put in place must be adequate and well documented, but no more extensive than required. Overly extensive, unnecessary, and opaque compliance wastes resources, hinders understanding, and may ultimately breed resistance.

We offer to provide you with an overview, help you strike the right balance, and ensure that your documentation creates value, is compliant, and will pass inspection. Nothing more. Nothing less.


Our services include:


  • Mapping and categorization of AI systems
  • Assistance to the second and third lines of defense in the form of analysis of the organization’s maturity level
  • Visualization of the gaps between the current situation and full compliance
  • Assistance in the preparation of control and audit plans
  • Bridge building between implementation teams (if any) and the three lines of defense
  • Preparation of documentation and technical descriptions
  • Review of data sets, fairness, and the risk of bias
  • Training of the second and third lines of defense in AI regulation and governance


The AI Act cannot be deferred. The requirements for high-risk systems will be enforced as early as 2026.