Legal news

Everything you need to know about the Artificial Intelligence Regulation (RIA)

By: Eloi Font
Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (hereinafter referred to as the "RIA") will be almost fully applicable from August 2026.

The Artificial Intelligence Regulation (RIA) sets the new European standard for the development and use of AI, combining innovation and trust. Through a risk-based approach, it establishes a clear framework that drives the responsible adoption of these technologies, strengthens legal certainty for companies and ensures the protection of fundamental rights in an increasingly digital environment.

 

Who does it apply to?

The RIA has a broad scope of application. It applies to providers, deployers, importers, distributors and authorised representatives of AI systems, whether they are established in the European Union (EU) or, even if established outside the EU, they place systems on the EU market or the output of their systems is used there.

 

Which AI systems are affected?

The RIA articulates its obligations according to the level of risk of AI, concentrating the most intense demands on high-risk AI systems.

Systems considered to be high-risk include, but are not limited to, AI systems used in the following areas:

  • Education and vocational training, such as systems that determine access to education, assess student performance, or monitor and detect prohibited behaviour during tests or exams.
  • Employment, worker management and access to self-employment, such as recruitment tools, performance appraisal, task assignment or employee tracking systems.
  • Essential private and public services, such as systems that condition access to or enjoyment of certain essential private services or essential public services and benefits, such as credit, insurance, housing, social benefits or certain health and emergency services.

 

What obligations does the RIA impose with respect to high-risk systems?

High-risk AI systems are subject to the strictest obligations. These include risk and quality management requirements, technical documentation, record keeping, human oversight, transparency, cybersecurity, and, where required, a conformity assessment before the system is placed on the market or put into service. In addition, those deploying these systems must use them in accordance with the provider's instructions, ensure effective human oversight and comply with certain additional control and reporting obligations.

The application of the obligations provided for in the RIA is staggered. Since February 2025, the bans on certain AI practices and the AI literacy requirements have been enforceable. From August 2026, the regime for the high-risk systems listed in Annex III will apply generally. For high-risk systems linked to products regulated by EU sectoral legislation and included in Annex I, these obligations will apply from August 2027.

 

What happens if these obligations are not met?

The RIA incorporates a significant penalty regime for companies. The highest fines are reserved for the use of prohibited AI practices and can amount to up to €35 million or 7% of annual worldwide turnover. In addition, the RIA provides for fines of up to €15 million or 3% of annual worldwide turnover for non-compliance with certain essential obligations, especially those linked to high-risk systems and the responsibilities of the different operators in the value chain.

Finally, supplying incorrect, incomplete or misleading information to the authorities can result in fines of up to €7.5 million or 1% of annual worldwide turnover.

At the domestic level, account should be taken of the Draft Law on the proper use and governance of artificial intelligence, currently at the pre-legislative stage, which aims to adapt Spanish law to the RIA and which, among other things, provides for the designation of the competent national authorities for market supervision and surveillance, as well as the development of the penalty regime applicable in Spain.

 

How do we propose to respond to the challenges posed by the RIA?

First, carry out an audit or "discovery" phase to understand the AI that the organisation is using and/or plans to implement. Next, regulate its use through an internal policy and determine the company's role in relation to each AI system in order to implement the corresponding obligations. Finally, it may be necessary to appoint an AI Officer responsible for controlling and/or mitigating the impact of AI on the organisation.