Legal Updates: key legislative and case law developments
Legal and case law updates: lease buyback rights, Confidential Binding Offer, bank liability for phishing, and U.S. tariff response measures.

The European Artificial Intelligence Regulation (RIA, by its Spanish acronym) sets the new European standard for the development and use of AI, combining innovation with trust. Through a risk-based approach, it establishes a clear framework that drives the responsible adoption of these technologies, strengthens legal certainty for companies and ensures the protection of fundamental rights in an increasingly digital environment.
The RIA has a broad scope of application. It applies to providers, deployers, importers, distributors and authorised representatives of AI systems, whether they are established in the European Union (EU) or, even if established outside the EU, they place systems on the EU market or the output of those systems is used there.
The RIA scales its obligations to the level of risk posed by each AI system, concentrating the most demanding requirements on high-risk AI systems.
Systems considered high-risk include, among others, AI systems used in areas such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice.
High-risk AI systems are subject to the strictest obligations. These include risk and quality management requirements, technical documentation, record keeping, human oversight, transparency, cybersecurity, and, where required, a conformity assessment prior to placing the system on the market or putting it into service. In addition, deployers of these systems must use them in accordance with the provider's instructions, ensure effective human oversight and comply with certain additional control and reporting obligations.
The obligations provided for in the RIA apply on a staggered basis. Since February 2025, the bans on certain AI practices and the AI literacy requirements have been enforceable. From August 2026, the regime for the high-risk systems listed in Annex III will apply generally. For high-risk systems linked to products regulated by EU sectoral legislation and listed in Annex I, the regime will apply from August 2027.
The RIA incorporates a penalty regime of particular relevance to companies. The highest fines are reserved for the use of prohibited AI practices and can amount to up to €35 million or 7% of annual global turnover. In addition, the RIA provides for penalties of up to €15 million or 3% for non-compliance with certain essential obligations, especially those linked to high-risk systems and the responsibilities of the different operators in the value chain.
Finally, providing incorrect, incomplete or misleading information to the authorities can result in fines of up to €7.5 million or 1% of annual global turnover.
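As an illustration, the three fine ceilings above can each be read as "the higher of a fixed amount and a share of worldwide annual turnover" (the rule applied to most undertakings; for SMEs the RIA takes the lower of the two instead). A minimal sketch, using a hypothetical turnover figure:

```python
def max_fine_eur(annual_global_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Upper bound of an administrative fine for an undertaking:
    the higher of the fixed cap and the percentage of worldwide
    annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion annual global turnover.
turnover = 2_000_000_000

print(max_fine_eur(turnover, 35_000_000, 0.07))  # prohibited practices: 140000000.0
print(max_fine_eur(turnover, 15_000_000, 0.03))  # essential obligations: 60000000.0
print(max_fine_eur(turnover, 7_500_000, 0.01))   # incorrect information: 20000000.0
```

For a company of this size the turnover-based limb dominates in every tier; for smaller companies the fixed caps become the binding ceiling.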
At the domestic level, account should be taken of the Draft Law on the proper use and governance of artificial intelligence, currently at the pre-legislative stage, which aims to adapt Spanish law to the RIA and which, among other things, provides for the designation of the competent national market supervision and surveillance authorities, as well as the development of the sanctioning regime applicable in Spain.
In practical terms, companies should first carry out an audit or "discovery" phase to understand the AI the organisation is using or plans to implement; then regulate its use through an internal policy and determine the company's role in relation to each AI system in order to implement the corresponding obligations. Finally, it may be advisable to appoint an AI Officer responsible for controlling and mitigating the impact of AI on the organisation.