The positive outcome of the latest negotiations at the Council paves the way for the EU AI Act to become law in the coming months
On 2 February 2024, the Council of the EU (the Council) unanimously approved the compromise text of the European regulation on AI, the so-called EU AI Act. With this decision, the EU has reached another important milestone in its efforts to pass the first comprehensive, legally binding AI law in the world.
The unanimous vote of the Council has allayed fears that some member states could block the approval of the AI Act. It also sends a strong signal to the European Parliament, which will hold a final vote on the compromise text, most likely in April. Following the approval by the Council, chances are high that the AI Act will make it to the end of the legislative process before the next European elections in June 2024.
In this article, we analyse the main changes incorporated in the compromise text recently approved by the Council. The final version, leaked to the public some days before the vote, is available here.
Let’s get started!
Background of EU AI Regulation: Caution, Drama and a Lot at Stake
The European legislative process is often regarded as boring, lengthy, and extremely bureaucratic. Despite the enormous impact of EU law on the lives of European citizens, few people on the old continent know or care about what happens in the European institutions in Brussels.
Against this background, the legislative process of the AI Act can only be classified as a rara avis. Or an odyssey, some may say. Few legislative processes in the history of the EU have sparked more debate, speculation and drama than the AI Act.
Labelled as the world’s first AI regulation, the AI Act has overcome all kinds of obstacles and challenges. That includes the rise of ChatGPT, Google Bard and the many other tools fuelling the ongoing generative AI revolution, none of which even existed when the European Commission published the first draft of the AI Act back in April 2021.
Following subsequent revisions of the original text and heated negotiations between the EU institutions in recent months in the so-called trilogues (i.e. informal consultations between the Commission, the Council, and the Parliament), the EU finally managed to reach a political agreement on the text in early December 2023. Check out our dedicated post to learn the details of the agreement.
But that was not the end of the soap opera. The AI Act still needed to undergo revision at the technical level, and some countries, including France, Germany, and Italy, had recently threatened to vote against the text on the grounds that the regulation could hamper AI innovation. A potential rejection would have delayed the approval of the law, making it very difficult to adopt before the upcoming European elections.
All eyes were on the Council, more particularly on its preparatory body, COREPER (Committee of the Permanent Representatives of the Governments of the Member States to the EU), which had scheduled a meeting on 2 February to vote on a new compromise text following the latest rounds of technical negotiations. However, some days before, on 22 January, an unofficial version of the (presumed) consolidated text of the AI Act was leaked to the public. Shortly afterwards, it was confirmed that the leaked text was the one agreed upon by the EU institutions for final adoption.
Finally, on 2 February, the compromise text was approved unanimously by the COREPER. This was possible after the Commission made several concessions to satisfy the blocking minority, including the announcement of an innovation package to support AI startups and SMEs, as well as the creation of the European AI Office, a new body tasked with implementing and enforcing the upcoming AI Act.
Following the approval by the COREPER, the Council itself still has to formally sign off on the Act, but this is merely an administrative formality. Then, the ball will pass back to the Parliament for a final vote.
Let’s analyse now the key takeaways from the compromise text.
1. New Criteria and Exemptions for High-Risk AI Systems under the EU AI Act
The horizontal, risk-based approach to classifying AI systems remains in the latest compromise text. According to this approach, there are four risk-based categories (banned practices, high-risk, limited risk, and low risk) to classify AI systems, together with a parallel two-tier approach for so-called general-purpose AI models (GPAI), which was agreed upon during the negotiations in early December 2023.
In the compromise text, the list of high-risk AI systems included in Annex III has been updated. In particular, some clarifications have been made, including the extension to biometric and post-remote biometric identification systems (originally considered banned practices), subject to certain limitations and safeguards. Equally, some use cases in the areas of healthcare and life insurance are now included in Annex III.
Finally, the new version of the Act also includes some exemptions for AI systems listed in Annex III that do not entail high risk. For example, AI systems intended to perform narrow procedural operations, or tasks intended to improve the result of a previously completed human activity, won’t need to comply with the requirements for high-risk systems. In such cases, providers of AI systems must document their assessment before the system is placed on the market or put into service, and provide that documentation to national competent authorities upon request.
2. Obligations for High-Risk AI Systems
The compromise text enlarges and clarifies the regulatory obligations for high-risk AI systems. Among the new requirements:
- Advanced risk management systems. Providers of high-risk AI systems will need to establish an effective and documented risk management system, capable of identifying known and reasonably foreseeable risks associated with their products. Check out our page to discover how Giskard is working to create best-in-class risk management software for AI systems.
- Address bias. Providers of high-risk AI systems will be obliged to identify, detect, prevent and mitigate harmful biases in their models that may result in discrimination or a negative impact on citizens’ fundamental rights (a minimal detection sketch follows this list).
- AI literacy. According to the new provisions, providers and deployers of high-risk AI systems will need to ensure that their employees have a sufficient level of AI literacy.
- Human oversight responsibility. AI providers will need to appoint a qualified person responsible for the operational oversight of AI systems. The new provision resembles the obligation to appoint Data Protection Officers under the EU General Data Protection Regulation (GDPR).
- Obligations to address AI incidents. In addition to notifying the competent authorities, AI providers will need to conduct internal investigations and identify corrective actions where a serious incident has occurred.
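To make the bias-detection obligation more concrete, here is a minimal sketch of one common fairness check: the demographic parity difference of a binary classifier across a protected attribute. The metric choice, variable names and data are purely illustrative assumptions on our part; the Act does not prescribe any particular metric, and a real assessment would cover many more.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    rate_group_0 = y_pred[group == 0].mean()
    rate_group_1 = y_pred[group == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Illustrative example: binary predictions for 8 applicants, 4 per group.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # 1 = favourable outcome
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity
```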
3. Classifying GPAI: Implications Under the European AI Act
Integrating GPAI (also known as foundation models) into the AI Act was one of the central questions in the previous negotiations. In December, the EU legislators agreed to create a separate framework to classify GPAI.
According to the new regime, which has been substantially clarified in the compromise text, all kinds of GPAI will be subject to a set of requirements, including drawing up technical documentation, disclosing relevant information about how the system works to downstream providers, and implementing a policy outlining how the provider will comply with EU copyright law.
Additionally, the compromise text creates new obligations for GPAI deemed to pose “systemic risks”. Providers of such systems will be required to perform model evaluations and risk assessments, implement appropriate cybersecurity measures, and report serious incidents to the relevant competent authorities.
The Commission will be responsible for maintaining an updated list of such systems, which will be classified as posing systemic risks when the cumulative amount of computing power used during training, measured in floating-point operations (FLOPs), is greater than 10^25. Interestingly, this compute-based approach mirrors the one adopted by the White House in its Executive Order on AI, issued in late October 2023, which relies on a similar (though higher) FLOP threshold to classify large AI models.
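To illustrate the threshold, here is a minimal sketch of how one might estimate whether a model falls under the systemic-risk presumption. It assumes the widely used 6 × parameters × training tokens approximation for the training FLOPs of dense transformer models; that heuristic is our assumption, not something the Act mandates.

```python
# The Act presumes systemic risk above 10^25 FLOPs of cumulative training compute.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated compute exceeds the Act's 10^25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens uses
# roughly 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```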
Check out our separate article to learn more about the US plans to regulate AI.
4. Watermarks For AI-Generated Content
As already anticipated in the AI Act provisionally agreed in December 2023, the latest version of the Act places several obligations on providers and users of AI systems, including GPAI, to enable the detection and tracing of AI-generated content.
How exactly this will be ensured is still unknown, for the Act doesn't provide concrete measures, but the implementation of these obligations will likely require the use of watermarking techniques.
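As a purely illustrative toy example of what a watermark is, the sketch below hides and recovers a short bit string in the least significant bits of an image's pixels. This is our own simplification; the Act prescribes no specific technique, and production-grade watermarks for generated text and media are far more robust than LSB embedding.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the least significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)             # view into the copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to `bit`
    return marked

def extract_watermark(image: np.ndarray, n_bits: int) -> list[int]:
    """Read the bit string back from the least significant bits."""
    return [int(v & 1) for v in image.reshape(-1)[:n_bits]]

# Tag a random stand-in for an "AI-generated" image with the marker 1,0,1,1.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
tagged = embed_watermark(img, [1, 0, 1, 1])
print(extract_watermark(tagged, 4))  # [1, 0, 1, 1]
```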
5. Favouring Open-Source AI: Incentives Within the EU AI Law
The compromise text grants some privileges to open-source GPAI models, which are believed to provide significant growth opportunities for the Union economy by contributing to research and innovation. According to the text, these models will be exempt from certain transparency-related requirements imposed on general-purpose AI models, such as the obligation to keep and provide documentation on the functioning of the model. This exemption does not apply to open-source GPAI models that are considered to pose a systemic risk.
6. European AI Office: A New Supervisory Authority
One of the conditions for the approval of the compromise text was the creation of a new AI Office that will be integrated into the structure of the Commission. The Office will be tasked with the implementation and enforcement of the AI Act, as well as providing guidance and overseeing the advancements in AI models, particularly GPAI.
7. Updated Fines
The penalty system in the Act has been slightly modified. In the latest version, the penalties are as follows (a worked example follows the list):
- Non-compliance with the provisions concerning prohibited AI practices: €35 million or 7% of annual global turnover, whichever is higher.
- Other violations of the AI Act’s obligations: €15 million or 3% of annual global turnover, whichever is higher.
- Providing incorrect information to regulators: €7.5 million or 1% of annual global turnover, whichever is higher.
- Infringements of providers of GPAI: €15 million or 3% of annual global turnover, whichever is higher.
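To make the “whichever is higher” mechanics concrete, here is a minimal sketch based on the amounts listed above. The tier names are our own shorthand, not terminology from the Act.

```python
# (fixed amount in EUR, share of annual global turnover) per violation tier
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
    "gpai_infringements": (15_000_000, 0.03),
}

def maximum_fine(tier: str, annual_global_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * annual_global_turnover_eur)

# Hypothetical example: a provider with EUR 2 billion in global turnover that
# breaches a prohibition faces 7% of 2e9 = EUR 140M, above the EUR 35M floor.
print(maximum_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```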
8. Timeframes for Adoption
Once the law enters into force, the compromise agreement provides for a 24-month transition period for most parts of the Act, with shorter deadlines for some elements, namely 6 months for the prohibition of certain AI systems and 12 months for the provisions concerning GPAI models, confidentiality and penalties, and a longer deadline of 36 months for the requirements applicable to high-risk AI systems.
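As a quick illustration, the sketch below maps these transition periods onto concrete dates, assuming a hypothetical entry-into-force date; the real date will depend on publication in the EU's Official Journal and is not yet known.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

TRANSITION_PERIODS_MONTHS = {
    "prohibitions on certain AI systems": 6,
    "GPAI models, confidentiality and penalties": 12,
    "most other provisions": 24,
    "high-risk AI system requirements": 36,
}

entry_into_force = date(2024, 6, 1)  # hypothetical placeholder, not the real date

for provision, months in TRANSITION_PERIODS_MONTHS.items():
    deadline = entry_into_force + relativedelta(months=months)
    print(f"{provision}: applicable from {deadline}")
```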
Conclusion. Time to Get Ready for the EU AI Act with Giskard!
Following the unanimous vote of the Council, there is widespread confidence in Brussels that the EU AI Act will make it to the end of the legislative process before the European elections in June. Once the Act enters into force, the countdown will start for AI providers to become compliant with the law.
While the Act will provide a grace period for AI providers, there is a lot to digest, for it will create a comprehensive and demanding regulatory framework with considerable penalties in case of infringement.
Luckily, we at Giskard are working hard to help AI providers become fully compliant with the upcoming European AI rulebook. Giskard is an open-source, collaborative software platform that helps AI developers and providers ensure the safety of their AI systems, eliminate the risks of AI bias, and build robust, reliable and ethical AI models. Have a look at our product and get ready to be fully compliant with the upcoming EU AI Act.