May 17, 2023

The EU AI Act: What can you expect from the upcoming European regulation of AI?

In light of the widespread and rapid adoption of ChatGPT and other Generative AI models, which have brought new risks, the EU Parliament has accelerated its agenda on AI. The vote that took place on May 11, 2023 represents a significant milestone in the path toward the adoption of the first comprehensive AI regulation.

Javier Canales Luna

📖 A short history of the EU AI Act

Artificial Intelligence is rapidly transforming the world we live in. From smartphones, autonomous cars, and social media, to healthcare management, financial services, and law enforcement, nearly every sector and industry is experiencing deep changes as a result of the adoption of AI. 

In April 2021, the EU published its eagerly awaited proposal for regulating AI. The so-called EU AI Act (the Act) aims to regulate the uses of AI in order to leverage its many benefits and mitigate its risks. Portrayed as the world's first comprehensive AI legislation, the Act is likely to become a global gold standard for regulating this technology, as already occurred with the protection of privacy following the adoption of the General Data Protection Regulation (GDPR) in 2016. 

After several amendments, the European Parliament’s leading parliamentary committees have green-lighted the AI Act in a vote on May 11th, 2023. The latest version of the EU AI Act incorporates several changes, following previous criticism and the new concerns resulting from the fast-paced adoption of ChatGPT and other general-purpose AI tools.

After the agreement at the European Parliament, the Act will reach the last stage of the legislative process, where delegates of the European Commission, the Council of the European Union and the European Parliament will negotiate the final wording of the text. Given the complex legislative process in the EU and the sensitive nature of AI, it will likely take until late 2023 or 2024 for the Act to be approved and become legally binding.

🇪🇺 What’s the scope of the EU AI Act?

The Act aims to create a balanced and proportionate legal framework for AI, one that ensures the safe and responsible use of AI while also promoting innovation and competitiveness in the AI sector. 

To do so, the Act adopts a horizontal approach, meaning all kinds of AI applications across all sectors will have to comply with the same technical and legal requirements. However, in the latest version of the Act, the EU has adopted a tighter definition, aligned with that of the Organisation for Economic Cooperation and Development (OECD). As a result, AI is defined as:

“A machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”

The key element in determining the obligations and penalties for the different types of AI systems will be the level of risk. As for who bears these obligations, the Act has a wide scope, covering providers, importers, distributors, and users of AI systems.

Finally, the Act will be applicable to all uses of AI that affect EU citizens, meaning that non-EU organisations can also be subject to the Act if they provide AI systems into the EU. This extraterritorial reach follows the logic behind other EU regulations, such as the GDPR.

Risk-based approach to regulating AI systems

This is the most significant element of the regulation. By focusing on the need to protect the fundamental rights of EU citizens, the Act follows a risk-based approach, which means that certain obligations and restrictions will apply depending on the level of risk arising from the use of AI. 

The Act establishes four types of risks:

  • Unacceptable risk AI systems. This category includes uses of AI that contravene EU fundamental rights, such as social scoring by governments and mass surveillance systems. The new version of the Act has significantly extended this list, which now includes AI systems for biometric categorisation, predictive policing, and biometric data scraping. These uses of AI will be banned because of the unacceptable risk they create.
  • High-risk AI systems. These are AI systems that can have a significant impact on people’s health, safety, fundamental rights or the environment. AI systems designed to influence voters in political campaigns, as well as recommender systems used by large social media platforms, are also included in this list. In order to ensure trust and a consistently high level of protection, AI systems in this category have to comply with a series of requirements before they can be placed on the EU market.
  • Limited risk AI systems. This group includes certain types of AI systems that interact with humans, detect emotions or determine categories based on biometric data, or produce manipulated content, such as chatbots and systems that produce deep fakes. While these systems are permitted, they are subject to transparency obligations.
  • Minimal risk AI systems. All other AI systems that are deemed to have minimal or no risk are permitted in the EU market without restrictions, although the adoption of voluntary codes of conduct is recommended.

Source: European Commission

Requirements for High-Risk AI Systems

The Act explicitly singles out high-risk applications and prescribes extensive disclosure and rigorous controls to ensure these AI systems are robust and trustworthy.

High-risk AI systems are defined as those that are used in sectors where the failure or misuse of the AI system could have serious negative consequences for individuals, society, or the environment. They are defined both by general characteristics and specifically targeted applications. 

Two groups of AI systems fall into the high-risk category:

  • AI systems used as safety components of regulated products. AI systems in this group are embedded in products such as medical devices, vehicles, or machinery, and are subject to third-party assessment under the relevant sectoral legislation.
  • Specific AI systems (stand-alone) in certain areas. The list of areas is in Annex III of the Act. Some of the areas are education, employment, access to private and public essential services (e.g. tools to evaluate the eligibility for public assistance benefits and services, assess creditworthiness, or establish priority in the dispatching of emergency first response services), and law enforcement. Annex III has been significantly amended in the last version of the Act to make the wording more precise.

High-risk AI systems must undergo a conformity assessment (CA) before being deployed on the market. This entails establishing a quality management system suitable for meeting the following requirements:

  • Data and data governance. Appropriate data governance and data management techniques must be applied to ensure that data, whether training, validation or testing data, is relevant, representative, complete and error-free. 
  • Risk management. Providers have to set up a risk management system designed to test the high-risk AI system against potential risks, as well as evaluate the risks and adopt risk management measures (see more in the next section).
  • Record keeping. This means setting up automatic logging capabilities. Following increasing concern about the environmental footprint of AI, the latest version of the Act has incorporated the obligation to keep records of energy consumption, resource use, and the environmental impact of the high-risk AI system during all phases of the system’s life cycle (see the illustrative sketch after this list).
  • Robustness, accuracy and cybersecurity. Providers need to ensure that the AI system is designed to achieve an appropriate level of accuracy, robustness and cybersecurity throughout the lifecycle.
  • Transparency and Information. It’s mandatory to ensure an appropriate degree of transparency and provide users with information on the capabilities and limitations of the AI system.
  • Technical documentation. Providers have to draw up technical documentation and update it on a regular basis. 
  • Human oversight. Providers need to integrate human-machine interface tools that make it possible to identify anomalies, dysfunctions and unexpected performance, and to stop the AI system if necessary.
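
To illustrate what such record keeping could look like in practice, here is a minimal Python sketch of an append-only lifecycle log covering energy consumption and resource use. The record structure, field names and figures are purely illustrative assumptions on our part; the Act prescribes what must be recorded, not how.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical record structure for the Act's record-keeping requirement.
# Field names and granularity are illustrative assumptions, not prescribed by the Act.
@dataclass
class LifecycleRecord:
    system_id: str        # identifier of the high-risk AI system
    phase: str            # e.g. "training", "validation", "deployment"
    started_at: str       # ISO 8601 timestamp
    ended_at: str
    energy_kwh: float     # measured or estimated energy consumption
    compute_hours: float  # resource use, e.g. GPU hours
    notes: str = ""

def log_record(record: LifecycleRecord, path: str = "lifecycle_log.jsonl") -> None:
    """Append one record to an append-only JSON Lines log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry with made-up numbers
log_record(LifecycleRecord(
    system_id="credit-scoring-v2",
    phase="training",
    started_at=datetime(2023, 5, 1, tzinfo=timezone.utc).isoformat(),
    ended_at=datetime(2023, 5, 3, tzinfo=timezone.utc).isoformat(),
    energy_kwh=420.0,
    compute_hours=96.0,
))
```

Each entry of this kind can then become part of the evidence presented during a conformity assessment or an inspection.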

Once providers can demonstrate that they have complied with these requirements, they shall draw up an EU declaration of conformity and register stand-alone AI systems in an EU database.

Source: European Commission

But the CA is not a one-off exercise. Once the system is deployed on the market, providers have to establish a post-market monitoring system, which aims to evaluate the continuous compliance of AI systems with the Act’s requirements for high-risk AI systems. Equally, they need to set up an incident reporting system designed to communicate and record serious incidents and malfunctions leading to violations of fundamental rights. 

During the AI system’s life cycle, public authorities and authorised bodies can carry out inspections to check compliance with the requirements.

Finally, if the high-risk AI system is substantially modified during its lifecycle, or its CA has expired, it will have to undergo a new CA.

The following diagram shows the compliance and enforcement system for high-risk AI systems.

Source: European Commission

⚖ Expanding the EU AI Act by strengthening controls on Generative AI models

The scope of the first draft of the EU AI Act didn’t include AI systems without a specific purpose. However, the generative AI boom following the launch of ChatGPT and other systems based on large language models has put EU legislators under pressure to bring these AI systems into the Act. 

Instead of categorising these models according to the risk-based tiers (arguably, they could fall under the high-risk tier), the EU legislators have adopted a different approach. The latest version of the Act contains a new Article 28b with specific obligations for providers of foundation models. 

According to the wording, providers of foundation models will need to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. The technical requirements include risk management, technical documentation, data governance, and a high level of robustness, to be checked by independent experts.

Generative foundation models, like ChatGPT, will be subject to more stringent controls. In particular, they will need to comply with transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

Risk management and testing AI models: the role of Giskard 🐢

Providers of high-risk AI systems, as well as providers of foundation and general-purpose models, need to set up a risk management system capable of identifying known and foreseeable risks associated with the AI system, evaluating those risks and adopting risk management measures. The main goal of these measures is to eliminate or reduce risks as far as possible and, where the risks cannot be eliminated completely, to adopt mitigation and control measures. 

The risk management system shall consist of a continuous, iterative process run throughout the entire lifecycle of a high-risk AI system, starting during development and before the system is placed on the market or put into service. 

A core element of the risk management system is testing. In order to find the most appropriate risk management measures, providers need to test their AI systems with state-of-the-art methods, using preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the AI system. 

While the extent of the requirements will vary depending on the nature of the AI system and the size of the provider, testing will be one of the cornerstones of the proposed regulatory framework, and every AI provider will need to perform tests in order to comply with the requirements of the EU AI Act.
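
As a concrete, deliberately simplified illustration, the sketch below tests a toy binary classifier against pre-defined metric thresholds using scikit-learn. The model, data and the 0.85/0.80 thresholds are assumptions made for this example only; in practice, the metrics and thresholds must be chosen to match the intended purpose of the AI system.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Illustrative data and model; a real provider would use their own system and datasets.
X, y = make_classification(n_samples=2000, n_features=20, class_sep=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Pre-defined metrics and probabilistic thresholds (values assumed for this example only).
preds = model.predict(X_test)
checks = {
    "accuracy >= 0.85": accuracy_score(y_test, preds) >= 0.85,
    "recall   >= 0.80": recall_score(y_test, preds) >= 0.80,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

# In an automated pipeline, a failed threshold would block deployment.
assert all(checks.values()), "The model does not meet the pre-defined thresholds"
```

In a real conformity workflow, checks like these would be re-run on every model update so that the evidence of compliance stays current throughout the system’s lifecycle.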

Here is where Giskard enters the scene. Giskard is an open-source, collaborative software that helps AI developers and providers ensure the safety of their AI systems, eliminate risks of AI biases and ensure robust, reliable and ethical AI models. 

With ready-made, extensible test suites backed by state-of-the-art research, Giskard users can seamlessly automate and integrate testing protocols throughout the lifecycle of AI systems, thereby ensuring they meet safety requirements in AI frameworks like the EU AI Act. If you want to know more about how Giskard works, check our website.
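
For readers who want a feel for what this can look like in code, below is a rough sketch assuming a scan-style Python API in the spirit of Giskard’s open-source library: wrap a model and a dataset, run an automated scan, and keep the report as evidence. The class names, parameters and the toy model are assumptions made for illustration; the exact interface may differ between versions, so please refer to the official documentation.

```python
# Hedged sketch: assumes a scan-style API similar to Giskard's open-source library.
# Class and function signatures are assumptions and may differ between versions.
import giskard  # assumed: the open-source Giskard Python package
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy model and data standing in for a real high-risk AI system
df = load_breast_cancer(as_frame=True).frame
features = df.drop(columns="target")
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(features, df["target"])

# Wrap the dataset and the model so the scanner can probe them (assumed signatures)
dataset = giskard.Dataset(df, target="target")
model = giskard.Model(model=clf.predict_proba, model_type="classification",
                      feature_names=list(features.columns))

# Run the automated scan for performance, robustness and bias issues,
# then keep the report as evidence for the conformity file
report = giskard.scan(model, dataset)
report.to_html("scan_report.html")
```

A report produced this way can then feed into the technical documentation and the post-market monitoring evidence described in the previous section.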

Harmonised Standards and Best Practices

In addition to the requirements enumerated in the previous section, the Act also establishes a system of certification for high-risk AI systems. Under this scheme, high-risk AI systems and foundation models enjoy a presumption of compliance with the requirements if they are in conformity with the relevant harmonised standards.

A harmonised standard is a European standard developed by a recognised European Standards Organisation, following a request from the European Commission. Providers of high-risk AI systems using harmonised standards benefit from a presumption of conformity to the Act, resulting in simplified and less costly conformity assessment procedures.

While the Act mostly focuses on high-risk AI systems, it also prescribes transparency and voluntary conduct for lower-risk applications, with a view to raising overall excellence and trust in AI. Providers of non-high-risk AI systems may create and implement codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Those codes may also include voluntary commitments related, for example, to environmental sustainability or accessibility for persons with disabilities.

Governance of AI systems

The AI Act follows a clear chain of responsibility across national and supranational entities. 

At the EU level, the Act establishes a European Artificial Intelligence Board, composed of representatives from the Member States and the Commission, as well as the European Data Protection Supervisor. The Board will facilitate a smooth, effective and harmonised implementation of the regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and by providing advice and expertise to the Commission. It will also collect and share best practices among the Member States.

At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the Act. 

National competent authorities, also known as notifying authorities, provide and execute processes for the assessment, designation and notification of conformity assessment bodies and their monitoring. 

Notified bodies are designated conformity assessment bodies in charge of performing conformity assessment, testing, certification and inspection.

Finally, market surveillance authorities will be created to control the market and investigate compliance with the obligations and requirements for all high-risk AI systems already placed on the market. 

Source: Deloitte

💶 Sanctions under the EU AI Act 

To ensure compliance and the proper implementation of the regulatory framework, the Act establishes a system of heavy penalties for infringements. 

The penalties under the Act shall be effective, dissuasive, and proportionate. This means that factors such as the type and severity of the offence, and the profile and conduct of the offender, will be assessed to determine the amount of the fines. Equally, the Act emphasises the proportionality principle for SMEs and start-ups, which will face lower penalties than big companies. 

It’s worth mentioning the way fines are calculated. Closely resembling the system of fines under the GDPR, the authorities imposing penalties can choose between a fixed sum or a percentage of the total worldwide annual turnover of the offender. This design choice underpins the broad territorial reach and is intended to deter large multinational companies, with subsidiaries, offices and employees in the EU and beyond, from infringing the Act.

The latest version of the EU AI Act establishes a three-level sanction structure, which includes different fines depending on the severity of the infringement. The three proposed groups of infringements are the following:

  • Non-compliance with prohibitions on the use of certain AI systems. Infringements in this group will lead to administrative fines of up to 40,000,000 EUR or up to 7% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher. 
  • Infringements of obligations. Infringements in this group will result in administrative fines of up to 20,000,000 EUR or up to 4% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Incorrect, incomplete or misleading information. These infringements will result in fines of up to 10,000,000 EUR or up to 2% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher.
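
To put these numbers in perspective, consider a hypothetical company with a total worldwide annual turnover of 2 billion EUR that breaches one of the prohibitions: the applicable ceiling would be the higher of 40,000,000 EUR and 7% of 2 billion EUR (140,000,000 EUR), so the fine could reach 140,000,000 EUR.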

🧐 Conclusion: the future of responsible AI in Europe

Although some last-minute amendments are to be expected, the approval of the EU AI Act will be a turning point in the future development of AI in Europe and beyond, especially if the Act becomes a global gold standard for AI regulation, as already occurred in the field of data protection with the GDPR.

Under the upcoming regulatory framework, AI providers will have to establish a comprehensive and reliable risk management system to ensure the safe and responsible use of their AI systems. Equally, they will have to implement a post-market monitoring system to check the proper performance of AI systems during the entire lifecycle.  

Against this background, Quality Assurance platforms for AI systems like Giskard will be vital in ensuring compliance with the Act. Designed as a user-friendly interface for quality testing, Giskard allows you to evaluate machine learning models collaboratively, test AI systems with exhaustive, state-of-the-art test suites and protect AI systems against the risk of bias. Have a look at our product and get ready to be fully compliant with the EU AI Act.


