News
September 10, 2024

Global AI Treaty: EU, UK, US, and Israel sign landmark AI regulation

The Council of Europe has signed the world's first AI treaty, marking a significant step towards global AI governance. This Framework Convention on Artificial Intelligence aligns closely with the EU AI Act, adopting a risk-based approach to protect human rights and foster innovation. The treaty impacts businesses by establishing requirements for trustworthy AI, mandating transparency, and emphasizing risk management and compliance.

Council of Europe - AI Treaty [1]
Stanislas Renondin

September 5th, 2024 marks a significant milestone: the Council of Europe (CoE), which brings together 46 countries, opened its Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law for signature. This treaty is the first international agreement on AI. It sets the stage for global AI governance and is designed to complement the EU AI Act, further extending its influence around the world.

The treaty, signed by major economic powers including the EU, the US, the UK and Israel, signals a clear shift toward international AI standards. It sets harmonized requirements for AI risk management and requires AI developers to incorporate stronger safety and oversight measures into their development processes, which are essential in today’s “AI Cambrian Explosion”. For decision-makers, understanding the implications of this treaty is essential. This is not just another regulatory update – it’s a key development that will shape the future of AI innovation globally.

To give a comprehensive understanding of the implications of this treaty, the article begins with a short history of the Council of Europe and this initiative, followed by a comparison with the EU AI Act and an analysis of key business impacts; finally, it explains the treaty's applicability beyond the European Union and its alignment with global AI standards.

Council of Europe's AI initiative: towards global AI Governance

The CoE, comprising 46 member states, has been a key player in defending human rights, democracy, and the rule of law since its foundation in 1949. In 2019, the CoE established an ad hoc committee on AI to explore the feasibility of a convention addressing future AI threats. This led to the creation of the Committee on Artificial Intelligence (CAI), tasked with developing a legal framework that not only safeguards human rights but also fosters innovation. In 2024, after multiple iterations, the current version of the convention was voted on and finalized.

Parallels with the EU AI Act

The Council of Europe's Framework Convention on Artificial Intelligence shares many similarities with the EU AI Regulation. Both in their approach and in the definitions they use, the two instruments pursue common objectives and adopt similar frameworks for the scope of regulation, targeting the development, design, and application of information processing systems. Both texts aim to protect fundamental rights through cross-industry AI rules that coexist with other legal instruments such as the European product safety framework or software law.

These similarities reflect a convergence between the EU's and the Council's approaches to AI regulation, as the Council of Europe's framework convention also adopts a risk-based approach. Nevertheless, the treaty places fundamental rights at the heart of its framework: while it operates through a risk-based approach, it is built around seven fundamental principles, marking a shift in methodology compared to the AI Act, where the protection of rights emerged as a result of the risk-based approach.

The alignment between these two frameworks reflects a broader convergence of regulatory strategies in Europe and in the USA, reinforcing a human-centric yet business-friendly approach to AI governance. For businesses, this convergence offers a more predictable regulatory environment, especially for international businesses operating across different jurisdictions. Both frameworks emphasize the importance of risk management, compliance, and accountability, making it essential for companies to integrate these principles into their AI development and deployment processes.

Business impacts of the Treaty: Key considerations for AI Compliance

  • Risk-Based AI Framework: A focus on risk assessment provides businesses with a clear pathway for AI compliance, especially in high-risk sectors like healthcare, finance, and autonomous technologies.
  • Requirements for Trustworthy AI: Core requirements such as transparency, robustness, and safety will guide AI developers in creating systems that are both innovative and legally compliant.
  • Transparency in AI-Generated Content: Companies will need to disclose when content or interactions are AI-driven, fostering trust and maintaining ethical standards in business operations.
  • Evidence-based compliance: Documentation and Accountability: The treaty mandates rigorous documentation and oversight, ensuring businesses are fully accountable for their AI technologies.
  • Regulatory Sandboxes: These controlled environments allow businesses to safely innovate and test new AI applications without violating regulatory standards.
  • Risk Management and Oversight: Firms must establish robust risk management frameworks to identify and mitigate AI-related risks.
  • No Fines, but Potential Bans: Unlike the mechanisms of the EU AI Act, the treaty does not provide for direct sanctions against companies. Nevertheless, AI systems deemed incompatible with human rights or democratic values could face restrictions, requiring businesses to stay updated on evolving regulations.
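To make the transparency and documentation points above concrete, here is a minimal, purely illustrative Python sketch of how an application could label AI-generated output and keep an audit trail of interactions. The `generate` function, the log structure, and the disclosure wording are our own assumptions for illustration, not requirements taken from the treaty's text:

```python
# Hypothetical sketch: labelling AI-generated content and recording
# a minimal audit-log entry for documentation/accountability purposes.
from datetime import datetime, timezone

AUDIT_LOG = []

def generate(prompt: str) -> str:
    # Placeholder standing in for a real model call.
    return f"Answer to: {prompt}"

def respond_with_disclosure(prompt: str) -> str:
    """Return the model's answer, labelled as AI-generated,
    and record the interaction in an audit log."""
    answer = generate(prompt)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": answer,
        "ai_generated": True,
    })
    return f"[AI-generated content] {answer}"

print(respond_with_disclosure("What does the AI treaty require?"))
```

In practice, the exact disclosure format and the evidence to retain will depend on how each signatory transposes the treaty and, in the EU, on the AI Act's own transparency provisions.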

Beyond the EU AI Act: The global reach of the new AI Treaty

In the same way as Convention 108+ on personal data, the text will not be enforceable before the European Court of Human Rights (ECHR), but it can be recognized in each signatory country through transposition processes, whereby each country adopts domestic laws or regulations to make the treaty's provisions applicable within its jurisdiction. Nevertheless, the treaty is open to a broader set of countries than the EU AI Act, representing a genuine opportunity to export European principles and the risk-based approach beyond the borders of the European Union.

A tailored application for the public & private sector

Contrary to some initial comments, law enforcement is covered by this treaty. However, defense and national security, which are distinct from law enforcement, are not included. 

As for the private sector, each country will decide whether its private sector is directly subject to the convention’s rules. Each country must still ratify the treaty before it takes effect. In the EU, the treaty will be implemented through the AI Act, which further increases its reach and significance, making this a truly international framework for AI governance.

Compliance through the AI Act in the EU

The treaty was designed not to conflict with the EU AI Act, meaning that its implementation within the European Union will be assessed based on compliance with the AI Act’s regulations. This underscores the growing importance for businesses to align with existing compliance requirements. Ensuring conformity with current rules is essential, as it strengthens a company’s ability to navigate both the treaty and broader EU regulations, reducing legal risks and fostering trust in AI applications across markets.

For international businesses, this means that AI governance principles are likely to extend far beyond Europe, creating a more harmonized global regulatory landscape. That said, each signatory country retains the discretion to decide whether the treaty will apply to the private sector. This uncertainty introduces potential risks for businesses, especially for providers whose offerings may end up limited to public-sector clients. This lack of clarity can complicate strategic planning and market positioning in the AI space, as companies await clearer regulations.

Conclusion

The Council of Europe has strategically aligned its convention with the EU AI Act, focusing on a risk-based approach that prioritizes compliance. This sends a clear message to global AI industry players: respect for fundamental principles will be enforced through risk management and evidence-based compliance.

For businesses, while all states will still have to make choices regarding the application of the treaty, the signing by both the EU and the US represents a significant step toward global AI compliance standards. Companies that integrate compliance into their AI operations early on will gain a competitive advantage, positioning themselves as leaders in ethical AI development while mitigating the risks of regulatory backlash.

In essence, the framework places compliance at the heart of AI governance, making it an indispensable part of any AI-driven business strategy moving forward. At Giskard, we empower businesses to stay ahead of these regulations through our automated compliance platform. By streamlining and simplifying the compliance process, we help companies meet regulatory requirements more efficiently, saving both time and money. 

You can reach out to us today to discover how we can help your business get ready for AI compliance and optimize your AI investments.

[1] Source image


