G
News
December 7, 2023
5 min read

Biden's Executive Order: The Push to Regulate AI in the US

One year after the launch of ChatGPT, regulators worldwide are still figuring out how to regulate Generative AI. The EU is going through intense debates on how to close the so-called 'EU AI Act' after two years of legislative process. At the same time, only one month ago, the White House surprised everyone with a landmark Executive Order to regulate AI in the US. In this article, I delve into the Executive Order and advance some ideas on how it can impact the whole AI regulatory landscape.

Biden’s Executive Order to Regulate AI
Javier Canales Luna
Biden’s Executive Order to Regulate AI
Biden’s Executive Order to Regulate AI

Biden’s AI Meeting: The US adds another piece to the evolving landscape of AI Regulation

Back in July 2023, AI regulation in the US was considered in its ”early days”. While the European Union prepares for the implementation of its long-awaited EU AI Act –which is currently in the last stage of the legislative process– and other key countries, including China, Canada, and the UK, follow suit, the US is still immersed in debates about the best approach to regulate AI. 

Amid rising concerns over the existential threats of next-generation AI systems, US President Joe Biden made an unexpected move that is likely to revolutionise the AI regulation landscape: the announcement of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. With this ambitious Executive Order (EO), the US is finally showing its cards, sending a clear message on how it plans to lead the global AI race.

As historical allies with similar values and visions, it’s not surprising that the EO and the EU AI Act have many things in common. However, there are also significant differences between the two texts, that cast light on the different strengths and weaknesses of these two players to regulate AI.

This article will take you through the most important takeaways from the recently announced EO. By adopting a comparative law approach, we will also analyze the main differences between the EO and the EU AI Act. Let’s get started.

Background: The Long Way to Regulate AI in the US

Compared to other countries, the US can be considered a laggard in AI regulation. There are several reasons for this seeming paralysis. As we explained in our previous article analysing the global landscape on AI regulation, advancing norms to regulate emerging technologies is a challenging task. Finding the right balance between responsible use and innovation is hard, and there is always the risk that too much regulatory pressure may limit the benefits these technologies can provide. 

This dilemma is particularly acute in the US, where most of the Big Tech is located and self-governance and voluntary regulations have historically played a more important role than in Europe. 

In addition to that, partisan polarisation in the US Congress is adding complexity to the law-making process. Advancing legislation on sensitive topics, including emerging technologies with potential impacts on civil rights, is a difficult undertaking, as illustrated by the continuous failures of the US government to pass a federal privacy law.

Against this backdrop, the timid and partial US actions to regulate AI come with little surprise –despite being a global leader in AI innovation. Before the EO, the country had only managed to advance a Blueprint for a non-binding AI Bill of Rights, a bunch of initiatives at a state level, and a commitment of voluntary safeguards from the major AI developers, including OpenAI, Google, Microsoft, Meta, and Amazon.

Understanding Biden's AI Executive Order: Why and Why Now

Conscious of the disruptive power of AI, especially the many next-generation tools within the rubric of generative AI, like ChatGPT, President Biden surprised the world with the announcement of his sweeping plan to regulate AI.

Biden has advanced his ambitious agenda using his prerogatives as the head of the White House to issue executive orders, that is directives by the president of the United States that manage operations of the federal government. 

There are two compelling reasons for the use of an executive order. The first has to do with feasibility. Aware of the current difficulties in getting Congress involved in the development of a comprehensive, federal AI law, Biden has opted to act solely within the limits of the executive branch’s power. As we will see in the next section, this choice has important implications for the force and scope of the EO, whose actions are mostly limited to the authorities of the federal government, as we will see in more detail later.

The second reason has to do with timing. Executive orders are not subject to procedures for approval, the president can issue them at any time. The EO was announced two days before the UK AI Safety Summit 2023, where key governments and tech companies have gathered to discuss the potential risks of AI. The summit provided a great opportunity for the US to advance its AI agenda –the content of the EO was indeed presented by Vice President Kamala Harris during the event– and catch up with other players in the race to lead AI, including the EU and China. 

The White House Pledge on AI: Leading by Example in AI Regulation

Despite the unusual length of the EO –it comprises over 100 pages–, the measure is far from establishing a regulatory framework for AI in the same fashion as the EU AI Act. That would require enacting new laws and regulations, something that only the Congress –and federal agencies through rulemaking authority delegated by the Congress– can do.

Instead, the EO mainly addresses the use of AI by the federal administration. This is not trivial, though. The federal go vernment is one of the biggest public entities in the US, encompassing the executive departments, the organisations belonging to the Executive Office of the President, and several independent offices, such as NASA and the Environmental Protection Agency. The federal administration thus has powers to act in a wide range of strategic areas, such as defence, commerce, education, energy, justice, and international relations.

The high degree of political leverage that the federal administration enjoys is also present in the economy of the country. As one of the largest customers in the country, the financial muscle of the federal administration alone can mobilise huge resources, with effect on companies across industries and sectors.

Leading by example, the White House wants to consolidate its vision for responsible development and use of AI. Yet most of the mandates are addressed to the bodies of federal administration, the EO de facto points to an entire ecosystem of public, private, academic, and international stakeholders. 

The EO will create a sandbox to test the behaviour of AI providers under certain regulatory constraints. If the experiment works, the practices, principles, and requirements envisioned in the EO could eventually turn into industry standards. At the same time, if AI providers and users become familiar with a certain degree of regulation, the fears of regulatory pressure by the AI industry may decrease. Ultimately, this favourable context could pave the way for Congress to advance a stronger, legally binding framework to address issues like bias, risk management, privacy, and consumer protection.

Seizing Opportunities: A Policy Agenda on Generative AI

One of the main challenges when it comes to regulating emerging technologies like AI is that innovation often runs faster than law-making. An illustrative case is the EU AI Act. When the European Commission published its first draft of the Act in 2021, AI tools like ChatGPT didn’t exist and generative AI was only starting to take pace. 

Following the popularisation of these tools, the first proposal of the Act is hardly prepared to address the risks and opportunities that generative AI provides. The European Parliament later released another version with several amendments to address the corruption of tools like ChatGPT, but until the Act makes it to the end of the legislative process, we will not know how exactly the EU is going to regulate generative AI.

This is not the case with the US, which has had more time to follow the evolution of generative AI and develop an approach to it. In particular, the EO lays down particular requirements for the so-called “dual-use foundation models”, defined as:

“An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.” 

A Vertical, Holistic Approach to AI Regulatory Compliance

The White House's approach to AI is not only timely, but it’s also ambitious in its attempt to regulate AI holistically. 

The EO envisions the creation of a regulatory framework to address the technical aspects of machine learning and deep learning algorithms. Furthermore, the document advances a comprehensive policy agenda that affects not only the AI industry itself, but also the many areas and sectors that will likely be affected by the AI revolution. Finally, the EO also takes into consideration the international and cross-border nature of AI, advancing mandates in areas perceived as levers to boost the US leadership in AI, including diplomacy efforts and measures in immigration to attract AI talent in the country.

In this vein, the EO advances a vertical, sectoral approach to AI, where every branch of the federal government will be responsible for developing guidelines and regulations according to the specific sector. To coordinate the progress of the different federal departments and agencies, the EO creates the White House Artificial Intelligence Council. 

This vertical, sector-based approach contrasts with the horizontal, risk-based approach adopted in the EU AI Act, where AI systems are classified according to their level of risk, and not their sectors. While the level of risk is not a clear defining criterion in the EO, it still plays an important role, as illustrated in the definition of the dual-use foundation models. Hence, it’s very likely that AI systems perceived as less dangerous will have to comply with fewer requirements in future federal regulations

Analyzing the Impact: Sectors Affected by the US AI Regulations

Below you can find an overview of the areas affected by the EO:

  • Cybersecurity. The EO promotes the use of AI technologies to increase cybersecurity. Equally, advances the development of guides and measures to evaluate the risks of AI, particularly generative AI tools, on cybersecurity and critical infrastructure,
  • Privacy. The US still lacks a federal data protection law. Yet this absence considerably limits the capacity of the White House, the EO still casts some light on this domain. In particular, the EO emphasises the adoption by federal agencies of so-called privacy-enhancing technologies.
    Immigration. The White House is going to use its powers and resources to attract and retain AI talent in the country, including easing visa and Green Card application processes
    Competition. The EO aims to create a level playing field for AI that works for companies of all sizes. This translates into support for small companies and potential actions to address uncompetitive or colluding behaviour in the AI industry. 
  • Copyright. One of the main concerns following the popularisation of generative AI tools is the potential massive violation of copyright. The EO aims to provide legal certainty for AI-supported inventors and creators affected by AI tools. Interestingly, the EO aims to develop watermark techniques for AI-generated content.
  • Labour. The EO takes into consideration the concerns of workers who may be affected by the disruptive force of AI. In particular, the EO advances the creation of training programmes and other measures for workers facing labour disputes and increases worker protections.
  • Equity and Civil Rights. To protect minority groups from potential biases and discrimination by AI systems, the EO outlines measures to ensure equity and protect civil rights.
  • Housing. Closely related to the previous area, the EO aims to establish safeguards to ensure fair access to housing.
  • Health. The EO also sets an agenda to mitigate the risks of AI in healthcare while leveraging its benefits 
  • Transportation. The same goes for passengers. The aim is to ensure the correct and safe integration of AI in transport.
  • Education. In addition to investing more resources to develop AI skills in educational centres (with an important role of universities), the EO also aims at ensuring responsible and nondiscriminatory uses of AI in education. programmes
  • International Relations. The EO aims for the progressive development of common guidelines, practices, and frameworks among key allies abroad, with a particular focus on Western countries. 

Towards an AI Regulatory Framework in the US

The core of the EO focuses on AI safety and security. AI providers regulated by or willing to do business with the federal government will have to comply with some requirements to ensure safe deployment and responsible use of AI. 

The EO pays special attention to powerful dual-use foundation models like ChatGPT. Providers of these tools will have to share their safety test results and other critical information with the US government before reaching the market. With such requirements, the EO seems to mimic the ex-ante evaluation that high-risk AI systems will have to follow.

The details on the technical requirements will be developed by the National Institute of Standards and Technology (NIST), which will play a critical role in developing standard tools and red-teaming tests. With these actions, the EO envisions the creation of a regulatory framework very much in the fashion of the EU AI Act. While nothing is already formal –the NIST has 270 days to fulfil the mandate–, collaboration with Europe is very likely to advance common standards. 

The Critical Question: Is Biden's New Executive Order on AI Enough?

The EO represents a step forward in the US efforts to regulate AI and lead the AI race. Yet there are still many uncertainties surrounding the EO. 

First of all, it’s still uncertain whether the federal administration will be able to meet the expectations of the White House. To lead by example, it’s important to have the necessary skills and experience. But AI, especially generative AI, is a very recent topic, It’s also a rapidly evolving technology, which makes it difficult to catch up with the latest developments. Despite the efforts envisioned in the EO to prepare the federal administration for the AI revolution, this process will take time, money, and political willingness. 

On the other hand, although the EO has more teeth than the previous voluntary commitments by major AI companies, many of the provisions in the text aren’t legally binding, meaning that the government doesn’t have the power to enforce them. This will require future regulation. While the White House wants federal agencies to play a critical role in advancing sectoral AI regulation, no regulation will be possible without Congress. Here is where President Biden will have to work hard to get the support of the Republicans.

Conclusion: The future of AI Regulation

Congratulations on making it to the end of the article. The EO represents a turning point in the US strategy to lead AI. It advances an ambitious policy agenda to address AI holistically. One year after the release of ChatGPT, the White House has had the time to track the evolution of generative AI. This translates into a better perspective on the benefits and risks of AI, which has allowed the White House to advance a multidimensional programme with implications in nearly every aspect of the US economy. 

It’s still early to know whether the White House will be successful. The EO is just the first step in the development of a regulatory, legally binding framework for AI. Ahead, the federal agencies will have to propose sectoral regulation, following a law-making process that will require the support of the Republicans, given the current equilibrium of forces in Congress.

If the vision of the White House is to succeed, the US will deploy a regulatory framework where the technical aspects of AI will be central. Here is where AI quality tools like Giskard join the scene. Designed as a developer-oriented solution for quality testing, Giskard aims to help AI providers become fully compliant with upcoming AI regulations. 

Giskard allows you to evaluate AI models collaboratively, test your systems with exhaustive, state-of-the-art test suites and protect AI systems against the risk of bias. Try our open-source product and get ready for the age of AI regulation.

Integrate | Scan | Test | Automate

Giskard: Testing & evaluation framework for LLMs and AI models

Automatic LLM testing
Protect agaisnt AI risks
Evaluate RAG applications
Ensure compliance

Biden's Executive Order: The Push to Regulate AI in the US

One year after the launch of ChatGPT, regulators worldwide are still figuring out how to regulate Generative AI. The EU is going through intense debates on how to close the so-called 'EU AI Act' after two years of legislative process. At the same time, only one month ago, the White House surprised everyone with a landmark Executive Order to regulate AI in the US. In this article, I delve into the Executive Order and advance some ideas on how it can impact the whole AI regulatory landscape.

Biden’s AI Meeting: The US adds another piece to the evolving landscape of AI Regulation

Back in July 2023, AI regulation in the US was considered in its ”early days”. While the European Union prepares for the implementation of its long-awaited EU AI Act –which is currently in the last stage of the legislative process– and other key countries, including China, Canada, and the UK, follow suit, the US is still immersed in debates about the best approach to regulate AI. 

Amid rising concerns over the existential threats of next-generation AI systems, US President Joe Biden made an unexpected move that is likely to revolutionise the AI regulation landscape: the announcement of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. With this ambitious Executive Order (EO), the US is finally showing its cards, sending a clear message on how it plans to lead the global AI race.

As historical allies with similar values and visions, it’s not surprising that the EO and the EU AI Act have many things in common. However, there are also significant differences between the two texts, that cast light on the different strengths and weaknesses of these two players to regulate AI.

This article will take you through the most important takeaways from the recently announced EO. By adopting a comparative law approach, we will also analyze the main differences between the EO and the EU AI Act. Let’s get started.

Background: The Long Way to Regulate AI in the US

Compared to other countries, the US can be considered a laggard in AI regulation. There are several reasons for this seeming paralysis. As we explained in our previous article analysing the global landscape on AI regulation, advancing norms to regulate emerging technologies is a challenging task. Finding the right balance between responsible use and innovation is hard, and there is always the risk that too much regulatory pressure may limit the benefits these technologies can provide. 

This dilemma is particularly acute in the US, where most of the Big Tech is located and self-governance and voluntary regulations have historically played a more important role than in Europe. 

In addition to that, partisan polarisation in the US Congress is adding complexity to the law-making process. Advancing legislation on sensitive topics, including emerging technologies with potential impacts on civil rights, is a difficult undertaking, as illustrated by the continuous failures of the US government to pass a federal privacy law.

Against this backdrop, the timid and partial US actions to regulate AI come with little surprise –despite being a global leader in AI innovation. Before the EO, the country had only managed to advance a Blueprint for a non-binding AI Bill of Rights, a bunch of initiatives at a state level, and a commitment of voluntary safeguards from the major AI developers, including OpenAI, Google, Microsoft, Meta, and Amazon.

Understanding Biden's AI Executive Order: Why and Why Now

Conscious of the disruptive power of AI, especially the many next-generation tools within the rubric of generative AI, like ChatGPT, President Biden surprised the world with the announcement of his sweeping plan to regulate AI.

Biden has advanced his ambitious agenda using his prerogatives as the head of the White House to issue executive orders, that is directives by the president of the United States that manage operations of the federal government. 

There are two compelling reasons for the use of an executive order. The first has to do with feasibility. Aware of the current difficulties in getting Congress involved in the development of a comprehensive, federal AI law, Biden has opted to act solely within the limits of the executive branch’s power. As we will see in the next section, this choice has important implications for the force and scope of the EO, whose actions are mostly limited to the authorities of the federal government, as we will see in more detail later.

The second reason has to do with timing. Executive orders are not subject to procedures for approval, the president can issue them at any time. The EO was announced two days before the UK AI Safety Summit 2023, where key governments and tech companies have gathered to discuss the potential risks of AI. The summit provided a great opportunity for the US to advance its AI agenda –the content of the EO was indeed presented by Vice President Kamala Harris during the event– and catch up with other players in the race to lead AI, including the EU and China. 

The White House Pledge on AI: Leading by Example in AI Regulation

Despite the unusual length of the EO –it comprises over 100 pages–, the measure is far from establishing a regulatory framework for AI in the same fashion as the EU AI Act. That would require enacting new laws and regulations, something that only the Congress –and federal agencies through rulemaking authority delegated by the Congress– can do.

Instead, the EO mainly addresses the use of AI by the federal administration. This is not trivial, though. The federal go vernment is one of the biggest public entities in the US, encompassing the executive departments, the organisations belonging to the Executive Office of the President, and several independent offices, such as NASA and the Environmental Protection Agency. The federal administration thus has powers to act in a wide range of strategic areas, such as defence, commerce, education, energy, justice, and international relations.

The high degree of political leverage that the federal administration enjoys is also present in the economy of the country. As one of the largest customers in the country, the financial muscle of the federal administration alone can mobilise huge resources, with effect on companies across industries and sectors.

Leading by example, the White House wants to consolidate its vision for responsible development and use of AI. Yet most of the mandates are addressed to the bodies of federal administration, the EO de facto points to an entire ecosystem of public, private, academic, and international stakeholders. 

The EO will create a sandbox to test the behaviour of AI providers under certain regulatory constraints. If the experiment works, the practices, principles, and requirements envisioned in the EO could eventually turn into industry standards. At the same time, if AI providers and users become familiar with a certain degree of regulation, the fears of regulatory pressure by the AI industry may decrease. Ultimately, this favourable context could pave the way for Congress to advance a stronger, legally binding framework to address issues like bias, risk management, privacy, and consumer protection.

Seizing Opportunities: A Policy Agenda on Generative AI

One of the main challenges when it comes to regulating emerging technologies like AI is that innovation often runs faster than law-making. An illustrative case is the EU AI Act. When the European Commission published its first draft of the Act in 2021, AI tools like ChatGPT didn’t exist and generative AI was only starting to take pace. 

Following the popularisation of these tools, the first proposal of the Act is hardly prepared to address the risks and opportunities that generative AI provides. The European Parliament later released another version with several amendments to address the corruption of tools like ChatGPT, but until the Act makes it to the end of the legislative process, we will not know how exactly the EU is going to regulate generative AI.

This is not the case with the US, which has had more time to follow the evolution of generative AI and develop an approach to it. In particular, the EO lays down particular requirements for the so-called “dual-use foundation models”, defined as:

“An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.” 

A Vertical, Holistic Approach to AI Regulatory Compliance

The White House's approach to AI is not only timely, but it’s also ambitious in its attempt to regulate AI holistically. 

The EO envisions the creation of a regulatory framework to address the technical aspects of machine learning and deep learning algorithms. Furthermore, the document advances a comprehensive policy agenda that affects not only the AI industry itself, but also the many areas and sectors that will likely be affected by the AI revolution. Finally, the EO also takes into consideration the international and cross-border nature of AI, advancing mandates in areas perceived as levers to boost the US leadership in AI, including diplomacy efforts and measures in immigration to attract AI talent in the country.

In this vein, the EO advances a vertical, sectoral approach to AI, where every branch of the federal government will be responsible for developing guidelines and regulations according to the specific sector. To coordinate the progress of the different federal departments and agencies, the EO creates the White House Artificial Intelligence Council. 

This vertical, sector-based approach contrasts with the horizontal, risk-based approach adopted in the EU AI Act, where AI systems are classified according to their level of risk, and not their sectors. While the level of risk is not a clear defining criterion in the EO, it still plays an important role, as illustrated in the definition of the dual-use foundation models. Hence, it’s very likely that AI systems perceived as less dangerous will have to comply with fewer requirements in future federal regulations

Analyzing the Impact: Sectors Affected by the US AI Regulations

Below you can find an overview of the areas affected by the EO:

  • Cybersecurity. The EO promotes the use of AI technologies to increase cybersecurity. Equally, advances the development of guides and measures to evaluate the risks of AI, particularly generative AI tools, on cybersecurity and critical infrastructure,
  • Privacy. The US still lacks a federal data protection law. Yet this absence considerably limits the capacity of the White House, the EO still casts some light on this domain. In particular, the EO emphasises the adoption by federal agencies of so-called privacy-enhancing technologies.
    Immigration. The White House is going to use its powers and resources to attract and retain AI talent in the country, including easing visa and Green Card application processes
    Competition. The EO aims to create a level playing field for AI that works for companies of all sizes. This translates into support for small companies and potential actions to address uncompetitive or colluding behaviour in the AI industry. 
  • Copyright. One of the main concerns following the popularisation of generative AI tools is the potential massive violation of copyright. The EO aims to provide legal certainty for AI-supported inventors and creators affected by AI tools. Interestingly, the EO aims to develop watermark techniques for AI-generated content.
  • Labour. The EO takes into consideration the concerns of workers who may be affected by the disruptive force of AI. In particular, the EO advances the creation of training programmes and other measures for workers facing labour disputes and increases worker protections.
  • Equity and Civil Rights. To protect minority groups from potential biases and discrimination by AI systems, the EO outlines measures to ensure equity and protect civil rights.
  • Housing. Closely related to the previous area, the EO aims to establish safeguards to ensure fair access to housing.
  • Health. The EO also sets an agenda to mitigate the risks of AI in healthcare while leveraging its benefits 
  • Transportation. The same goes for passengers. The aim is to ensure the correct and safe integration of AI in transport.
  • Education. In addition to investing more resources to develop AI skills in educational centres (with an important role of universities), the EO also aims at ensuring responsible and nondiscriminatory uses of AI in education. programmes
  • International Relations. The EO aims for the progressive development of common guidelines, practices, and frameworks among key allies abroad, with a particular focus on Western countries. 

Towards an AI Regulatory Framework in the US

The core of the EO focuses on AI safety and security. AI providers regulated by or willing to do business with the federal government will have to comply with some requirements to ensure safe deployment and responsible use of AI. 

The EO pays special attention to powerful dual-use foundation models like ChatGPT. Providers of these tools will have to share their safety test results and other critical information with the US government before reaching the market. With such requirements, the EO seems to mimic the ex-ante evaluation that high-risk AI systems will have to follow.

The details on the technical requirements will be developed by the National Institute of Standards and Technology (NIST), which will play a critical role in developing standard tools and red-teaming tests. With these actions, the EO envisions the creation of a regulatory framework very much in the fashion of the EU AI Act. While nothing is already formal –the NIST has 270 days to fulfil the mandate–, collaboration with Europe is very likely to advance common standards. 

The Critical Question: Is Biden's New Executive Order on AI Enough?

The EO represents a step forward in the US efforts to regulate AI and lead the AI race. Yet there are still many uncertainties surrounding the EO. 

First of all, it’s still uncertain whether the federal administration will be able to meet the expectations of the White House. To lead by example, it’s important to have the necessary skills and experience. But AI, especially generative AI, is a very recent topic, It’s also a rapidly evolving technology, which makes it difficult to catch up with the latest developments. Despite the efforts envisioned in the EO to prepare the federal administration for the AI revolution, this process will take time, money, and political willingness. 

On the other hand, although the EO has more teeth than the previous voluntary commitments by major AI companies, many of the provisions in the text aren’t legally binding, meaning that the government doesn’t have the power to enforce them. This will require future regulation. While the White House wants federal agencies to play a critical role in advancing sectoral AI regulation, no regulation will be possible without Congress. Here is where President Biden will have to work hard to get the support of the Republicans.

Conclusion: The future of AI Regulation

Congratulations on making it to the end of the article. The EO represents a turning point in the US strategy to lead AI. It advances an ambitious policy agenda to address AI holistically. One year after the release of ChatGPT, the White House has had the time to track the evolution of generative AI. This translates into a better perspective on the benefits and risks of AI, which has allowed the White House to advance a multidimensional programme with implications in nearly every aspect of the US economy. 

It’s still early to know whether the White House will be successful. The EO is just the first step in the development of a regulatory, legally binding framework for AI. Ahead, the federal agencies will have to propose sectoral regulation, following a law-making process that will require the support of the Republicans, given the current equilibrium of forces in Congress.

If the vision of the White House is to succeed, the US will deploy a regulatory framework where the technical aspects of AI will be central. Here is where AI quality tools like Giskard join the scene. Designed as a developer-oriented solution for quality testing, Giskard aims to help AI providers become fully compliant with upcoming AI regulations. 

Giskard allows you to evaluate AI models collaboratively, test your systems with exhaustive, state-of-the-art test suites and protect AI systems against the risk of bias. Try our open-source product and get ready for the age of AI regulation.

Get Free Content

Download our guide and learn What the EU AI Act means for Generative AI Systems Providers.