June 6, 2023

Giskard’s new beta is out! ⭐ Scan your model to detect hidden vulnerabilities

Giskard's new beta release enables you to quickly scan your AI model and detect vulnerabilities directly in your notebook. The new beta also includes a simple one-line installation, automated test suite generation and execution, an improved user experience for collaborating on testing dashboards, and a ready-made test catalog.

Blanca Rivera Campos

We have released Giskard 2 in beta 🚀

Hello there,

This month we have some exciting news: Giskard's new beta release is officially out today!

We’ve listened to community feedback (your feedback!) to significantly improve our open-source solution for AI Safety.

This is the result of months of hard work, during which we've rebuilt our product from the ground up. Our goal is to make it as easy and as fast as possible for data scientists to integrate Giskard into their development workflow and quickly detect vulnerabilities in AI models.

👉 Check out our new Quickstart documentation

Giskard’s new beta introduces some valuable new features to improve the reliability of your ML models:

  • Scan your model to detect vulnerabilities directly in your notebook 📒
  • Automatically generate and run test suites
  • Improved UX for data scientists to collaborate on testing dashboards
  • Ready-made and reusable test catalog

💻 Easy installation

We’ve made the installation process much easier: with one line of code, you can install our Python library and start scanning your model for vulnerabilities.
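The one-liner, assuming you install from PyPI where the library is published as giskard:

```
pip install giskard
```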

And of course, it integrates with popular ML libraries such as PyTorch, TensorFlow, Scikit-learn, Hugging Face, and LangChain. It’s also compatible with a wide range of models, from tabular to large language models (LLMs).

🔍  Scan your model and detect vulnerabilities


We’re excited to introduce our new scan feature, which lets you quickly explore your model’s behavior before running any tests, using just a few lines of code.

Once you have wrapped your model and dataset, you can use the scan feature to find vulnerabilities.
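Here is a minimal sketch of what that looks like (the dataframe df, the prediction function clf.predict_proba, and the argument names below are placeholders; see the Quickstart for the exact signatures):

```python
import giskard

# Wrap your raw data and model in Giskard objects
giskard_dataset = giskard.Dataset(df, target="label")
giskard_model = giskard.Model(
    model=clf.predict_proba,      # your model's prediction function
    model_type="classification",
)

# Scan the wrapped model for vulnerabilities
results = giskard.scan(giskard_model, giskard_dataset)
display(results)  # renders the interactive results widget in your notebook
```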

This will produce a widget in your notebook that allows you to explore the detected issues, such as:

  • Performance bias
  • Unrobustness
  • Overconfidence
  • Data leakage
  • Unethical behavior
  • Stochasticity

📋 Generate and run your test suite

If the automatic scan detects issues in your model, you can automatically generate a set of tests that dive deeper into the detected errors. Test execution is flexible and customizable to your specific needs; you simply provide the necessary parameters through the run method.
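As a sketch, generating and running a suite from the scan results could look like this (method names follow Giskard’s docs, but treat the exact signatures as assumptions):

```python
# Turn the scan results into a runnable test suite
test_suite = results.generate_test_suite("My first test suite")

# Execute it, passing any parameters you want to override (e.g. the model)
test_suite.run(model=giskard_model)
```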

By combining the scan feature with automatic test suites, data scientists can easily identify issues in their models, saving time and helping ensure model performance and reliability. You can then interactively debug the problems by uploading the generated test suite to the Giskard UI.
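For example (the server URL, API token, and project key below are placeholders, and the client class is an assumption based on Giskard’s docs):

```python
# Connect to your Giskard server and upload the suite for interactive debugging
client = giskard.GiskardClient("http://localhost:19000", "YOUR_API_TOKEN")
test_suite.upload(client, "my_project")
```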

🎨  Improved UX

Giskard UI - test suite

Based on user feedback that the UI felt clunky, we’ve polished it and included improvements such as:

  • To make your onboarding easier, we’ve added a step-by-step guide showing you how to connect your ML worker, upload a test to Giskard, use the scan feature, and more.
  • Test suites are more readable, making it easier to see passed/failed tests, compare models, and share testing dashboards with your team.

📚 Reusable and ready-made test catalog


To simplify and accelerate the testing process, we’ve introduced a catalog of pre-built tests, data slicing functions, and transformation functions. This eliminates the need to create new testing artifacts from scratch for every new use case.
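As a sketch of how catalog artifacts can be reused (the specific test name and arguments are illustrative; browse the catalog for what’s actually available):

```python
from giskard import Suite, testing

# Assemble a suite from a ready-made catalog test instead of writing your own
suite = Suite(name="Performance checks")
suite.add_test(testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.8))
suite.run()
```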

🗺 More to come

We are actively developing the next releases. Upcoming features include detection of spurious correlations in the scan and automatic suggestions for which tests to write while debugging.

Try our new beta

Stay tuned for the latest updates and advancements in our quest to provide you with the best tool for AI Quality Assurance.


Thank you so much, and see you soon! ❤️
