News
July 27, 2023
4 min read

1,000 GitHub stars, 3M€, and new LLM scan feature  💫

We've reached an impressive milestone of 1,000 GitHub stars and received strategic funding of 3M€ from the French Public Investment Bank and the European Commission. With this funding, we plan to enhance our Giskard platform, helping companies meet upcoming AI regulations and standards. Moreover, we've upgraded our LLM scan feature to detect even more hidden vulnerabilities.

Blanca Rivera Campos

Hi there,

The Giskard team hopes you're having a good week! This month, we are pleased to announce that we have reached a significant milestone: 1,000 GitHub stars!

In addition to this achievement, we have secured strategic funding of 3M€ from the French Public Investment Bank and the European Commission. This will enable us to expand the capabilities of the Giskard platform and help companies comply with upcoming AI regulations & standards.

But that's not all: our LLM scan feature has been upgraded so you can detect even more hidden vulnerabilities! 🙌

👉 Scan your LLM

👥 Community news

1,000 stars on our GitHub repository! 🌟

🙌 Special thanks to our amazing community for their support! We've reached an incredible milestone of 1K stars on our GitHub repository, and it wouldn't have been possible without you.

Check out our repository

😻 We also want to extend our gratitude to the ML thought leaders and content creators who made it all possible.

💰 We secured a 3M€ grant to enhance our solution for ML testing

[Image: EIC Accelerator cut-off 2023]

Giskard has secured a 3M€ grant through strategic investments from Bpifrance, the French public investment bank, and the European Commission's EIC Accelerator fund.

This grant will fuel our platform's expansion into Computer Vision and Green AI, while also enabling the creation of a SaaS solution for Trustworthy AI. The goal is to help companies ensure compliance with the forthcoming EU AI Act.

As winners of the Bpifrance grant (in the i-Lab category), we were selected among the 237 projects distinguished by the French government under the France 2030 strategic plan. Our success underscores our dedication to developing an AI Quality Assurance platform that prioritizes ethics and fairness.

The EIC Accelerator has been instrumental in supporting innovative start-ups and SMEs across Europe since 2018. We are humbled to be one of the 51 companies selected for this prestigious funding following a highly competitive process with a success rate below 5%. The EIC funding will undoubtedly accelerate our mission to improve the Giskard platform and ensure AI safety.

Many thanks to Bpifrance and the European Commission's EIC Accelerator fund for their invaluable support. We are honored and excited about the opportunities ahead! 🙏

🔍 LLM scan: Advanced vulnerability detection

Our team has been working to expand the capabilities of our LLM scan feature, now enabling it to detect more vulnerabilities, including harmfulness, stereotypes, and ethical concerns. With this new version, you can compare different LLMs across more vulnerabilities than ever before.

Whether your favorite model provider is closed source (think OpenAI, Cohere, and co), or whether you're a user of open-source LLMs such as Llama-2, you can now make informed decisions and choose the best & safest option for your specific needs.

[Image: LLM scan feature]

📒 Steps to run it in your notebook

After installing the required libraries, load your LLM chain (via LangChain) and your dataset (via Pandas):
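Here is a minimal sketch of that setup, assuming a simple question-answering chain built with LangChain and an OpenAI model; the chain, the questions.csv file, and the "question" column are hypothetical placeholders, and the exact wrapper arguments may differ between Giskard versions:

```python
# Hypothetical setup sketch; assumes: pip install "giskard[llm]" langchain openai pandas
import pandas as pd
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

import giskard

# A simple question-answering chain (prompt + LLM).
llm = OpenAI(temperature=0)
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question concisely:\n{question}",
)
chain = LLMChain(llm=llm, prompt=prompt)

# A small evaluation dataset loaded with pandas (hypothetical file and column name).
df = pd.read_csv("questions.csv")  # expects a "question" column

# Wrap the chain and the dataframe so Giskard can scan them.
def predict(batch: pd.DataFrame) -> list:
    return [chain.run(question=q) for q in batch["question"]]

model = giskard.Model(
    predict,
    model_type="text_generation",
    name="qa_chain",
    description="Simple question-answering assistant",
    feature_names=["question"],
)
dataset = giskard.Dataset(df)
```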

Then, you can scan your model to detect vulnerabilities in a single line of code!
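Under the same assumptions as the snippet above, that single line looks roughly like this, and the resulting report renders directly in the notebook:

```python
# Run the scan on the wrapped model and dataset from the previous snippet.
results = giskard.scan(model, dataset)

results  # display the interactive report in a notebook (or export it, e.g. results.to_html("scan_report.html"))
```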

👉 Try it in this notebook

🍿 Video tutorials

We're launching a series of short tutorials to guide you in testing your ML models with our open-source library.

In this first tutorial, we show you how to easily install the Giskard Python library. In just 4 lines of code, you will be able to discover hidden vulnerabilities in your models (see the sketch after this list), such as:

✅ Performance biases

✅ Data leakage

✅ Spurious correlations

✅ Overconfidence issues

✅ Underconfidence issues
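As a rough, self-contained illustration of that flow on a toy tabular classifier (the data, model, and column names below are made up for the example, and exact wrapper arguments may differ between Giskard versions):

```python
# A hedged sketch of the "4 lines" scan flow on a toy scikit-learn classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression

import giskard

# Toy data and model, standing in for your real dataset and classifier.
df = pd.DataFrame({
    "age": [22, 35, 47, 51, 29, 63],
    "income": [20_000, 48_000, 61_000, 75_000, 33_000, 82_000],
    "label": ["no", "yes", "yes", "yes", "no", "yes"],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["label"])

# The "4 lines": wrap the model, wrap the data, scan, inspect.
model = giskard.Model(
    lambda batch: clf.predict_proba(batch[["age", "income"]]),
    model_type="classification",
    classification_labels=list(clf.classes_),  # must match predict_proba column order
    feature_names=["age", "income"],
)
dataset = giskard.Dataset(df, target="label")
results = giskard.scan(model, dataset)
results  # view the scan report in the notebook
```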

▶ Make sure to keep an eye on our YouTube channel, as we'll be adding even more video tutorials! We'll be providing guidance on using Giskard, testing your ML models, and making them robust, reliable & ethical.

🗞️ What's the latest news?

The Open-Source AI Imperative: Key Takeaways from Hugging Face CEO's Testimony to the US Congress

[Image: Hugging Face CEO's testimony to the US Congress]

🤗 Recently, Hugging Face's CEO testified before the US Congress on Open-Science and Open-Source AI. We wrote an article exploring the key insights from Clément Delangue's testimony, highlighting the importance of open source and open science in democratizing AI technology and promoting ethical AI development that benefits all.

🗺️ What's next?

Watch out for the wave of new releases coming up in the next few weeks as we gear up to come out of beta mode. Upcoming features will allow you to debug your ML models directly in the Giskard server, with the help of user interface tooltips and customized suggestions.

We're also working on tons of new integrations (MLflow, Hugging Face) with your favorite open-source MLOps and LLMOps tools to optimize your workflows. 👀

Try it in Colab

Stay tuned for the latest updates!

Thank you so much, and see you soon! ❤️


