Giskard Vision is a new module in our open-source library designed to assess and improve computer vision models. It offers automated detection of performance issues, biases, and ethical concerns in image classification, object detection, and landmark detection tasks. The article provides a step-by-step guide on how to integrate Giskard Vision into existing workflows, enabling data scientists to enhance the reliability and fairness of their computer vision systems.
Giskard has integrated with NVIDIA NeMo Guardrails to enhance the safety and reliability of LLM-based applications. This integration allows developers to better detect vulnerabilities, automate rail generation, and streamline risk mitigation in LLM systems. By combining Giskard with NeMo Guardrails, organizations can address critical challenges in LLM development, including hallucinations, prompt injection, and jailbreaks.
The Council of Europe has signed the world's first AI treaty, marking a significant step towards global AI governance. This Framework Convention on Artificial Intelligence aligns closely with the EU AI Act, adopting a risk-based approach to protect human rights and foster innovation. The treaty impacts businesses by establishing requirements for trustworthy AI, mandating transparency, and emphasizing risk management and compliance.
The EU AI Act, published on July 12, 2024, establishes the world's first comprehensive regulatory framework for AI technologies, with a gradual implementation timeline from 2024 to 2027. It adopts a risk-based approach, imposing varying requirements on AI systems based on their risk level.
The ArGiMi consortium, including Giskard, Artefact, and Mistral AI, has won a France 2030 project to develop next-generation French LLMs for businesses. Giskard will lead efforts in AI safety, ensuring model quality, conformity, and security. The project will be open-source, fostering collaboration and aiming to make AI more reliable, ethical, and accessible across industries.
Giskard has integrated with Databricks MLflow to enhance LLM testing and deployment. This collaboration allows AI teams to automatically identify vulnerabilities, generate domain-specific tests, and log comprehensive reports directly into MLflow. The integration aims to streamline the development of secure, reliable, and compliant LLM applications, addressing key risks like prompt injection, hallucinations, and unintended data disclosures.
We're releasing an upgraded version of Giskard's LLM scan for comprehensive vulnerability assessments of LLM applications. New features include more accurate detectors through optimized prompts and expanded multi-model compatibility, supporting OpenAI, Mistral, Ollama, and custom local LLMs. The article also includes an initial setup guide for evaluating LLM apps.
Our new course, created in collaboration with the DeepLearning.AI team, provides training on red teaming techniques for Large Language Model (LLM) and chatbot applications. Through hands-on attacks using prompt injections, you'll learn how to identify vulnerabilities and security failures in LLM systems.
Introducing our LLM Red Teaming service, designed to enhance the safety and security of your LLM applications. Discover how our team of ML Researchers uses red teaming techniques to identify and address LLM vulnerabilities. Our new service focuses on mitigating risks like misinformation and data leaks by developing comprehensive threat models.
The Council of the EU has recently voted unanimously on the final version of the European AI Act, a significant step forward in the EU's effort to legislate the first AI law in the world. The Act establishes a regulatory framework for the safe use and development of AI, categorizing AI systems according to their associated risk. In the coming months, the text will enter the last stage of the legislative process, where the European Parliament will have a final vote on the AI Act.
Our 2023 retrospective covers people, company, customers, and product news, and offers a glimpse into what's next for 2024. Our team keeps growing, with new offices in Paris, new customers, and new product features. Our GitHub repo has nearly reached 2,500 stars, and we were Product of the Day on Product Hunt. All this and more in our 2023 review.
The EU's AI Act establishes rules for AI use and development, focusing on ethical standards and safety. It categorizes AI systems, highlights high-risk uses, and sets compliance requirements. This legislation, a first in global AI governance, signals a shift towards responsible AI innovation in Europe.
One year after the launch of ChatGPT, regulators worldwide are still figuring out how to regulate Generative AI. The EU is going through intense debates on how to finalize the so-called 'EU AI Act' after two years of legislative process. At the same time, only one month ago, the White House surprised everyone with a landmark Executive Order to regulate AI in the US. In this article, I delve into the Executive Order and advance some ideas on how it can impact the whole AI regulatory landscape.
We have just launched Giskard v2, extending the testing capabilities of our library and Hub to Large Language Models. Support our launch on Product Hunt and explore our new integrations with Hugging Face, Weights & Biases, MLflow, and DagsHub. A big thank you to our community for helping us reach over 1,900 stars on GitHub.
In this article, we present the challenges and approaches to AI regulation in major jurisdictions such as the European Union, the United States, China, Canada, and the UK. Explore the growing impact of AI on society and how AI quality tools like Giskard ensure reliable models and compliance.
Giskard's Co-Founder and CPO, Jean-Marie John-Mathews, was recently interviewed by Safety Detectives, where he shared insights into the company's mission to advance AI Safety and Quality. In this interview, Jean-Marie explains the strategies, vulnerabilities, and ethical considerations at the forefront of AI technology, as Giskard bridges the gap between AI models and real-world applications.
In this post, we introduce OWASP's first version of the Top 10 for LLM, which identifies critical security risks in modern LLM systems. It covers vulnerabilities like Prompt Injection, Insecure Output Handling, Model Denial of Service, and more. Each vulnerability is explained with examples, prevention tips, attack scenarios, and references. The document serves as a valuable guide for developers and security practitioners to protect LLM-based applications and data from potential attacks.
In a significant move towards AI regulation, President Biden convened a meeting with top tech companies, leading to a White House pledge that emphasizes AI safety and transparency. Companies like Google, Amazon, and OpenAI have committed to pre-release system testing, data transparency, and AI-generated content identification. As tech giants signal their intent, concerns remain regarding the specificity of their commitments.
We've reached an impressive milestone of 1,000 GitHub stars and received strategic funding of €3M from the French Public Investment Bank and the European Commission. With this funding, we plan to enhance the Giskard platform, helping companies meet upcoming AI regulations and standards. Moreover, we've upgraded our LLM scan feature to detect even more hidden vulnerabilities.
Explore key insights from Clément Delangue's testimony to the US Congress on Open-Science and Open-Source AI. Understand the importance of Open-Source & Open-Science to democratize AI technology and promote ethical AI development that benefits all.
Giskard's new beta release enables you to quickly scan your AI model and detect vulnerabilities directly in your notebook. The new beta also includes simple one-line installation, automated test suite generation and execution, an improved user experience for collaboration on testing dashboards, and a ready-made test catalog.
In light of the widespread and rapid adoption of ChatGPT and other Generative AI models, which have brought new risks, the EU Parliament has accelerated its agenda on AI. The vote that took place on May 11, 2023 represents a significant milestone in the path toward the adoption of the first comprehensive AI regulation.
During this exclusive interview for BFM Business, Alex Combessie, our CEO and co-founder, spoke about the potential risks of AI for companies and society. As new AI technologies like ChatGPT emerge, concerns about the dangers of untested models have increased. Alex stresses the importance of Responsible AI, which involves identifying ethical biases and preventing errors. He also discusses the future of EU regulations and their potential impact on businesses.
With Giskard’s SafeGPT, you can say goodbye to errors, biases, and privacy issues in LLMs. Its features include an easy-to-use browser extension and a monitoring dashboard (for ChatGPT users), and a ready-made, extensible quality assurance platform for debugging any LLM (for LLM developers).
With Giskard’s new Slice feature, you can identify the business areas in which your AI models underperform, making it easier to debug performance biases and spot spurious correlations. We have also added an export/import feature to share your projects, as well as other minor improvements.
AI poses new trust, risk and security management requirements that conventional controls do not address. This Market Guide defines new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions.
Giskard's retrospective of 2022, covering people, company, customers and product news, and a look into what's next for 2023. 2022 was a pivotal year, as we went from 3 to 10 people, raised our first round, expanded our product and grew our customer base. We share a special announcement, and unveil the key features that will come in Giskard 2.0 this year.
In this interview, Jean-Marie John-Mathews, co-founder of Giskard, discusses the ethical and security concerns of AI. While AI is not new, recent developments like ChatGPT bring a leap in performance that requires rethinking how AI has been built. We discuss the fear and fantasy surrounding AI, and how it can introduce biases and cause industrial incidents. Jean-Marie argues that protection against AI risks resides in tests and safeguards that ensure responsible AI.
The funding led by Elaia, with participation from Bessemer Venture Partners and notable angel investors, will accelerate the development of an enterprise-ready platform to help companies test, audit & ensure the quality of AI models.
With Giskard’s new External ML Worker feature, we introduce a gRPC tunnel that reverses the client-server communication, so that data scientists can reuse an existing Python environment for model execution by Giskard.
Why do great Data Scientists & ML Engineers love writing tests? Two customer case studies on improving model robustness and ensuring AI Ethics.
What are the preferences of ML Engineers in terms of UX? A summary of key learnings, and how we implemented them in Giskard's latest release.
We explain why Giskard changed its value proposition, and how we translated it into a new visual identity.
The Open Beta of Giskard's AI Test feature: an automated way to test your ML models and ensure performance, robustness, and ethics.
The Giskard team explains the ongoing shift toward AI Quality, and how we launched the first community for AI Quality Innovators.
We explain why the Giskard team decided to go Open-Source, how we launched our first version, and what's next for our Community.
The Giskard team wishes you a happy 2022! Here is a summary of what we accomplished in 2021.