All Knowledge

News

[Release notes] Giskard integrates with LiteLLM: Simplifying LLM agent testing across foundation models

Giskard's integration with LiteLLM enables developers to test their LLM agents across multiple foundation models. The integration enhances Giskard's core features (LLM Scan for vulnerability assessment and RAGET for RAG evaluation) by allowing them to work with any supported LLM provider, whether you're using major cloud providers like OpenAI and Anthropic, local deployments through Ollama, or open-source models like Mistral.
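
As a rough illustration of what this unlocks, here is a minimal sketch of switching providers (assuming a recent version of the giskard Python package; the model names are examples rather than an exhaustive list):

    import giskard

    # Giskard routes its evaluation LLM calls through LiteLLM, so the
    # provider is chosen with a LiteLLM-style model string.
    giskard.llm.set_llm_model("gpt-4o-mini")                            # OpenAI
    # giskard.llm.set_llm_model("anthropic/claude-3-5-sonnet-20241022")  # Anthropic
    # giskard.llm.set_llm_model("ollama/llama3")                         # local via Ollama
    # giskard.llm.set_llm_model("mistral/mistral-large-latest")          # Mistral

    # Embeddings (used by RAGET) can be redirected the same way.
    giskard.llm.set_embedding_model("text-embedding-3-small")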

Blanca Rivera Campos
View post
News

AI Liability in the EU: Business guide to Product (PLD) and AI Liability Directives (AILD)

The EU is establishing an AI liability framework through two key regulations: the Product Liability Directive (PLD), taking effect in 2024, and the proposed AI Liability Directive (AILD). The PLD introduces strict liability for defective AI systems and software, while the AILD addresses negligent use, though its final form remains under debate. This article covers the key points of these regulations and how they will impact businesses.

Stanislas Renondin
View post
News

Giskard Vision: Enhance Computer Vision models for image classification, object and landmark detection

Giskard Vision is a new module in our open-source library designed to assess and improve computer vision models. It offers automated detection of performance issues, biases, and ethical concerns in image classification, object detection, and landmark detection tasks. The article provides a step-by-step guide on how to integrate Giskard Vision into existing workflows, enabling data scientists to enhance the reliability and fairness of their computer vision systems.

Benoît Malézieux
View post
News

Evaluating LLM applications: Giskard Integration with NVIDIA NeMo Guardrails

Giskard has integrated with NVIDIA NeMo Guardrails to enhance the safety and reliability of LLM-based applications. This integration allows developers to better detect vulnerabilities, automate rail generation, and streamline risk mitigation in LLM systems. By combining Giskard with NeMo Guardrails, organizations can address critical challenges in LLM development, including hallucinations, prompt injection, and jailbreaks.
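
As a sketch of how the two tools can fit together (the rails configuration and the predict wrapper below are illustrative placeholders, not Giskard's or NVIDIA's reference setup), one can wrap a guarded app as a Giskard model and scan it:

    import giskard
    import pandas as pd
    from nemoguardrails import LLMRails, RailsConfig

    # Illustrative rails config: one flow that refuses off-topic questions.
    config = RailsConfig.from_content(
        colang_content="""
    define user ask off topic
      "What do you think about politics?"

    define flow
      user ask off topic
      bot refuse to answer
    """,
        yaml_content="""
    models:
      - type: main
        engine: openai
        model: gpt-4o-mini
    """,
    )
    rails = LLMRails(config)

    def predict(df: pd.DataFrame) -> list:
        # Route every question through the guarded application.
        return [
            rails.generate(messages=[{"role": "user", "content": q}])["content"]
            for q in df["question"]
        ]

    model = giskard.Model(
        model=predict,
        model_type="text_generation",
        name="Guarded assistant",
        description="Q&A assistant protected by NeMo Guardrails",
        feature_names=["question"],
    )
    report = giskard.scan(model)  # probes for injection, jailbreaks, etc.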

Blanca Rivera Campos
View post
News

Global AI Treaty: EU, UK, US, and Israel sign landmark AI regulation

The Council of Europe has signed the world's first AI treaty, marking a significant step towards global AI governance. This Framework Convention on Artificial Intelligence aligns closely with the EU AI Act, adopting a risk-based approach to protect human rights and foster innovation. The treaty impacts businesses by establishing requirements for trustworthy AI, mandating transparency, and emphasizing risk management and compliance.

Stanislas Renondin
View post
News

The EU AI Act published in the EU Official Journal: Next steps for AI Regulation

The EU AI Act, published on July 12, 2024, establishes the world's first comprehensive regulatory framework for AI technologies, with a gradual implementation timeline from 2024 to 2027. It adopts a risk-based approach, imposing varying requirements on AI systems based on their risk level.

David Mercado
View post
News

Giskard leads GenAI Evaluation in France 2030's ArGiMi Consortium

The ArGiMi consortium, including Giskard, Artefact, and Mistral AI, has won a France 2030 project to develop next-generation French LLMs for businesses. Giskard will lead efforts in AI safety, ensuring model quality, conformity, and security. The project's deliverables will be open-source, fostering collaboration and aiming to make AI more reliable, ethical, and accessible across industries.

Blanca Rivera Campos
View post
News

Partnership announcement: Bringing Giskard LLM evaluation to Databricks

Giskard has integrated with Databricks MLflow to enhance LLM testing and deployment. This collaboration allows AI teams to automatically identify vulnerabilities, generate domain-specific tests, and log comprehensive reports directly into MLflow. The integration aims to streamline the development of secure, reliable, and compliant LLM applications, addressing key risks like prompt injection, hallucinations, and unintended data disclosures.
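
To give a feel for the workflow, here is a minimal hedged sketch (the model URI and evaluation data are placeholders): with the giskard package installed, Giskard is exposed as an MLflow evaluator plugin, so a logged model can be scanned through MLflow's standard evaluation API:

    import mlflow
    import pandas as pd

    # Placeholder evaluation set: prompts the application should handle.
    eval_df = pd.DataFrame({"question": ["What is your refund policy?"]})

    with mlflow.start_run():
        # The scan runs as an MLflow evaluator, and its report is
        # logged as artifacts of the active run.
        mlflow.evaluate(
            model="models:/my-llm-app/1",  # placeholder model URI
            model_type="text",
            data=eval_df,
            evaluators="giskard",
        )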

Alex Combessie
View post
News

[Release notes] LLM app vulnerability scanner for Mistral, OpenAI, Ollama, and Custom Local LLMs

We're releasing an upgraded version of Giskard's LLM scan for comprehensive vulnerability assessments of LLM applications. New features include more accurate detectors through optimized prompts and expanded multi-model compatibility supporting OpenAI, Mistral, Ollama, and custom local LLMs. This article also covers an initial setup guide for evaluating LLM apps.
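
In outline, the setup follows the pattern below: a minimal sketch using the library's current configuration API, in which the prediction function is a stand-in for your own application and the Ollama model string is just one example of a supported backend:

    import giskard
    import pandas as pd

    # Pick the LLM used by the scan's detectors: a local Ollama model here,
    # but OpenAI, Mistral, or a custom local LLM work the same way.
    giskard.llm.set_llm_model("ollama/llama3")

    def predict(df: pd.DataFrame) -> list:
        # Placeholder: replace with calls to your own LLM application.
        return [f"(your app's answer to: {q})" for q in df["question"]]

    model = giskard.Model(
        model=predict,
        model_type="text_generation",
        name="Demo Q&A app",
        description="Answers customer questions about our product",
        feature_names=["question"],
    )

    report = giskard.scan(model)        # run the vulnerability scan
    report.to_html("scan_report.html")  # export a shareable report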

Blanca Rivera Campos
View post
News

New course with DeepLearning.AI: Red Teaming LLM Applications

Our new course, created in collaboration with the DeepLearning.AI team, provides training on red teaming techniques for Large Language Model (LLM) and chatbot applications. Through hands-on attacks using prompt injections, you'll learn how to identify vulnerabilities and security failures in LLM systems.

Blanca Rivera Campos
View post
News

LLM Red Teaming: Detect safety & security breaches in your LLM apps

Introducing our LLM Red Teaming service, designed to enhance the safety and security of your LLM applications. Discover how our team of ML Researchers uses red teaming techniques to identify and address LLM vulnerabilities. Our new service focuses on mitigating risks like misinformation and data leaks by developing comprehensive threat models.

Blanca Rivera Campos
View post
News

EU AI Act: 8 Takeaways from the Council's Final Approval

The Council of the EU has voted unanimously to approve the final version of the European AI Act, a significant step forward in the effort to legislate the world's first AI law. The Act establishes a regulatory framework for the safe use and development of AI, categorizing AI systems according to their associated risk. In the coming months, the text will enter the last stage of the legislative process, where the European Parliament will hold a final vote on the AI Act.

Javier Canales Luna
View post
News

Giskard's retrospective of 2023 and a glimpse into what's next for 2024!

Our 2023 retrospective covers people, company, customer, and product news, and offers a glimpse into what's next for 2024. Our team keeps growing, with new offices in Paris, new customers, and new product features. Our GitHub repo has nearly reached 2,500 stars, and we were Product of the Day on Product Hunt. All this and more in our 2023 review.

Alex Combessie
View post
News

EU AI Act: The EU Strikes a Historic Agreement to Regulate AI

The EU's AI Act establishes rules for AI use and development, focusing on ethical standards and safety. It categorizes AI systems, highlights high-risk uses, and sets compliance requirements. This legislation, a first in global AI governance, signals a shift towards responsible AI innovation in Europe.

Javier Canales Luna
View post
News

Biden's Executive Order: The Push to Regulate AI in the US

One year after the launch of ChatGPT, regulators worldwide are still figuring out how to regulate Generative AI. The EU is going through intense debates on how to finalize the so-called 'EU AI Act' after two years of legislative process. At the same time, only one month ago, the White House surprised everyone with a landmark Executive Order to regulate AI in the US. In this article, I delve into the Executive Order and offer some ideas on how it could impact the whole AI regulatory landscape.

Javier Canales Luna
View post
News

Our LLM Testing solution is launching on Product Hunt 🚀

We have just launched Giskard v2, extending the testing capabilities of our library and Hub to Large Language Models. Support our launch on Product Hunt and explore our new integrations with Hugging Face, Weights & Biases, MLflow, and DagsHub. A big thank you to our community for helping us reach over 1,900 stars on GitHub.

Blanca Rivera Campos
View post
News

Towards AI Regulation: How Countries are Shaping the Future of Artificial Intelligence

In this article, we present the challenges and approaches to AI regulation in major jurisdictions such as the European Union, the United States, China, Canada, and the UK. Explore the growing impact of AI on society and how AI quality tools like Giskard ensure reliable models and compliance.

Javier Canales Luna
View post
News

AI Safety and Security: A Conversation with Giskard's Co-Founder and CPO

Giskard's Co-Founder and CPO, Jean-Marie John-Mathews, was recently interviewed by Safety Detectives, where he shared insights into the company's mission to advance AI Safety and Quality. In the interview, Jean-Marie explains the strategies, vulnerabilities, and ethical considerations at the forefront of AI technology, as Giskard bridges the gap between AI models and real-world applications.

Angelo Pedraza Sedano
View post
News

OWASP Top 10 for LLM 2023: Understanding the Risks of Large Language Models

In this post, we introduce OWASP's first version of the Top 10 for LLM, which identifies critical security risks in modern LLM systems. It covers vulnerabilities like Prompt Injection, Insecure Output Handling, Model Denial of Service, and more. Each vulnerability is explained with examples, prevention tips, attack scenarios, and references. The document serves as a valuable guide for developers and security practitioners to protect LLM-based applications and data from potential attacks.

Matteo Dora
View post
News

White House pledge targets AI regulation with Top Tech companies

In a significant move towards AI regulation, President Biden convened a meeting with top tech companies, leading to a White House pledge that emphasizes AI safety and transparency. Companies like Google, Amazon, and OpenAI have committed to pre-release system testing, data transparency, and AI-generated content identification. As tech giants signal their intent, concerns remain regarding the specificity of their commitments.

Blanca Rivera Campos
View post
News

1,000 GitHub stars, 3M€, and new LLM scan feature 💫

We've reached an impressive milestone of 1,000 GitHub stars and received strategic funding of 3M€ from the French Public Investment Bank and the European Commission. With this funding, we plan to enhance our Giskard platform, helping companies meet upcoming AI regulations and standards. Moreover, we've upgraded our LLM scan feature to detect even more hidden vulnerabilities.

Blanca Rivera Campos
View post
News

The Open-Source AI Imperative: Key Takeaways from Hugging Face CEO's Testimony to the US Congress

Explore key insights from Clément Delangue's testimony to the US Congress on Open-Science and Open-Source AI. Understand the importance of Open-Source and Open-Science in democratizing AI technology and promoting ethical AI development that benefits all.

Alex Combessie
View post
News

Giskard’s new beta is out! ⭐ Scan your model to detect hidden vulnerabilities

Giskard's new beta release lets you quickly scan your AI model and detect vulnerabilities directly in your notebook. The new beta also includes simple one-line installation, automated test suite generation and execution, improved user experience for collaboration on testing dashboards, and a ready-made test catalog.

Blanca Rivera Campos
View post
News

The EU AI Act: What can you expect from the upcoming European regulation of AI?

In light of the widespread and rapid adoption of ChatGPT and other Generative AI models, which have brought new risks, the EU Parliament has accelerated its agenda on AI. The vote that took place on May 11, 2023 represents a significant milestone in the path toward the adoption of the first comprehensive AI regulation.

Javier Canales Luna
View post
News

Exclusive Interview: How to eliminate risks of AI incidents in production

During this exclusive interview for BFM Business, Alex Combessie, our CEO and co-founder, spoke about the potential risks of AI for companies and society. As new AI technologies like ChatGPT emerge, concerns about the dangers of untested models have increased. Alex stresses the importance of Responsible AI, which involves identifying ethical biases and preventing errors. He also discusses the future of EU regulations and their potential impact on businesses.

Blanca Rivera Campos
View post
News

🔥 The safest way to use ChatGPT... and other LLMs

With Giskard’s SafeGPT, you can say goodbye to errors, biases, and privacy issues in LLMs. Its features include an easy-to-use browser extension and a monitoring dashboard (for ChatGPT users), and a ready-made, extensible quality assurance platform for debugging any LLM (for LLM developers).

Blanca Rivera Campos
View post
News

Giskard 1.4 is out! What's new in this version? ⭐

With Giskard’s new Slice feature, we introduce the ability to identify business areas in which your AI models underperform. This makes it easier to debug performance biases or identify spurious correlations. We have also added an export/import feature to share your projects, along with other minor improvements.

Blanca Rivera Campos
View post
News

Giskard mentioned as a significant vendor in Gartner's Market Guide for AI Trust, Risk and Security Management

AI poses new trust, risk and security management requirements that conventional controls do not address. This Market Guide defines new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions.

Alex Combessie
View post
News

Giskard's retrospective of 2022... And a look into what's coming in 2023!

Giskard's retrospective of 2022, covering people, company, customers and product news, and a look into what's next for 2023. 2022 was a pivotal year, as we went from 3 to 10 people, raised our first round, expanded our product and grew our customer base. We share a special announcement, and unveil the key features that will come in Giskard 2.0 this year.

Alex Combessie
View post
News

Exclusive interview: our first television appearance on AI risks & security

In this interview, Jean-Marie John-Mathews, co-founder of Giskard, discusses the ethical & security concerns of AI. While AI is not new, recent developments like ChatGPT bring a leap in performance that requires rethinking how AI is built. We discuss the fears and fantasies surrounding AI, and how it can introduce biases and cause industrial incidents. Jean-Marie suggests that protection against AI risks resides in tests and safeguards that ensure responsible AI.

Jean-Marie John-Mathews, Ph.D.
View post
Giskard's co-founders: Andrei Avtomonov (left), Jean-Marie John-Mathews (center), Alex Combessie (right)
News

Giskard closes its first financing round to expand Enterprise offering

The funding round, led by Elaia with participation from Bessemer Venture Partners and notable angel investors, will accelerate the development of an enterprise-ready platform to help companies test, audit, and ensure the quality of AI models.

Alex Combessie
View post
News

Giskard is coming to your notebook: Python meets Java via gRPC tunnel

With Giskard’s new External ML Worker feature, we introduce a gRPC tunnel that reverses the client-server communication, so that data scientists can reuse an existing Python code environment for model execution by Giskard.

Andrei Avtomonov
View post
News

Why do Citibeats & Altaroad Test AI Models? The Business Value of Test-Driven Data Science

Why do great Data Scientists & ML Engineers love writing tests? Two customer case studies on improving model robustness and ensuring AI Ethics.

Alex Combessie
View post
News

Does User Experience Matter to ML Engineers? Giskard Latest Release

What are the preferences of ML Engineers in terms of UX? A summary of key learnings, and how we implemented them in Giskard's latest release.

Alex Combessie
View post
News

Why & how we decided to change Giskard's identity

We explain why Giskard changed its value proposition, and how we translated it into a new visual identity.

Alex Combessie
View post
News

Giskard's new feature: Automated Machine Learning Testing

The Open Beta of Giskard's AI Test feature: an automated way to test your ML models and ensure performance, robustness, and ethics.

Alex Combessie
View post
News

Who cares about AI Quality? Launching our AI Innovator community

The Giskard team explains the ongoing shift toward AI Quality, and how we launched the first community for AI Quality Innovators.

Alex Combessie
View post
News

Why & how we decided to make Giskard Open-Source

We explain why the Giskard team decided to go Open-Source, how we launched our first version, and what's next for our Community.

Alex Combessie
View post
News

Wishing y’all a happy & healthy 2022! 🎊

The Giskard team wishes you a happy 2022! Here is a summary of what we accomplished in 2021.

Alex Combessie
View post