June 7, 2024

Partnership announcement: Bringing Giskard LLM evaluation to Databricks

Giskard has integrated with Databricks MLflow to enhance LLM testing and deployment. This collaboration allows AI teams to automatically identify vulnerabilities, generate domain-specific tests, and log comprehensive reports directly into MLflow. The integration aims to streamline the development of secure, reliable, and compliant LLM applications, addressing key risks like prompt injection, hallucinations, and unintended data disclosures.

Giskard + Databricks integration
Alex Combessie

At Giskard, we are committed to helping enterprises safely develop and deploy Large Language Models (LLMs) at scale. That's why we're excited to announce our integration with MLflow, an open-source platform developed by Databricks for managing end-to-end machine learning workflows.

As companies increasingly adopt LLMs, particularly for applications like customer support chatbots, knowledge bases and question-answering with Retrieval Augmented Generation (RAG), it's crucial to comprehensively evaluate model outputs. According to OWASP, key vulnerabilities that must be addressed include prompt injection attacks, hallucinations and misinformation, unintended data disclosures, and generation of harmful or unethical content.

These issues can lead to regulatory penalties, reputational damage, ethical missteps, and erosion of public trust in AI systems. The Giskard-MLflow integration creates an ideal solution for building secure and compliant LLM applications while mitigating risks.

Giskard - Databricks MLflow workflow
“By combining Giskard's open-source LLM evaluation capabilities with Databricks MLflow's model management features, we're making it easier for AI teams to incorporate comprehensive testing into their ML pipelines, and deploy LLM applications with confidence”, said Alex Combessie, CEO and co-founder of Giskard.

With this integration, AI teams can now use Giskard's open-source Scan feature to automatically identify vulnerabilities in ML models and LLMs, instantly generate domain-specific tests, and leverage the quality-assurance best practices of the open-source community.

Giskard's vulnerability reports and metrics are automatically logged into Databricks MLflow, so AI teams can easily compare model performance across versions and experiments. The integration makes it possible to compare the issues detected in different model versions, and provides vulnerability reports that explain the source and reasoning behind each issue, with examples. With the Giskard and Databricks partnership, you can ensure your models are safe, reliable, and compliant from development to deployment.

LLM Scan results

At Giskard, our goal is to build a holistic platform that covers the quality, security, and compliance risks of AI models. Our solution helps AI teams automatically create tests, allowing them to efficiently validate models, generate comprehensive reports, and streamline review processes.

Reach out to us today to learn more about how we can help you make the most of your AI investments.


