Ensuring the safety and reliability of LLM-based applications is critical as more industries, from healthcare to finance, deploy them in their operations. With this in mind, we built a new integration between Giskard and NVIDIA NeMo Guardrails, an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
LLMs are susceptible to issues that traditional models don't face, such as hallucinations and adversarial attacks like jailbreaks and prompt injections. Developers often struggle with where to start, which issues to prioritize, and how to implement effective tests for these risks. Errors in LLMs can lead to legal liability, reputational damage, and costly service disruptions.
Giskard’s integration with NeMo Guardrails equips developers to create more secure and robust LLM-based applications, helping address the need for comprehensive LLM evaluation.
Why integrate with Giskard?
By using Giskard together with NeMo Guardrails, developers can significantly improve how they build and deploy LLM applications. In particular, they will be able to:
- Better detect LLM vulnerabilities: Giskard’s scanning capabilities can identify potential weaknesses and failure modes in LLM applications.
- Automate rail generation: Based on the vulnerabilities detected by Giskard, the integration can automatically generate Colang rules that address them.
- Streamline risk mitigation: By combining Giskard’s testing with NeMo Guardrails’ control mechanisms, developers can anticipate and prevent potential problems before deployment. The result is more secure LLM-based applications and a simpler workflow for identifying issues and implementing protective measures.
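The rail-generation idea above can be illustrated with a toy sketch. This is not Giskard's actual implementation (the scan report exposes its own export method, described in the integration documentation); the helper function and the rule templates below are hypothetical, showing only the general pattern of mapping detected vulnerability categories to Colang 1.0 flows.

```python
# Toy illustration of vulnerability-to-rail generation.
# NOTE: hypothetical helper, not Giskard's real API. The Colang bodies are
# illustrative placeholders for the rules a real export would produce.

COLANG_TEMPLATES = {
    "jailbreak": (
        "define flow check jailbreak\n"
        "  user ...\n"
        "  $allowed = execute check_jailbreak\n"
        "  if not $allowed\n"
        "    bot refuse to respond\n"
    ),
    "prompt_injection": (
        "define flow check prompt injection\n"
        "  user ...\n"
        "  $safe = execute check_prompt_injection\n"
        "  if not $safe\n"
        "    bot refuse to respond\n"
    ),
}

def generate_rails(detected_vulnerabilities):
    """Map detected vulnerability categories to Colang 1.0 rule snippets."""
    rules = [
        COLANG_TEMPLATES[v]
        for v in detected_vulnerabilities
        if v in COLANG_TEMPLATES
    ]
    return "\n".join(rules)

# Example: suppose a scan flagged jailbreaks and prompt injections.
print(generate_rails(["jailbreak", "prompt_injection"]))
```

The point of the pattern is that each vulnerability category found during testing translates mechanically into a guardrail, so nothing the scan finds is left unprotected.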
How it works
The integration process is straightforward. After running a Giskard scan on an LLM application, users can export the detected vulnerabilities as Colang rules using a simple Python command. These generated rules can then be incorporated into the NeMo Guardrails configuration, providing immediate protection against the identified risks. The integration supports both Colang 1.0 and 2.x versions, offering flexibility for different development environments.
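For context, a NeMo Guardrails configuration is a directory containing a `config.yml` plus Colang files, so exported rules can be dropped in as an additional `.co` file next to it. The fragment below is a minimal sketch; the model name and file names are illustrative:

```yaml
# config/config.yml — minimal NeMo Guardrails configuration (illustrative)
models:
  - type: main
    engine: openai
    model: gpt-4o-mini   # any supported chat model

# Colang files placed in the same config directory (e.g. rules exported
# from a Giskard scan, saved as config/giskard_rails.co) are picked up
# when the configuration is loaded.
```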
More about Giskard’s integration with NeMo Guardrails
Giskard’s integration with NVIDIA NeMo Guardrails helps create safer, more reliable LLM-based applications. By combining Giskard's advanced testing capabilities with NeMo Guardrails' robust control mechanisms, developers can now more easily identify and mitigate potential risks in their AI systems. We invite developers and organizations to explore this integration and experience the benefits of enhanced LLM application security.
For more details about this integration, read the documentation.
At Giskard we are developing a holistic platform to address LLM risks across quality, security, and compliance domains. Our platform allows AI teams to automate test creation, enabling efficient model validation, detailed reporting, and optimized review procedures.
Reach out to us today to learn more about how we can help ensure your LLM-based applications are safe and reliable.