We're excited to announce that Giskard’s open-source library now integrates with LiteLLM, simplifying how you test and evaluate your LLM agents across foundation model providers. You can now use any LLM provider supported by LiteLLM for testing your agents, from major providers like OpenAI and Anthropic to open-source models like Mistral, as well as your own custom models.
What is LiteLLM?
LiteLLM is a unified interface that allows developers to interact with various LLM providers using an OpenAI-like format. It supports a wide range of providers, including OpenAI, Azure, AWS Bedrock, Anthropic, Mistral, Gemini, and others. LiteLLM handles the complexity of different API formats, providing consistent input/output patterns and managing important operational aspects like retries and fallbacks.
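To illustrate, here is a minimal sketch of that unified interface, using LiteLLM's documented `completion` call. The API keys are placeholders, and the model strings are examples; running it requires real provider credentials:

```python
import os
from litellm import completion  # pip install litellm

# Providers pick up credentials from their standard environment variables
os.environ["OPENAI_API_KEY"] = "sk-..."         # placeholder
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder

messages = [{"role": "user", "content": "Say hello in one word."}]

# Same call shape for every provider; only the model string changes
openai_reply = completion(model="gpt-4o-mini", messages=messages)
anthropic_reply = completion(model="anthropic/claude-3-haiku-20240307",
                             messages=messages)

# Responses follow the OpenAI response format regardless of provider
print(openai_reply.choices[0].message.content)
```

Because both responses share the OpenAI-style structure, downstream code never needs provider-specific branches.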
Why integrate Giskard with LiteLLM?
The integration of LiteLLM expands Giskard's capabilities by providing access to a vast ecosystem of LLM providers, bringing several key benefits:
- Comprehensive model support: Test your agents with any major LLM provider, including OpenAI and Anthropic, cloud platforms like AWS Bedrock and Azure OpenAI, local deployments through Ollama, and open-source models like Mistral.
- Simplified custom model integration: Integrate your custom or self-hosted models through LiteLLM's standardized interface, eliminating the need for complex implementation work.
- Reduced maintenance overhead: Benefit from automatic handling of API updates and changes across providers, with consistent error handling and standardized formats.
Enhanced LLM testing capabilities
This integration enhances two core Giskard features that are essential for LLM agent testing:
- LLM Scan is our automated vulnerability assessment tool that combines heuristic-based and LLM-assisted detectors to identify potential issues in your LLM agents.
- RAGET (RAG Evaluation Toolkit) helps you evaluate and improve your RAG systems by generating comprehensive test sets and providing detailed performance analysis.
With LiteLLM integration, both features can now leverage any supported LLM provider for their operations, whether you're running vulnerability assessments with LLM Scan or using RAGET to generate test data and evaluate RAG outputs.
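As a sketch of what this looks like in practice for LLM Scan: the `giskard.llm.set_llm_model`, `giskard.Model`, and `giskard.scan` calls follow Giskard's public API, while the wrapped agent, its description, and the model name are illustrative placeholders. Running it requires an installed `giskard` and a valid API key:

```python
import os
import giskard

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder credential

# Route Giskard's internal LLM calls through any LiteLLM-supported model
giskard.llm.set_llm_model("gpt-4o-mini")

def answer_question(df):
    # Stand-in for a real agent: one answer per row of the input dataframe
    return ["I don't know." for _ in df["question"]]

model = giskard.Model(
    model=answer_question,
    model_type="text_generation",
    name="Support agent",
    description="Answers customer support questions.",  # used by the scan
    feature_names=["question"],
)

# LLM Scan probes the wrapped agent for vulnerabilities and returns a report
report = giskard.scan(model)
```

The same `set_llm_model` configuration applies when RAGET generates test sets, so switching the underlying provider is a one-line change.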
Getting started
To start using Giskard with any LLM provider through the LiteLLM integration, follow these steps:
- Install Giskard:
- Configure your LLM provider. For example, with OpenAI:
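Concretely, the two steps above might look like this. This is a minimal sketch: the model strings are examples, the API keys are placeholders, and `giskard.llm.set_llm_model` is Giskard's hook for choosing the LiteLLM-routed model:

```python
# Step 1: install the library
#   pip install giskard

import os
import giskard

# Step 2: configure your provider. For OpenAI, an API key plus a model
# name is enough; LiteLLM resolves the provider from the model string.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder
giskard.llm.set_llm_model("gpt-4o-mini")

# For another provider, only the credentials and model string change, e.g.:
#   os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
#   giskard.llm.set_llm_model("anthropic/claude-3-haiku-20240307")
```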
Now you can use Giskard's testing capabilities with any supported LLM provider. The setup process is similar for other providers like Azure OpenAI, Anthropic, or Mistral: just use the appropriate environment variables and model names.
Conclusion
The Giskard-LiteLLM integration makes it easier to test your LLM agents across multiple providers. Whether you're evaluating different models, building RAG systems, or ensuring the security of your LLM agents, you can now seamlessly work with any provider with minimal setup and uniform testing interfaces.
For detailed setup instructions and provider-specific configurations, visit our documentation. Join our Discord community if you need help or want to share your experience with the new integration.