Giskard has integrated with Databricks MLflow to enhance LLM testing and deployment. This collaboration allows AI teams to automatically identify vulnerabilities, generate domain-specific tests, and log comprehensive reports directly into MLflow. The integration aims to streamline the development of secure, reliable, and compliant LLM applications, addressing key risks like prompt injection, hallucinations, and unintended data disclosures.
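For context, the pattern behind this kind of integration looks roughly like the sketch below: run Giskard's vulnerability scan on a wrapped model and attach the resulting report to an MLflow run. This is a minimal sketch under assumptions, using Giskard's `scan` API and standard MLflow artifact logging; the `answer_fn` function and the wrapped objects are illustrative placeholders, and the integration described in the post may expose a dedicated MLflow evaluator rather than this manual pattern.

```python
import mlflow
import pandas as pd
import giskard

# Illustrative prediction function: takes a DataFrame of prompts and
# returns one text answer per row (replace with your own LLM call).
def answer_fn(df: pd.DataFrame) -> list[str]:
    return ["This is a placeholder answer." for _ in df["question"]]

# Wrap the model and an evaluation dataset so Giskard can probe them.
model = giskard.Model(
    model=answer_fn,
    model_type="text_generation",
    name="support-assistant",
    description="Assistant answering customer support questions.",
    feature_names=["question"],
)
dataset = giskard.Dataset(pd.DataFrame({"question": ["How do I reset my password?"]}))

# Scan for vulnerabilities (prompt injection, hallucination, data disclosure...)
# and log the HTML report to an MLflow run for later review.
report = giskard.scan(model, dataset)
with mlflow.start_run(run_name="giskard-llm-scan"):
    report.to_html("scan_report.html")
    mlflow.log_artifact("scan_report.html")
```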
Our 2023 retrospective covers people, company, customer, and product news, and offers a glimpse into what's next for 2024. Our team keeps growing, with new offices in Paris, new customers, and new product features. Our GitHub repo has nearly reached 2,500 stars, and we were Product of the Day on Product Hunt. All this and more in our 2023 review.
Explore key insights from Clément Delangue's testimony to the US Congress on Open-Science and Open-Source AI. Understand the role of Open-Source & Open-Science in democratizing AI technology and promoting ethical AI development that benefits all.
AI poses new trust, risk and security management requirements that conventional controls do not address. This Market Guide defines new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions.
In this talk, we explain why testing ML models is an important and difficult problem. Then we show, using concrete examples, how Giskard helps ML Engineers deploy their AI systems into production safely by (1) designing fairness & robustness tests and (2) integrating them in a CI/CD pipeline.
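As a rough illustration of the two ideas in the talk, the sketch below expresses a fairness check (comparable accuracy across subgroups) and a robustness check (predictions invariant to a harmless text perturbation) as plain pytest tests that a CI/CD pipeline can run on every commit. `DummyModel`, `load_model`, and `load_eval_data` are hypothetical stand-ins for your own code, not Giskard's API, which offers richer building blocks for the same checks.

```python
# test_model_quality.py -- example checks a CI job could run with `pytest`.
import pandas as pd

class DummyModel:
    """Stand-in for a real model: predicts 1 when the text mentions 'refund'."""
    def predict(self, df: pd.DataFrame) -> pd.Series:
        return df["text"].str.lower().str.contains("refund").astype(int)

def load_model():
    return DummyModel()  # replace with your own model loading

def load_eval_data() -> pd.DataFrame:
    # Replace with your own labelled evaluation set.
    return pd.DataFrame({
        "text": ["I want a refund", "great service", "refund please", "just saying hi"],
        "label": [1, 0, 1, 0],
        "gender": ["F", "M", "M", "F"],
    })

def accuracy(model, df: pd.DataFrame) -> float:
    return float((model.predict(df) == df["label"]).mean())

def test_fairness_accuracy_gap():
    # Fairness: accuracy should be comparable across subgroups.
    model, df = load_model(), load_eval_data()
    gap = abs(accuracy(model, df[df["gender"] == "F"]) - accuracy(model, df[df["gender"] == "M"]))
    assert gap <= 0.05, f"Accuracy gap between subgroups too large: {gap:.3f}"

def test_robustness_to_case_changes():
    # Robustness: uppercasing the text should not change any prediction.
    model, df = load_model(), load_eval_data()
    perturbed = df.assign(text=df["text"].str.upper())
    changed = float((model.predict(df) != model.predict(perturbed)).mean())
    assert changed == 0.0, f"{changed:.1%} of predictions changed under a harmless perturbation"
```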
Giskard's retrospective of 2022, covering people, company, customers and product news, and a look into what's next for 2023. 2022 was a pivotal year, as we went from 3 to 10 people, raised our first round, expanded our product and grew our customer base. We share a special announcement, and unveil the key features that will come in Giskard 2.0 this year.
The funding led by Elaia, with participation from Bessemer Venture Partners and notable angel investors, will accelerate the development of an enterprise-ready platform to help companies test, audit & ensure the quality of AI models.
Why do great Data Scientists & ML Engineers love writing tests? Two customer case studies on improving model robustness and ensuring AI Ethics.
What are the preferences of ML Engineers in terms of UX? A summary of key learnings, and how we implemented them in Giskard's latest release.
We explain why Giskard changed its value proposition, and how we translated it into a new visual identity
The Open Beta of Giskard's AI Test feature: an automated way to test your ML models and ensure performance, robustness, and ethics
The Giskard team explains the undergoing shift toward AI Quality, and how we launched the first community for AI Quality Innovators
We explain why the Giskard team decided to go Open-Source, how we launched our first version, and what's next for our Community.
The Giskard team wishes you a happy 2022! Here is a summary of what we accomplished in 2021.
Understand why Quality Assurance for AI is the need of the hour. Gain competitive advantage from your technological investments in ML systems.
Monitoring is just a tool: necessary but not sufficient. You need people committed to AI maintenance, processes & tools in case things break down.
Biases in AI / ML algorithms are avoidable. Regulation will push companies to invest in mitigation strategies.
Find out more about the story of Giskard's founders
Technological innovation such as AI / ML comes with risks. Giskard aims to reduce them.
Giskard supports quality standards for AI / ML models. Now is the time to adopt them!
AI used in recommender systems poses a serious problem for the media industry and our society
It is difficult to create interfaces to AI models. Even AIs made by tech giants have bugs. With Giskard AI, we want to make it easy to create interfaces for humans to inspect AI models. 🕵️ Do you think interfaces are valuable? If so, what kinds of interfaces do you like?
The ML Test Score includes verification tests across 4 categories: Features and Data, Model Development, Infrastructure, and Monitoring Tests