April 20, 2023
5 min read
Blanca Rivera Campos

Exclusive Interview: How to eliminate risks of AI incidents in production

During this exclusive interview for BFM Business, Alex Combessie, our CEO and co-founder, spoke about the potential risks of AI for companies and society. As new AI technologies like ChatGPT emerge, concerns about the dangers of untested models have increased. Alex stresses the importance of Responsible AI, which involves identifying ethical biases and preventing errors. He also discusses the future of EU regulations and their potential impact on businesses.
Giskard interview for BFM Business' FocusPME

🔬 Managing AI Risks in MLOps: Insights from CEO Alex Combessie

Full video

Pro-tip: you can follow the video in your language using YouTube's feature to auto-translate closed captions.

Ensuring quality in AI models and mitigating risks with the Giskard MLOps platform

🇺🇸🇬🇧 English transcript

VT: Vincent Touraine, journalist at BFM Business TV, host of the FocusPME show.

AC: Alex Combessie, co-founder and CEO of Giskard.

VT: Ensuring the quality of Artificial Intelligence models by eliminating biases that can penalize organizations, that’s the mission of Alex Combessie, hello!

AC: Hello

VT: You are the CEO and co-founder of Giskard, G-I-S-K-A-R-D

AC: Exactly

VT: Thank you for being with us at Focus PME. So, we’re going to talk about Giskard, your company, your startup. First, where did the idea come from?

AC: So, the idea for Giskard came first from our own experience. We're three co-founders, and for 10 years we worked as engineers and data scientists developing systems based on Artificial Intelligence for large enterprises. Around three years ago we realized that AI was accelerating rapidly, but that there was a real need for AI that is responsible and regulated. Because AI is now everywhere, and it can pose real problems and risks for companies.

VT: We’re seeing this right now with ChatGPT, absolutely

AC: Exactly

VT: It’s just the tip of the iceberg

VT: What does Giskard propose as a solution exactly?

AC: What we have is software to ensure the quality of algorithms: specifically, to measure and mitigate ethical problems, trust issues, and false information in algorithms, and to prevent these issues from reaching algorithms used on the general public.

VT: Internally do you call yourselves Giskardians?

AC: We’re Giskardians, exactly

VT: What are the risks of Artificial Intelligence?

AC: So, there are multiple risks. First, risks to society and ethical risks: AI used in public services or financial services can create or reinforce discrimination against certain groups, which can expose companies to considerable reputational damage. Second, the trustworthiness of models. We sometimes talk about the robustness of algorithms, or more simply, algorithms that make mistakes. And algorithms that make errors, algorithms that are no more perfect than humans, can cause big problems when you think of Artificial Intelligence in industrial applications or in the medical industry. The economic implications can be considerable.
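
To make this concrete for technical readers, below is a minimal sketch of the kind of ethical-bias check Alex describes: a demographic-parity test on the predictions of a hypothetical credit-approval model. It is illustrative only, not Giskard's product code; the column names, data, and the 0.2 tolerance are assumptions for the example.

```python
import pandas as pd

# Hypothetical output of a credit-approval model; the column names,
# data, and tolerance below are assumptions for this illustration.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [1,   0,   1,   1,   0,   1,   1,   1],
})

# Demographic parity: approval rates should be similar across groups.
rates = df.groupby("gender")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Approval-rate gap between groups: {gap:.2f}")

if gap > 0.2:  # tolerance chosen arbitrarily for the example
    print("Warning: approval rates diverge across groups, review for discrimination")
```

In practice, the protected attributes, fairness metric, and tolerance would be chosen per use case and per the applicable regulation.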

VT: So, what is the result of using Giskard?

AC: We help companies test their models before they go into production. Testing is something we know; it's standard practice in every industry, and we help apply it to Artificial Intelligence. The result for businesses is reduced risk, because there are real regulatory issues: soon, in Europe, noncompliance will expose companies to fines of up to 6% of their global revenue.
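
To illustrate what testing a model before production can look like, here is a minimal pytest-style sketch. Everything in it is a hypothetical stand-in, not Giskard's actual test suite: `predict` plays the role of the model under test, and the data and thresholds are assumptions. It checks a minimum accuracy bar and a simple robustness (metamorphic) property.

```python
# Illustrative pre-production tests for a hypothetical text classifier,
# runnable with pytest. `predict` is a toy stand-in for the model under
# test; in a real project it would wrap the trained model.

def predict(text: str) -> str:
    """Toy stand-in: labels sentiment by keyword matching."""
    lowered = text.lower()
    return "positive" if "great" in lowered or "good" in lowered else "negative"

def test_minimum_accuracy():
    # Gate deployment on a minimum accuracy over a held-out sample.
    samples = [
        ("This product is great", "positive"),
        ("Really good service", "positive"),
        ("Terrible experience", "negative"),
        ("I want a refund", "negative"),
    ]
    accuracy = sum(predict(x) == y for x, y in samples) / len(samples)
    assert accuracy >= 0.9, f"Accuracy {accuracy:.0%} is below the release threshold"

def test_robustness_to_perturbations():
    # Metamorphic test: harmless changes (casing, extra whitespace)
    # should not flip the prediction.
    assert predict("This product is great") == predict("  This product is GREAT  ")
```

Robustness tests like the second one need no extra labels: they only assert that the model's output is invariant under perturbations that should not matter.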

VT: There’s not much talk about this, but yes, 6%

AC: We don’t talk about it enough, but a company with 10 billion in revenue would face 600 million euros in potential fines and damages, risks that we help to avoid.

VT: Let's talk about your development. You're a young company, a startup created in 2021, based in Paris, with 12 employees. What are you working on right now to prepare for the future?

AC: Our goal is to be able to test all types of Artificial Intelligence. So, exactly as you mentioned with ChatGPT and other conversational models: how to ensure their trustworthiness and avoid errors and biases is something we're currently working on.

VT: And we’ll certainly follow Giskard, it’s a name you won't forget. Thank you very much Alex Combessie for being with us at Focus PME, CEO and co-founder of Giskard. Goodbye

AC: Thank you, goodbye
