It is also about the need to reduce risks. ⛑
The last ten years have seen explosive growth of AI everywhere. We rely on AI for critical parts of our lives: managing our finances, social interactions, and health, even driving our cars.
But no technological innovation, even AI, comes without a dark side. 🌑
Two years ago, Partnership on AI, a coalition of independent researchers and citizens, started documenting incidents caused by faulty AI models.
This AI Incident Database now contains over 1,200 reports. It is collaborative, searchable, and open-source, and it covers many types of incidents: ethical, technical, environmental, and more. 🪲
You will not be surprised to learn that most reports concern AI models made by the GAFAM. These companies have the most advanced AI deployments and are the most exposed to the public eye.
If these companies, with their large teams of ML engineers, are still exposed to such risks, what about the rest of us?