Fiddler AI
California
United States
Overview
AI models are a bit like representative democracy. With limited time and, sometimes, expertise, we place our faith in other entities to make decisions on our behalf. In the case of AI, that’s everything from language translation and financial fraud detection to disease diagnosis and the steering of self-driving cars. Where both politicians and AI potentially fall down is the issue of trust. If we no longer believe that decisions are being made fairly, consistently, and accurately, the benefit of having an external decision-maker becomes more liability than asset.
This is where explainable AI enters the picture. As miraculous as machine learning models can seem, they also remain inscrutable "black boxes." While deep neural networks are loosely modeled on the way the human brain works, we can't (or haven't previously been able to) unpack exactly how their artificial neurons reach a final conclusion. Computer scientists can say whether a model works (for example, can it pick every picture of a dog out of a set of miscellaneous animal pictures?), but not how it works (we don't know exactly which features it is singling out when it decides something is a dog).
Understanding this secretive middle bit between inputs (data) and outputs (answers) is important. And it's a problem Fiddler AI is working hard to solve, developing tools that answer the all-important "how" and "why" questions that are so often opaque in AI decision-making. That includes features that let you see how particular regions of your data are affecting a machine learning model's behavior and then make tweaks to minimize or maximize their overall influence.
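Fiddler's own tooling is proprietary, but the basic idea behind this kind of "how" and "why" analysis can be sketched with standard open-source libraries. The example below is a generic illustration, not Fiddler's API: it uses scikit-learn's permutation importance to reveal which input features a trained model is actually leaning on when it makes predictions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an ordinary "black box" classifier on a toy tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data and
# measure how far the model's accuracy drops. A big drop means the model was
# relying heavily on that feature when making its decisions.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, drop in top[:5]:
    print(f"{name}: accuracy drop of {drop:.4f} when shuffled")
```

Explainability platforms build on this same intuition, layering per-prediction attributions, segment-level analysis, and dashboards on top of it.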
It's an area with crucial implications for everything from ethical concerns about fairness and bias to the more bottom-line-oriented issue of quickly alerting engineers when their machine learning models suffer degraded performance. As AI becomes less of a novelty and more an expected part of our lives, companies' ability to satisfy both regulators and customers by understanding, and explaining, each prediction their models make will only grow in importance. Will Fiddler manage to become the dominant player in this field? That much is still unclear. However, it's certainly helping tackle a problem we're only going to hear more about in the years to come.
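The performance-monitoring side mentioned above can be illustrated just as simply. The hypothetical sketch below compares the distribution of a single feature at training time against what the model is seeing in production; a statistically significant shift is a common early warning that predictions may be degrading.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical feature values: what the model saw in training vs. live traffic.
rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_values = rng.normal(loc=0.4, scale=1.2, size=5_000)

# A two-sample Kolmogorov-Smirnov test asks whether the two samples plausibly
# come from the same distribution. A tiny p-value suggests the live data has
# drifted away from the training data, so the model may no longer be reliable.
statistic, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}); alert the engineering team.")
```

Monitoring tools apply checks like this continuously across every feature and output, which is what turns explainability from a one-off audit into an ongoing alerting system.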