Startup Fiddler Labs Inc. said today it’s doing more to help companies ensure their artificial intelligence models are trustworthy and responsible.
The company has announced a big upgrade to its Model Performance Management software, with new capabilities that include model ingestion at giga-scale, natural language processing and computer vision monitoring, and a more intuitive user experience.
Founded in October 2018, Fiddler Labs has created a platform aimed at boosting visibility into AI, helping companies analyze, manage and deploy machine learning models at scale and protect against issues such as bias and model drift.
Bias and model drift are big problems for AI because they cause models to come to inaccurate conclusions that can have an adverse impact on businesses. What’s more, bias is hard to solve because there are multiple causes of it. For example, it can be caused by insufficient training data, where some demographic groups are absent or underrepresented. A second problem is that everyone carries conscious or unconscious biases, and these can find their way into the training data and be captured by models.
Fiddler Labs attempts to remove bias by probing machine learning models at different granularities in order to understand their true behavior. In this way, it provides model explainability, monitoring and bias detection to help companies understand why models come to the conclusions they do.
In a recent interview with theCUBE (below), SiliconANGLE Media’s mobile livestreaming studio, Fiddler Chief Executive Krishna Gade explained that Fiddler is attempting to help companies “operationalize AI” in order to make it more reliable. “Without this visibility, you cannot build trust and actually use it in your business,” Gade said. “With Fiddler, what we provide is we actually open up this black box and we help our customers to really understand how those models work.”
Andy Thurai, vice president and principal analyst at Constellation Research Inc., told SiliconANGLE that Fiddler provides an important service because, in order to perform accurately, AI models must first be fine-tuned.
“However, when you fine tune those models using past data sets, any small drift in newer data set can skew the model and make it not only highly inaccurate but land up producing disastrous results such as bad pricing of goods, deploying wrong personnel to front line,” Thurai said. “Model drift can make AI systems completely worthless. On top of that, it is also hard to monitor models that are based on unstructured data such as audio, video, text, etc. Because of that most of the NLP and computer vision models are hard to monitor and measure for drift and accuracy.”
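The kind of drift Thurai describes is commonly quantified by comparing a feature's production distribution against the distribution the model was trained on. As an illustrative sketch only, not Fiddler's implementation, the widely used population stability index (PSI) can be computed like this:

```python
import numpy as np

def population_stability_index(reference, production, bins=10):
    """Compare a production feature distribution against the training
    (reference) distribution; larger values indicate more drift."""
    # Bin both samples using edges derived from the reference data.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)

    # Convert counts to proportions; epsilon avoids division by or log of zero.
    eps = 1e-6
    ref_pct = ref_counts / ref_counts.sum() + eps
    prod_pct = prod_counts / prod_counts.sum() + eps

    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# Hypothetical example: live data has shifted half a standard deviation.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # distribution the model was trained on
live = rng.normal(0.5, 1.0, 10_000)   # shifted distribution seen in production
print(population_stability_index(train, live))
```

A common rule of thumb flags PSI values above roughly 0.2 as significant drift worth investigating; the shifted sample above lands well past that threshold.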
Thurai said that besides drift, another big problem many enterprises struggle with is ethical AI and bias mitigation. Unfortunately, he said, deep-rooted biases, if they go undetected, can create biased models that produce untrustworthy and unethical results.
“Measuring fairness metrics to evaluate, detect, mitigate bias in both training and production data sets and retrain models to be accurate and fair is fundamental for a successful AI program,” Thurai added. “Having non-trustworthy AI will hurt not help any enterprise in the long run.”
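One of the simplest fairness metrics Thurai alludes to is demographic parity: the gap in positive-prediction rates between demographic groups. The sketch below, with hypothetical loan-approval data and function names of my own choosing rather than anything from Fiddler's platform, shows the basic calculation:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates across groups;
    0.0 means the model favors no group over another."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical binary loan-approval predictions for two demographic groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 vs. 0.25 -> 0.5
```

Tracking a metric like this on both training and production data, as the quote suggests, lets a team detect when a model starts favoring one group and decide whether retraining is needed.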
Fiddler Labs said today’s updates to the Fiddler MPM platform provide an even deeper understanding of unstructured model behavior and performance, enabling users to discover rarer and more nuanced forms of model drift.
For example, the new NLP and computer vision monitoring capabilities help companies gain deeper insights into more complex models that are trained on unstructured data such as text, images and embeddings. That will allow medical practitioners to achieve greater accuracy when using AI to recognize patterns that could signify illnesses, the company said. Moreover, manufacturers will be alerted when a “defect detection” model changes its behavior — something that could lead to manufacturing defects not being recognized.
Fiddler said the other focus of today’s update is on addressing class imbalance. The platform can help organizations discover highly nuanced model drifts with regard to minority segments, while surfacing “fraud-like use cases” in finance, retail, gaming, manufacturing and education.
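Why class imbalance matters for monitoring: when a segment makes up only a sliver of traffic, a sharp shift in its predictions barely moves the aggregate numbers. This illustrative sketch, a simplified stand-in for whatever Fiddler does internally, compares per-segment prediction shifts against the overall average:

```python
import numpy as np

def per_segment_shift(baseline_pred, live_pred, baseline_seg, live_seg):
    """Mean prediction shift per segment; aggregate drift can look flat
    while a small minority segment moves sharply."""
    return {
        seg: float(live_pred[live_seg == seg].mean()
                   - baseline_pred[baseline_seg == seg].mean())
        for seg in np.unique(baseline_seg)
    }

# Hypothetical fraud scores: 95% majority traffic is unchanged, while the
# 5% minority segment's scores jump from 0.10 to 0.60.
rng = np.random.default_rng(1)
base_seg = np.where(rng.random(10_000) < 0.95, "majority", "minority")
live_seg = np.where(rng.random(10_000) < 0.95, "majority", "minority")
base_pred = np.full(10_000, 0.10) + rng.normal(0, 0.01, 10_000)
live_pred = np.where(live_seg == "majority", 0.10, 0.60) + rng.normal(0, 0.01, 10_000)

print(per_segment_shift(base_pred, live_pred, base_seg, live_seg))
```

Here the overall mean score moves by only about 0.025, which would sail under a global alert threshold, while the minority segment has shifted by roughly 0.5 — exactly the kind of nuanced drift the update is aimed at surfacing.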
For instance, it can help models get better at recognizing fraudulent transactions with subtle variations that might otherwise cost a gaming company millions of dollars in lost revenue. It can also help companies protect their advertising efforts by detecting higher-than-usual ad click rates, which could signify malicious behavior.
To keep things simple for users, Fiddler said it has revamped its user interface, creating a kind of command center for machine learning operations teams, with visibility into the behavior of each of the AI models being tracked and monitored. Teams now have a single pane of glass to view, prioritize and manage updates, alerts, traffic and drifts, the company said.
Gade said that AI operations can only be successful when companies know their models are resilient in response to shifts in data and not unduly discriminating against certain minority groups. “The ability to understand and explain unstructured data and discover rare but costly model drifts is game changing, and opens up tremendous AI opportunities across a plethora of use cases and a diverse set of industries,” Gade said.