Explainable AI: Moving Beyond Intuition

November 12, 2020

Why companies struggle with AI adoption, and how to change that.


The Challenge of Explainability

The rapid growth and adoption of Artificial Intelligence (AI) and Machine Learning (ML) within the enterprise is well known. In fact, according to a 2020 report by O’Reilly Group, 85% of surveyed companies are evaluating or using AI in production. The harsh truth, however, is that adopting or evaluating AI and actually benefiting from it are two very different challenges. A 2018 KPMG report found that only 35% of surveyed executives had a high level of trust in how their companies used data, analytics, or AI. A late 2020 report by Wakefield Research went even further, finding that nearly 75% of CEOs still make decisions based primarily on gut instinct.

While there are numerous reasons why organizations and executives struggle to use data for decision-making, in the world of AI a significant hurdle comes down to one problem: explainability. As ML algorithms have become more complex and sophisticated, the mathematical constructs that underlie them have become increasingly difficult – if not impossible – to explain. Most data scientists can explain the logic behind their AI/ML models, but the further the information travels from the data scientist – and the closer it gets to a business user – the more the lack of transparency is amplified. Put in simpler terms: business users don’t have faith in what they don’t understand.

Start With Clear Business Goals

Before delving into AI’s explainability, it’s essential to take a step back and understand the problem from a business perspective. Too often, the adoption of new technology in the enterprise begins as a “technology” initiative rather than a business one. As was the case with the early adoption of decision-support systems, AI is often just an “interesting” technology – a solution in search of a problem. In these situations, lack of explainability becomes an even more significant challenge.

If the goal is to lower the barriers to adoption and acceptance of AI, businesses must begin with a top-down approach that outlines clear, measurable, quantifiable problems for AI to solve. Maybe it’s identifying customers likely to churn or improving the accuracy of the monthly sales forecast. Whatever the case may be, businesses must have straightforward ways of measuring not only whether the AI solution is doing its job (identifying churning customers) but whether it’s doing a better job than was being done without AI – and by how much. Only with clear, quantifiable goals can enterprises fully leverage – and benefit from – their investments in AI.
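To make “by how much” concrete, the comparison against the pre-AI status quo can be expressed in a few lines of code. The sketch below is a rough illustration using generic scikit-learn – not any particular platform – with synthetic data and a naive majority-class baseline standing in for the existing, non-AI process.

```python
# Hedged sketch: quantify how much better a churn model does than a non-AI baseline.
# Synthetic data and model choices are illustrative assumptions, not a real pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The baseline stands in for "what we did before AI": always predict the majority class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

baseline_acc = accuracy_score(y_test, baseline.predict(X_test))
model_acc = accuracy_score(y_test, model.predict(X_test))
print(f"baseline: {baseline_acc:.3f}  model: {model_acc:.3f}  lift: {model_acc - baseline_acc:+.3f}")
```

The point of the exercise is the last number: a measured lift over the status quo is something a business stakeholder can weigh, while a standalone accuracy figure is not.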

Strike a Balance of Accuracy vs. Explainability

No matter the reason for and ultimate goal of AI adoption, business leaders must also understand that there are inherent limitations to any technology – and tradeoffs that must be accepted. In the world of AI, explainability (or lack thereof) is one such tradeoff. That’s because algorithms that tend to be highly accurate (like neural networks) are often so mathematically complex that they are nearly impossible to interpret. In these situations, it’s crucial for the business to evaluate multiple types of alternative algorithms – in parallel – to weigh an easily explainable model against the loss in accuracy.

For example, if your new churn model can predict churn with 98% accuracy but will be ignored due to its lack of transparency, what’s the benefit? If a slightly less accurate model can predict churn with 96% accuracy but with far greater clarity, the potential benefit outweighs the loss in accuracy. Once again, measurement and comparison become critical. Too often, AI teams get lost in chasing “maximum accuracy” and fail to consider the day-to-day needs of business users and the ultimate goal of building models: to make life easier for business users.
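As a rough illustration of how that tradeoff can be measured rather than assumed, the sketch below – generic scikit-learn, not any specific AutoML product – scores an easily explained model and a more complex one on the same synthetic data, so the accuracy gap becomes an actual number instead of a guess.

```python
# Hedged sketch: put an easily explained model and a harder-to-explain one side by side,
# and let cross-validated accuracy show what is actually traded for explainability.
# Synthetic data stands in for real churn records; the model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

candidates = {
    "logistic regression (white box)": LogisticRegression(max_iter=1000),
    "gradient boosting (harder to explain)": GradientBoostingClassifier(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

If the gap turns out to be a point or two, the transparent model is usually the better business choice; if it is much larger, that is a conversation the business can now have with real numbers.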

AutoML 2.0: The Path to Explainable AI

Often, misaligned expectations between technical and business teams lead to problems. AI and analytics teams typically focus on model accuracy, whereas business teams place high importance on metrics such as business insights, financial benefit, and the interpretability of the models produced. This misalignment results in AI project failures because the two groups are measuring completely different things.

Traditional data science initiatives also tend to use black-box models that are hard to interpret, lack accountability, and are challenging to scale. ML platforms and data scientists who take the black-box approach create complex features based on non-linear mathematical transformations, but these features cannot be logically explained. Incorporating them leads to a lack of trust and resistance from business stakeholders and, ultimately, project failure. White-box models (WBMs), by contrast, provide clear explanations of how they behave, how they produce predictions, and which variables influenced the model. WBMs offer a more transparent view of the modeling process’s inner workings and easily interpretable behavior, making them the preferred choice in enterprise use cases. In heavily regulated industries such as financial services, insurance, and healthcare, feature explainability is critical. AutoML 2.0 platforms and WBMs empower enterprise model developers, model consumers, and business teams to execute complex data science projects with confidence.
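For a sense of what “white box” looks like in practice, the sketch below fits a shallow decision tree – a classic stand-in for a WBM, with hypothetical feature names on synthetic data – and prints both the rules it learned and the variables that influenced it. It is illustrative only, not how any particular AutoML 2.0 product works.

```python
# Hedged sketch of white-box explainability: a shallow decision tree whose learned
# rules and feature importances can be read directly by a non-technical stakeholder.
# Synthetic data and the feature names are hypothetical, for illustration only.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure", "monthly_spend", "support_calls", "late_payments"]  # hypothetical

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Human-readable if/else rules, one line per split
print(export_text(tree, feature_names=feature_names))

# Which variables influenced the model most
for name, importance in zip(feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

The output reads like a short list of business rules – exactly the kind of artifact a stakeholder can question, validate, or sign off on.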

The challenge of building “explainable AI” models often comes down to the platform in use. Too often, platforms have the primary goal of making the developer’s life easier. While that is an important objective, it’s also essential to look for systems that make it easier for developers to explain their work to those who will ultimately benefit the most: the end-users. AutoML 2.0 platforms like dotData Enterprise are designed to provide precise and easily interpretable explanations of the rules used in any given ML model. That clarity offers business users a more straightforward path to understanding how an AI algorithm works and the relationships between the data elements that allowed the algorithm to arrive at a specific prediction. AutoML 2.0 platforms are also ideal for evaluating multiple ML algorithms simultaneously, providing various options to assess the balance between explainability and accuracy. This combination of power and explainability gives both business users and developers the ideal compromise to maximize the return on investment of an AI platform.

Walter Paliska

Walter brings 25+ years of experience in enterprise marketing to dotData. Walter oversees the Marketing organization and is responsible for product marketing and demand generation for dotData. Walter’s background includes experience with both software and hardware companies, and he has worked in seven different startups, including three successful bootstrap startups.