Many experts agree that AI will have the most significant impact on manufacturing. According to McKinsey research, AI can create $1.2 trillion to $2 trillion of value in supply chain and manufacturing. Manufacturing processes generate enormous amounts of data, involve repetitive tasks, and present multi-dimensional problems beyond the scope of many conventional tools. The industry is also projected to face a workforce shortage as skilled employees approach retirement. AI and automation are key technologies that can address this gap while increasing operational efficiency, improving quality, and boosting productivity. However, AI has yet to gain significant momentum and reshape manufacturing. Manufacturing executives and plant leaders must overcome several challenges before AI-led digital transformation moves from a select few adopters to the broad market at scale:
Legacy Infrastructure - Production sites typically have a wide variety of machines, tools, and systems that use disparate and often competing technologies. For example, discrete…
In the first part of this blog, Basic Concepts and Techniques of AI Model Transparency, we reviewed a few common techniques for AI model transparency, such as linear coefficients, local linear approximation, and permutation importance. In particular, permutation importance is applicable to any black-box model and any accuracy/error function, and it is more robust against high-dimensional data because it handles features one at a time rather than all at once. One drawback of permutation importance is its high computation cost: the evaluation process must be repeated (number of features) × (number of random shuffles) × (number of models) times. To reduce the computation time, a common approach is downsampling, which works well when the positive and negative classes are balanced. However, such naive downsampling makes permutation importance extremely unreliable.
Permutation Importance Under Class Imbalance
Let us first see…
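To make the mechanics and the cost concrete, here is a minimal sketch of permutation importance using scikit-learn's permutation_importance. The synthetic imbalanced data, the random-forest model, and the choice of an imbalance-aware metric (average precision) are illustrative assumptions, not code from this blog.

```python
# Minimal sketch of permutation importance under class imbalance.
# Data, model, and parameter values are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy imbalanced binary-classification data (placeholder for real data)
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Cost grows as (features) x (repeats) x (models): each feature is shuffled
# n_repeats times and the model is re-scored after every shuffle.
result = permutation_importance(model, X_val, y_val,
                                scoring="average_precision",  # imbalance-aware metric
                                n_repeats=10, random_state=0)

# Report the most influential features with their variability across shuffles
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} ± {result.importances_std[i]:.4f}")
```

Note that the validation set here is kept intact rather than downsampled; as the excerpt points out, naively downsampling to cut this cost can make the resulting importances unreliable when classes are imbalanced.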
Transparency of AI/ML models is a topic as old as AI/ML itself. However, transparency has become increasingly important due to the proliferation of enterprise AI applications, critical breakthroughs in black-box ML modeling (e.g., Deep Learning), and growing concerns about the amount of personal data being used in AI models. The word "transparency" is used in different contexts, but generally refers to issues like:
- Interpretability and Explainability
- Reproducibility and Traceability
- Ethics, Trust, and Fairness
This blog focuses on the most "basic" level of transparency: how to explain the impact of input variables (a.k.a. features) on the final prediction. There are many techniques to evaluate the impact of input features. Below are some common techniques with their advantages and disadvantages.
Linear Coefficients for Linear Models
The simplest (but important) technique is the linear coefficients (weights) of the features. Fig. 1 illustrates the idea of linear coefficients based on a simple two-dimensional example (x1 and x2 are…
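As a concrete illustration of reading feature impact from linear coefficients, here is a minimal sketch using scikit-learn's LogisticRegression on synthetic two-feature data. The data, the feature names x1 and x2, and the standardization step are assumptions for illustration and are not taken from the blog or its Fig. 1.

```python
# Minimal sketch: interpreting a linear model via its coefficients.
# Data generation and feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # two features: x1, x2
# Labels driven mostly by x1, weakly (and negatively) by x2, plus noise
y = (1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Standardize first so coefficient magnitudes are comparable across features
pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_[0]

for name, w in zip(["x1", "x2"], coefs):
    print(f"{name}: weight = {w:+.3f}")  # sign = direction, magnitude = strength
```

The sign of each weight tells you the direction of the feature's influence on the prediction, and (after standardization) the magnitude gives a rough ranking of influence, which is exactly the appeal of this simplest transparency technique.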
What if you could tell that an essential robotic process in your manufacturing line was about to break down? What if your finance department could provide you with a list of customers most likely to default on their payments? What if your marketing department could rank the planned campaigns in their budget by likelihood of success? The answers to these, and countless other questions, are at the heart of predictive analytics. As the world of Business Intelligence (BI) continues to evolve, describing "what happened" through dashboards and reports is no longer sufficient. To provide genuine value, modern BI professionals must frequently deliver dashboards and reports that help line-of-business users make better, smarter decisions faster. Of course, the challenge is that most BI systems do not have predictive analytics capabilities "out of the box," and purpose-built predictive systems are often too limiting or designed for use cases that are…