Data Science Operationalization: What the heck is it?

April 15, 2020

Data Science Operationalization Defined

Data science operationalization, in concept, is simple enough: take Machine Learning (ML) or Artificial Intelligence (AI) models and move them into production (or operational) environments. In the words of Gartner Sr. Analyst Peter Krensky, data science operationalization is the “…application and maintenance of predictive and prescriptive models…” In practice, however, operationalizing ML and AI models can be a complicated and often overwhelming challenge. More broadly, one of the biggest challenges of operationalization is that AI and ML models must be integrated with systems that contain live data that changes quickly. For example, if your model is designed to predict customer churn, it must be integrated with your CRM system so that it continues to predict churn effectively as your data volumes grow.

What makes data science operationalization so hard?

There are five critical aspects of data science operationalization that make it challenging to implement.

Code Quality

The first is the quality of the code. Because data scientists use tools like Python and R to develop models, the code is often not of “production quality.” Moving the code to production means that a fair amount of rework has to take place to re-code the models in SQL that is native to the production database.
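As a rough illustration of that rework, the sketch below fits a simple logistic-regression churn model in Python and then hand-translates its scoring formula into database-native SQL. The training data, table, and column names are hypothetical, not a prescribed design.

```python
# Hypothetical sketch: a scikit-learn logistic regression trained in Python,
# then re-expressed as SQL so the production database can score rows natively.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [tenure_months, monthly_spend]; 1 = churned.
X = np.array([[2, 120.0], [48, 35.0], [6, 90.0], [36, 40.0]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
(b1, b2), b0 = model.coef_[0], model.intercept_[0]

# The same sigmoid scoring logic, re-coded as production-database SQL
# (the customers table and its columns are assumptions for illustration):
sql = f"""
SELECT customer_id,
       1.0 / (1.0 + EXP(-({b0} + {b1} * tenure_months
                                + {b2} * monthly_spend))) AS churn_score
FROM customers;
"""
print(sql)
```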

Integration Viability

The second challenge is integration. Connecting data and scoring pipelines to the multitude of systems typically involved in data science projects requires work that is both time-consuming and highly technical.
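One common integration pattern is to wrap the model in a small HTTP scoring service that surrounding systems, such as a CRM, can call. The sketch below assumes a pre-trained model saved as churn_model.joblib; the route and field names are illustrative, not a prescribed interface.

```python
# Hypothetical sketch: a minimal Flask scoring service other systems can call.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # assumed training artifact

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()
    features = [[payload["tenure_months"], payload["monthly_spend"]]]
    churn_proba = model.predict_proba(features)[0][1]  # probability of churn
    return jsonify({"churn_score": round(float(churn_proba), 4)})

if __name__ == "__main__":
    app.run(port=8080)
```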

Model Monitoring & Maintenance

Even when models are appropriately integrated, they must be maintained. Prediction accuracy must be continuously monitored, and models need to be adjusted over time as the underlying data changes. In practice, this means retraining models regularly, which is time-consuming and expensive.
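A minimal sketch of that monitoring loop, assuming a baseline AUC recorded at deployment time and a hypothetical tolerance for degradation, might look like this:

```python
# Hypothetical sketch: flag a model for retraining when live accuracy drifts.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.82  # accuracy measured at deployment time (assumed)
TOLERANCE = 0.05     # acceptable degradation before retraining (assumed)

def needs_retraining(model, X_recent, y_recent) -> bool:
    """Score the latest labeled data and check for accuracy drift."""
    live_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    return live_auc < BASELINE_AUC - TOLERANCE
```

When needs_retraining returns True, the pipeline would kick off the (expensive) retraining job described above.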

Scalability

Data science models are often developed on a small subset of the full available data set. In a churn model, for example, the model might be built on less than 40% of the available data, but in production it needs to scale to process 100% of available customer data to predict churn. Another aspect of scalability is the ability of the infrastructure to scale up and down depending on the processing power required. Many organizations underestimate the compute power needed and run into trouble when ML models break or fail under load.
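One way to bridge the gap between a development sample and full production volume is to score the data in bounded chunks, so memory use stays flat as the customer base grows. The sketch below assumes a pandas batch job; the query, columns, and chunk size are hypothetical.

```python
# Hypothetical sketch: stream 100% of customers through the model in chunks.
import pandas as pd

FEATURES = ["tenure_months", "monthly_spend"]

def score_all_customers(model, conn, chunksize=100_000):
    """Score the full customer table chunk by chunk to bound memory use."""
    query = "SELECT customer_id, tenure_months, monthly_spend FROM customers"
    for chunk in pd.read_sql(query, conn, chunksize=chunksize):
        chunk["churn_score"] = model.predict_proba(chunk[FEATURES])[:, 1]
        yield chunk[["customer_id", "churn_score"]]
```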

Portability

In most organizations, the data science team uses software tools and configurations that are markedly different from those in production environments. Taking models developed by data scientists and operationalizing them therefore entails porting the code to platforms and systems that were never considered during model development.
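A lightweight first step toward portability is to capture the exact library versions alongside the model artifact, so the production platform can reproduce the development environment. The file names below are hypothetical; container images (discussed below) solve the same problem more completely.

```python
# Hypothetical sketch: save the model plus a manifest of the versions it needs.
import json
import platform

import joblib
import sklearn

def export_model(model, path_prefix="churn_model"):
    """Persist the model and record the environment it was trained in."""
    joblib.dump(model, f"{path_prefix}.joblib")
    manifest = {
        "python": platform.python_version(),
        "scikit-learn": sklearn.__version__,
        "joblib": joblib.__version__,
    }
    with open(f"{path_prefix}.manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```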

Making Data Science Operationalization More Palatable

The answer to the many challenges of operationalizing AI and ML models is automation. Through API-based integration, AutoML platforms can accelerate AI and ML model development and alleviate the operationalization headaches associated with moving models into production. Standardizing deployment on container technology such as Docker addresses the compatibility and porting challenges.
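From the consuming side, API-based integration can be as simple as the hypothetical call below against the scoring service sketched earlier; the URL and fields are assumptions for illustration.

```python
# Hypothetical sketch: a CRM (or any system) requesting a score over HTTP.
import requests

resp = requests.post(
    "http://localhost:8080/score",
    json={"tenure_months": 6, "monthly_spend": 90.0},
    timeout=5,
)
print(resp.json())  # e.g. {"churn_score": 0.87}
```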

Want to learn more? Download our complimentary white paper on data science operationalization and learn how you can take the headaches out of your data science process today.

Walter Paliska

Walter brings 25+ years of experience in enterprise marketing to dotData. Walter oversees the Marketing organization and is responsible for product marketing and demand generation for dotData. Walter’s background includes experience with both software and hardware companies, and he has worked in seven different startups, including three successful bootstrap startups.

