Preventing Data Leakage in Feature Engineering: Strategies and Solutions

April 26, 2023

Data leakage is a widespread and critical issue that can undermine the reliability of features. In this blog, we will delve into the concept of data leakage, examine how it can transpire during feature engineering, and present various strategies to prevent or mitigate its consequences.

Understanding Data Leakage 

Data leakage occurs when the feature engineering process unintentionally uses information from the target variable or from the validation/test set. This can lead to overly optimistic performance metrics, as the model appears to perform exceptionally well on the validation/test set. However, when the model is deployed in real-world applications, its performance is often significantly worse than anticipated.

Data Leakage in Feature Engineering

Feature engineering is the process of creating new features or transforming existing ones, and it can frequently be a source of data leakage if not managed carefully.

Statistical Value Leakage

Statistical value leakage arises when you create or transform features before dividing the dataset into training, validation, and test sets. This can result in the engineered features containing information from the validation/test set, causing data leakage and overfitting. Here are a few examples to demonstrate this issue:

Example 1: Scaling Data

Suppose you have a dataset containing features with varying scales, and you decide to standardize the data by subtracting the mean and dividing by the standard deviation of each feature. If you perform this operation before splitting the data, the mean and standard deviation used for scaling will be computed over the entire dataset, including the validation/test set. Consequently, the scaled training data (and any model fit on it) will carry information about the validation/test set through those scaling statistics, leading to data leakage.
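As a minimal sketch of the leak-free order of operations with scikit-learn (the toy data below is purely illustrative): split first, fit the scaler on the training split only, and reuse its statistics on the test split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Toy data standing in for features with very different scales
rng = np.random.default_rng(0)
X = rng.normal(loc=[0.0, 100.0, 5000.0], scale=[1.0, 20.0, 800.0], size=(200, 3))
y = rng.integers(0, 2, size=200)

# Split FIRST, then fit the scaler on the training portion only
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # mean/std learned from training data only
X_test_scaled = scaler.transform(X_test)        # reuse training statistics; never refit on test data
```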

Example 2: One-Hot Encoding

Imagine you have a dataset with a categorical feature that needs to be one-hot encoded for a machine learning algorithm. If you one-hot encode the entire dataset before splitting, you risk introducing data leakage, as the fitted encoding (the set of category levels it knows about) is derived partly from the validation/test set.
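A minimal sketch of the safer pattern, assuming a toy pandas DataFrame with a single categorical column: the encoder is fitted on the training split only, and handle_unknown="ignore" covers categories that appear only outside it.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "plan":    ["basic", "pro", "basic", "enterprise", "pro", "basic"],
    "churned": [0, 1, 0, 1, 0, 1],
})

train_df, test_df = train_test_split(df, test_size=0.33, random_state=42)

# Fit the encoder on the training split only; categories unseen during training
# are encoded as all-zero rows instead of leaking into the fitted vocabulary.
encoder = OneHotEncoder(handle_unknown="ignore")
train_encoded = encoder.fit_transform(train_df[["plan"]])
test_encoded = encoder.transform(test_df[["plan"]])
```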

Statistical value leakage also occurs when the dataset is ordered not randomly but according to specific information, such as the target variable. In this case, the sample ID exhibits a strong artificial correlation with the target variable, making it an ostensibly powerful yet practically meaningless feature.

Temporal Leakage

Temporal leakage transpires when you create or modify features for time-series data using future information that would not be available when making real-time predictions. Here are a few examples to illustrate temporal leakage in feature engineering:

Example 3: Rolling Window Features

Suppose you want to predict stock prices based on historical data and decide to create a feature representing the rolling average of the last five days’ closing prices. If you calculate this feature using future values (e.g., for a given day, you calculate the average using the next five days’ closing prices), you introduce temporal leakage. The model will learn from information that would not be available when making real-time predictions.
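A minimal sketch with pandas (the prices are illustrative): shifting by one day before taking the rolling mean guarantees that the feature for day t uses only the closes of days t-5 through t-1.

```python
import pandas as pd

prices = pd.DataFrame(
    {"close": [101.0, 102.5, 101.8, 103.2, 104.0, 103.5, 105.1, 104.8]},
    index=pd.date_range("2023-01-02", periods=8, freq="B"),
)

# Rolling 5-day average of *past* closes: shift(1) excludes the current day,
# so nothing from day t onward can enter the feature for day t.
prices["close_ma5_past"] = prices["close"].shift(1).rolling(window=5).mean()
```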

Example 4: Predicting Sales

Imagine you want to predict the sales of a product for the next month based on historical sales data. You decide to create a feature that measures the average sales in the last three months. If you include the sales data from the future (i.e., the month you’re trying to predict) while calculating this feature, you introduce temporal leakage. The model will learn patterns based on information that would not be accessible when predicting future sales.
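A hedged sketch of the same idea at a monthly grain, assuming a small sales table with product, month, and units columns (all names and numbers are illustrative):

```python
import pandas as pd

sales = pd.DataFrame({
    "product": ["A"] * 6 + ["B"] * 6,
    "month": list(pd.date_range("2023-01-01", periods=6, freq="MS")) * 2,
    "units": [120, 135, 150, 160, 155, 170, 80, 95, 90, 110, 105, 115],
})

# Trailing three-month average per product; shift(1) keeps the month being
# predicted out of its own feature window.
sales["avg_units_prev_3m"] = (
    sales.sort_values("month")
         .groupby("product")["units"]
         .transform(lambda s: s.shift(1).rolling(window=3).mean())
)
```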

Context Leakage

Context leakage arises when you create or modify features using data that is highly correlated with your target but holds no practical significance from an application perspective. Here’s an example to illustrate context leakage in feature engineering:

Example 5: E-commerce Customer Behavior

Suppose you want to identify features that are useful for predicting whether a user will purchase Product-A. You create features corresponding to whether a user views a certain page within the 30 seconds before making the purchase. Naturally, viewing the Product-A page is highly correlated with your target (a Product-A purchase). However, this feature has no practical meaning from an application perspective: a user who is about to purchase Product-A will almost inevitably view the Product-A page, so the feature merely mirrors the purchase process rather than helping to predict it.

Preventing or Mitigating Data Leakage in Feature Engineering

Apply Context-aware Data Splitting and Cross-Validation

One example of context-aware data splitting is time-based cross-validation, also known as temporal cross-validation or rolling window cross-validation. It is a technique used to evaluate the performance of time-series models while preserving the temporal order of the data. It helps prevent data leakage issues by ensuring that the model is only trained on past data and evaluated on future data, reflecting a more realistic scenario for time-series predictions.

Time-based cross-validation divides the dataset into multiple non-overlapping time periods or windows. The model is trained on the initial window and validated on the next window. The process is then repeated by rolling the windows forward in time, with the model being retrained on the combined data from the previous windows and validated on the next window. This continues until all windows have been used for validation.
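scikit-learn's TimeSeriesSplit implements an expanding-window version of this scheme; the sketch below is illustrative (synthetic data and an arbitrary Ridge model) rather than a recipe.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit

# Toy time-ordered data; rows must already be sorted by time.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([0.5, -1.0, 2.0, 0.1]) + rng.normal(scale=0.1, size=100)

tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
    # Each fold trains on an expanding window of past rows and
    # validates on the block of rows that follows it in time.
    model = Ridge().fit(X[train_idx], y[train_idx])
    mse = mean_squared_error(y[val_idx], model.predict(X[val_idx]))
    print(f"fold {fold}: train={len(train_idx)} rows, val={len(val_idx)} rows, MSE={mse:.4f}")
```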

Manage Temporal Lead-Time

In time-aware feature engineering, it is essential to carefully consider the “Prediction Execution Time” (when you perform the prediction task), the “Prediction Target Time” (the time point of interest), and data availability.

For example, let’s say you want to predict customer churn 30 days in advance. In this case, you would set the prediction execution time to 30 days prior to the prediction target time. Suppose your data table is created and updated by a batch process that runs once weekly. This means that at the prediction execution time, the most recent seven days of records may not be available in the table.

It is crucial to properly manage the relationship between these time points to avoid temporal leakage issues. Ensuring that your feature engineering and model training process only use data available at the prediction execution time will help prevent the model from learning patterns based on future information, leading to more accurate and generalizable predictions.
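A hedged sketch of enforcing that rule in code, assuming a hypothetical events table with user_id, event_id, and updated_at columns, the 30-day lead time from the churn example, and the 7-day batch lag:

```python
import pandas as pd

def feature_cutoff(prediction_target_time: pd.Timestamp,
                   lead_time_days: int = 30,
                   batch_lag_days: int = 7) -> pd.Timestamp:
    """Latest timestamp whose data is actually usable for this prediction."""
    prediction_execution_time = prediction_target_time - pd.Timedelta(days=lead_time_days)
    # The weekly batch means the table can lag the execution time by up to 7 days.
    return prediction_execution_time - pd.Timedelta(days=batch_lag_days)

def build_features(events: pd.DataFrame, prediction_target_time: pd.Timestamp) -> pd.DataFrame:
    """Aggregate only the rows that were available at prediction execution time."""
    cutoff = feature_cutoff(prediction_target_time)
    usable = events[events["updated_at"] <= cutoff]
    return usable.groupby("user_id").agg(num_events=("event_id", "count"))
```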

Detect Leaky Features

Developing a technique to uncover leaky features is essential for averting potential data leakage, particularly when handling a vast number of features. One basic method is to analyze the correlation between each feature and the target variable. If a feature demonstrates an unusually strong correlation with the target variable, it might point to data leakage.
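A minimal sketch of this screening step, assuming a pandas DataFrame of numeric features plus a binary target column (the 0.95 threshold is arbitrary and should be tuned to your data):

```python
import pandas as pd

def flag_suspicious_features(df: pd.DataFrame, target: str, threshold: float = 0.95) -> pd.Series:
    """Return features whose absolute correlation with the target is suspiciously high."""
    corr = df.drop(columns=[target]).corrwith(df[target]).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Any feature returned here deserves a manual review: is the correlation plausible,
# or is the feature effectively a disguised copy of the target?
```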

However, this approach is not as effective at identifying partial data leaks, which may be present in only a small fraction of the dataset’s samples. For example, suppose you run a mobile marketing campaign and the customer master data contains a column that records the duration of each customer’s last call. This information may have only a partial correlation with the target.

A call duration shorter than 30 seconds usually suggests that the customer has not accepted the offer, since accepting typically requires additional arrangements that lengthen the call. For longer calls, on the other hand, the connection between duration and the target is much less pronounced, so statistical methods that compare the entire distributions of duration and target (e.g., Pearson or Kendall correlation) might not capture this partial relationship.

This data might lead to leakage in the following manner: Imagine you trained the models on data up to February 1st, 2023, and you only have access to the most recent snapshot of the customer_table (user_id, last_call_duration) as of March 1st, 2023. In this situation, the duration of calls that took place in February 2023 would be leaked, causing an overestimation of accuracy.

One approach to detect leaks of this nature is to divide the samples based on feature values and inspect the target ratio within each segment. Displaying this data in a histogram can be beneficial:

In such a histogram, the x-axis denotes the values of last_call_duration, the bars represent the number of samples, and the line displays the target success ratio. By examining this visualization, you can discern that the success ratio nears zero for short calls across numerous samples. This observation prompts us to reassess how the data was gathered.
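A hedged sketch of producing that segment-level view with pandas, assuming a DataFrame with a last_call_duration column (in seconds) and a binary target column:

```python
import pandas as pd

def target_ratio_by_bin(df: pd.DataFrame, feature: str, target: str, bins: int = 20) -> pd.DataFrame:
    """Bin the samples by feature value and report sample count and target ratio per bin."""
    binned = pd.cut(df[feature], bins=bins)
    return df.groupby(binned, observed=True).agg(
        n_samples=(target, "size"),
        success_ratio=(target, "mean"),
    )

# Bins with many samples and a success ratio pinned near 0 or 1 (e.g., very short calls)
# are candidates for partial leakage and a reason to revisit how the data was collected.
```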

Conclusion

Data leakage is a critical issue in predictive analytics that can lead to erroneous decisions based on overestimated accuracy. To prevent it, you need to carefully design experiments and interpret the meaning of features, understanding the context and managing temporal lead time in your data.

dotData’s feature discovery engine automatically manages temporal lead time and explores features without leaking information while also detecting potential leaky features. By doing so, dotData helps ensure that your machine-learning models are built on accurate and reliable data, ultimately improving their performance and reducing the risk of making incorrect predictions.

Learn more about how your organization could benefit from the powerful features of dotData by signing up for a demo.

Yukitaka Kusumura

Yukitaka is the principal research engineer and a co-founder of dotData, where he leads the R&D of AI-powered feature engineering technology. He has over ten years of experience in research related to data science, including machine learning, natural language processing, and big data engineering. Prior to joining dotData, Yukitaka was a principal researcher at NEC Corporation. He led the invention of cutting-edge technologies related to automated feature engineering from various data sources and worked with clients as a data science practitioner. Yukitaka received his Ph.D. degree in Engineering from Osaka University.