
Concept Drift

by Stephen M. Walker II, Co-Founder / CEO

What is concept drift?

Concept drift, also known as drift, is a phenomenon in predictive analytics, data science, and machine learning where the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. As the data evolves, the patterns the model learned during training no longer hold, and its predictions become less accurate as time passes.

Concept drift can occur due to various reasons. For instance, it can be caused by changes in customer preferences due to external events like a pandemic, or by launching a product in a new market. It can also occur due to problems with data integrity, such as incorrect data collection or issues with the data source. Other causes include evolving user preferences, changes in external factors like economic shifts or regulatory changes, seasonal or temporal variations, changes in data collection methods or measurement techniques, and shifts in data sources.

Concept drift can manifest in different ways:

  1. Sudden Drift — This happens when the underlying concept changes abruptly. For example, the worldwide lockdowns due to the COVID-19 pandemic abruptly changed population behavior.

  2. Incremental/Gradual Drift — The transition between concepts occurs over time, as the new concept emerges and develops.

  3. Recurring Drift — This is a cyclical or seasonal change that repeats over time. For example, shopping behavior changes seasonally, with higher sales in the winter holiday season than during the summer.

Detecting and addressing concept drift is crucial for maintaining the accuracy and effectiveness of predictive models. Strategies for managing concept drift include periodic retraining or refreshing of the model, monitoring the model against a curated data set, and using unsupervised learning techniques for concept drift detection.
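
As a minimal sketch of the second strategy, monitoring the model against a curated data set, the snippet below scores a fitted scikit-learn-style classifier on a holdout set and flags possible drift. The threshold value and function name are illustrative assumptions, not prescriptions from this article:

```python
from sklearn.metrics import accuracy_score

# Illustrative alert threshold -- in practice tuned to the application's
# error tolerance, not a universal value.
ACCURACY_FLOOR = 0.85

def check_model_health(model, X_holdout, y_holdout, floor=ACCURACY_FLOOR):
    """Score a fitted classifier against a curated holdout set and flag
    possible concept drift when accuracy falls below the floor."""
    accuracy = accuracy_score(y_holdout, model.predict(X_holdout))
    return accuracy, accuracy < floor
```

In practice a check like this would run on a schedule, with an alert or retraining job triggered whenever the drift flag is raised.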

What are some techniques to detect and address concept drift?

As defined above, concept drift occurs when the statistical properties of the target variable change over time in unforeseen ways, causing model performance to degrade. Detecting and addressing concept drift is crucial for maintaining the accuracy of machine learning models, especially those deployed in dynamic environments.

Techniques to Detect Concept Drift

  1. Monitoring Model Metrics — Model quality metrics such as accuracy, precision, recall, or F1-score for classification problems can be tracked. A significant drop in these metrics over time can indicate the presence of concept drift.

  2. Monitoring Changes in Correlations — Changes in the correlations between model features and predictions, as well as pairwise feature correlations, can be monitored. Significant changes in these correlations might signal a pattern shift.

  3. Drift Monitoring and Detection — Dedicated drift detection algorithms, such as the Drift Detection Method (DDM) or Adaptive Windowing (ADWIN), track a model's error rate on incoming data and raise an alert when it rises beyond statistically expected bounds.

  4. Unsupervised Drift Detection — Unsupervised drift detectors find changes without any label information. These methods apply statistical hypothesis tests to two groups of data: one drawn from the past in a reference window, and another holding the most recent data in a sliding window (see the sketch after this list).

  5. Entropy of Classifier's Predictions — Concept drift can be detected by performing a Kolmogorov-Smirnov test on the entropy of a classifier's predictions.
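
As a concrete illustration of techniques 4 and 5, here is a minimal sketch of a windowed two-sample Kolmogorov-Smirnov test using scipy. The window sizes, significance level, and synthetic data are illustrative assumptions; the same test can be applied to a raw feature, a model score, or the entropy of a classifier's predictions:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift_ks(reference, recent, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test between a reference window of
    past data and a sliding window of recent data. A small p-value means
    the two windows are unlikely to come from the same distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha, statistic, p_value

# Example: simulate a shift in a single feature's distribution.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)   # past data
recent = rng.normal(loc=0.7, scale=1.0, size=1000)      # drifted data

drifted, stat, p = detect_drift_ks(reference, recent)
print(f"drift detected: {drifted} (KS={stat:.3f}, p={p:.4f})")
```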

Techniques to Address Concept Drift

  1. Regular Retraining — Regularly retraining a model on recent data keeps its learned relationships aligned with the current environment. The retraining frequency should be based on how stable the environment in which the model is deployed is.

  2. Batch or Offline Learning — If concept drift has occurred and the old dataset no longer reflects the new environment, it is better to retrain the model from scratch on a replacement dataset. This full retraining on a fixed dataset is called batch or offline learning.

  3. Online Learning — If there is a constant stream of new training data, you can leverage online learning, which incrementally updates the model on each new window of data as it arrives (a sketch follows this list).

  4. Dropping Features — If certain features have become unstable or uninformative, you can remove them and validate the change with A/B testing.

  5. Addressing Missing Values and Outliers — Careful handling of missing values, outliers, and label encoding improves data quality and prevents apparent drift that stems from data problems rather than genuine change in the environment.

  6. Unsupervised Unlearning with Autoencoders — An alternative approach to address concept drift is to "unlearn" the concept drift without having to retrain or adapt any of the models. This approach is based on autoencoders.
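
The following is a minimal sketch of the online-learning approach from item 3, using scikit-learn's partial_fit interface. The model choice, batch size, and synthetic stream are illustrative assumptions; in practice the batches would come from your production data pipeline:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental (online) learner; hyperparameters are illustrative.
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # all classes must be declared on the first call

def update_on_batch(X_batch, y_batch):
    """Fold a new window of labeled data into the model without full retraining."""
    model.partial_fit(X_batch, y_batch, classes=classes)

# Example: a stream of mini-batches standing in for newly arriving data.
rng = np.random.default_rng(1)
for _ in range(10):
    X_batch = rng.normal(size=(32, 5))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    update_on_batch(X_batch, y_batch)
```

Because each update only sees the newest window, the model gradually forgets stale concepts, which is exactly the behavior you want under gradual drift.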

The choice of detection and mitigation techniques depends on the specific characteristics of your data and the nature of the concept drift. It is also important to note that retraining isn't always the solution, especially when the drift is extreme. In such cases, other approaches like improved feature engineering might be more effective.

When does concept drift happen?

Concept drift, a common occurrence in predictive analytics, data science, and machine learning, happens when the statistical properties of the target variable unpredictably change over time, leading to a decrease in model accuracy. This phenomenon is particularly prevalent in human activity-related processes, such as socioeconomic and biological processes. For example, in merchandise sales, concept drift may occur due to seasonality or sudden global events like the COVID-19 pandemic, which drastically altered population behavior.

The types of concept drift are categorized by the nature of the change: sudden drift refers to abrupt changes in the underlying concept, incremental or gradual drift refers to a slow transition between concepts over time, and recurring drift refers to cyclical or seasonal changes that repeat.

Concept drift is a significant consideration in machine learning and data mining, especially in dynamic real-world environments where the relationships between input and output data can shift. For instance, as customer behavior evolves over time, a model trained on historic customer datasets may lose accuracy.

Detecting concept drift is a complex task often intertwined with other machine learning challenges. The best detection method depends on the specific application's error tolerance. Once detected, concept drift can be addressed through online learning, which updates the model in real-time, or through periodic retraining of the model.

What causes concept drift?

Concept drift, a significant consideration in machine learning and predictive analytics, occurs when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This can lead to a decrease in the model's predictive accuracy over time.

Several factors can cause concept drift:

  1. Changes in Data Distribution — External events can shift the distribution of incoming data away from the distribution the model was trained on.

  2. Shift in Input Data — Changes in customer preferences due to events like a pandemic or launching a product in a new market can cause a shift in input data.

  3. Data Integrity Issues — Problems with data integrity, such as incorrect data collection or issues with the data source, can lead to concept drift.

  4. Evolving User Preferences — Changes in user preferences and behaviors can lead to concept drift, as the relationships between features and user preferences may change over time.

  5. Changes in External Factors — External factors and events, such as economic shifts or regulatory changes, can alter the relationships within the data and cause concept drift.

  6. Seasonal or Temporal Variations — Seasonal trends and recurring patterns can cause concept drift as the relationships between features change over time.

  7. Drift in Data Generation Process — Changes in data collection methods or measurement techniques can introduce differences in data, leading to concept drift.

  8. Shifts in Data Sources — Concept drift can occur when new data sources introduce different perspectives or biases, impacting the relationships between features and the target variable.

  9. Changing Real World Conditions — Changes in real-world conditions, known or unknown, can cause concept drift.

Concept drift can occur suddenly due to unforeseen circumstances or gradually over time. It's crucial to monitor and address concept drift to ensure the long-term accuracy of machine learning models.
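
To make this concrete, here is a small self-contained simulation (all data synthetic and purely illustrative): a classifier is trained on one concept, then evaluated before and after a sudden drift that reverses the relationship between features and target. Accuracy collapses after the drift even though the input features themselves look unchanged:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_data(n, flipped=False):
    """Synthetic binary task; flipped=True reverses the concept (sudden drift)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, (1 - y) if flipped else y

X_train, y_train = make_data(2000)
model = LogisticRegression().fit(X_train, y_train)

X_before, y_before = make_data(500)              # same concept as training
X_after, y_after = make_data(500, flipped=True)  # concept has drifted

print("accuracy before drift:", accuracy_score(y_before, model.predict(X_before)))
print("accuracy after drift: ", accuracy_score(y_after, model.predict(X_after)))
```

This is the failure mode that monitoring and retraining strategies are designed to catch: the input distribution is unchanged, so only performance metrics or label-aware drift detectors reveal the problem.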

