Many data science leaders emphasize the importance of building business criteria into the design of a machine learning pipeline. They correctly say that there should be a continuous feedback loop between a machine learning model and the business value it drives. But how exactly is this accomplished in practice? How do data scientists incorporate business knowledge into their technical processes? In this article, I will walk through the key things to keep in mind when designing a data science pipeline, and I will demonstrate the process with a case study of user behavior analysis and business process optimization.

The Data Science Pipeline

In a typical data science workflow, there are many stages: ingestion and cleansing of data, feature engineering, feature selection, model selection, model validation and error analysis, and finally a feedback loop back to the beginning. 
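To make these stages concrete, here is a minimal sketch in Python using scikit-learn. The dataset name, column names, and model choice are illustrative assumptions (and all features are assumed numeric), not a recommendation for any particular application.

```python
# A minimal sketch of the pipeline stages described above.
# The file, columns, and estimator are hypothetical; they only illustrate
# how the stages chain together.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("user_activity.csv")                     # ingestion
X, y = df.drop(columns=["target"]), df["target"]

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),          # cleansing
    ("scale", StandardScaler()),                           # basic feature engineering
    ("select", SelectKBest(f_classif, k=10)),              # feature selection
    ("model", LogisticRegression(max_iter=1000)),          # model selection
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
pipeline.fit(X_train, y_train)

# Model validation and error analysis; findings here feed back to the start.
print(classification_report(y_test, pipeline.predict(X_test)))
```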

The following enterprise data science scenario is common: a data scientist is tasked with determining user engagement across a suite of cloud applications. The hope is to understand how user interactions chain together into business processes. Engineers can then optimize these processes to save time and money for our customers.

Perform Feature Engineering with Domain Knowledge

We begin the user behavior analysis with feature engineering. In an organization, this should start with a team meeting of engineers, PMs, and business leaders to gather domain knowledge. Ask each of them to come up with a few indicators of your chosen success metric.

The purpose of this brainstorming session is three-fold: generating new, interesting features will increase model accuracy, improve interpretability, and reduce human bias. If a data scientist works in isolation, the features she produces will inevitably carry her own assumptions into the model. One way to mitigate this bias is, of course, to use ensemble methods later in the pipeline; this can be thought of as diversity at a technical level. Another solution is to introduce diversity at the human level, by simply asking others to contribute to the feature set.

In our process optimization example, let’s imagine there is a particular cloud application for supply chain management. Let’s assume we want to predict the number of late shipments for the next month, and that we have access to the company’s process data and user behavior data. We will take the advice of a supply chain domain expert who has worked on this application for many years. She explains that shipment orders are tied to certain locations in the country and to certain high-ranking administrators in a company. Based on this knowledge, a data scientist can create interaction features by multiplying existing features together. In this case, the number of shipment orders multiplied by the number of administrators forms one new feature; the number of shipment orders multiplied by the number of transport vehicles available per week could form another.
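As a rough sketch of what this looks like in code, the pandas snippet below builds those two interaction features. The column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical weekly supply chain data; the columns and values are
# assumptions for illustration only.
df = pd.DataFrame({
    "shipment_orders":    [120, 95, 140, 110],
    "admin_count":        [3, 2, 4, 3],
    "vehicles_available": [18, 15, 20, 16],
})

# Interaction features suggested by the domain expert's knowledge:
# multiply existing features together to capture their combined effect.
df["orders_x_admins"]   = df["shipment_orders"] * df["admin_count"]
df["orders_x_vehicles"] = df["shipment_orders"] * df["vehicles_available"]

print(df.head())
```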

Perform Model Validation Against Edge Cases

Next, assuming we have a model trained on a set of features, I move on to model validation. It is important to understand the business context of the test dataset used to validate the model. How many users does it reflect? What are the characteristics of these users? Does my model’s accuracy change when I swap in a different test set?

Specifically, I will evaluate the model against business edge cases. Does the accuracy go up or down when running the model on a dataset containing only new users? If it goes down, I have to rework my features to better capture new user behavior. Similarly, does the accuracy go up or down when running the model on a dataset containing only lifetime users? A data scientist should use such edge cases to create a more generalizable model.
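A minimal sketch of this kind of segment-by-segment validation might look like the following. It assumes a trained `model`, a `test_df` with a `label` column, and a list of `feature_cols` from the earlier steps; the `tenure_days` column and the cutoffs are hypothetical ways of defining "new" and "lifetime" users.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Assumes `model`, `test_df`, and `feature_cols` come from earlier in the
# pipeline; the tenure cutoffs below are illustrative business definitions.
def accuracy_on(segment: pd.DataFrame) -> float:
    preds = model.predict(segment[feature_cols])
    return accuracy_score(segment["label"], preds)

new_users      = test_df[test_df["tenure_days"] < 30]        # business edge case 1
lifetime_users = test_df[test_df["tenure_days"] >= 3 * 365]  # business edge case 2

print(f"overall accuracy:       {accuracy_on(test_df):.3f}")
print(f"new-user accuracy:      {accuracy_on(new_users):.3f}")
print(f"lifetime-user accuracy: {accuracy_on(lifetime_users):.3f}")
```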

Collaborate with Domain Experts on Error Analysis

Finally, I turn back to the domain experts for error analysis and the feedback loop. Many experts have a lifetime of knowledge about their product. They need only glance at a red flag to determine exactly the cause of an error and how to proceed. Our job as data scientists is simply to show them the box of flags. When we present predictions to domain experts, it is important that those predictions are in a format familiar enough for them to absorb.

For example, when a classifier predicts an incorrect label, the domain expert should be able to step in easily and correct the mistake. She can also provide notes as to why this particular data point should be classified as such. These notes will be invaluable when we go back and create better features. As a process and technical change, there should be a dashboard that domain experts can log into to see the current predictions. There they can freely relabel data points and give explanations. This dashboard falls under the technical category of model monitoring or model management.
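One lightweight way to close this loop, sketched below, is to merge the experts' dashboard corrections back onto the model's predictions so that overruled records become candidates for retraining. The file names and columns are assumptions about what such a dashboard might export, not a specific product's API.

```python
import pandas as pd

# Hypothetical exports: the model's current predictions and the corrections
# and notes that domain experts entered through the monitoring dashboard.
predictions = pd.read_csv("model_predictions.csv")   # record_id, predicted_label
corrections = pd.read_csv("expert_corrections.csv")  # record_id, corrected_label, note

review = predictions.merge(corrections, on="record_id", how="left")

# Prefer the expert's label where one exists; otherwise keep the prediction.
review["final_label"] = review["corrected_label"].fillna(review["predicted_label"])

# Records the expert overruled, plus her notes, feed the next round of
# feature engineering and retraining.
relabeled = review[review["corrected_label"].notna()]
print(relabeled[["record_id", "predicted_label", "corrected_label", "note"]])
```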

In the supply chain application example, a machine learning model may incorrectly predict that a certain shipment arrived late at a facility. However, a domain expert might explain that this delay is part of a specific business process and corresponds to further package handling and security checks. The expert notes that this stage of processing occurs only in certain business units and is carried out only by employees with a certain level of access control.

This knowledge is helpful both for improving the machine learning model and for optimizing the process for end users of the cloud application. First, to improve the model, a data scientist can create new features to account for this special case. Perhaps she will segment the population by location and business unit to identify these special cases, as in the sketch below. Next, data scientists can work with engineering leaders and product managers to bring the online user experience as close to these custom business processes as possible. The technical experimentation and machine learning (think A/B testing or multi-armed bandits) involved in validating these optimized processes is a subject for another article.
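For instance, a segmentation-based feature for this special case might be a simple flag on shipments routed through the extra-handling path; the unit names, location values, and records below are hypothetical.

```python
import pandas as pd

# Hypothetical shipment records; the values are assumptions for illustration.
df = pd.DataFrame({
    "shipment_id":   [101, 102, 103],
    "location":      ["port_facility", "regional_hub", "port_facility"],
    "business_unit": ["customs_east", "standard_ops", "standard_ops"],
})

# Units that, per the domain expert, perform extra security checks and handling.
SECURITY_CHECK_UNITS = {"customs_east", "hazmat_west"}

# New feature: flag shipments in the extra-handling segment so the model can
# learn that their longer dwell time is expected rather than "late".
df["extra_handling"] = (
    df["business_unit"].isin(SECURITY_CHECK_UNITS)
    & (df["location"] == "port_facility")
).astype(int)

print(df)
```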

Conclusion

A successful data strategy requires more than just data scientists. There must be open collaboration with engineers, product managers, UX researchers, and business leaders. Product managers ask important questions about user engagement and feature traction, so we offer them a quantitative exploration of user behavior. To discover why engagement increases or decreases, we turn to UX researchers, who provide a holistic view into the interests of our customers. Finally, we must turn a humble ear to the business leaders and engineers, whose collective domain knowledge can tell us exactly what is happening in an instant, provided they are presented with information in a format they are used to. Where we as data scientists step in is simply closing that gap.

Vikram Reddy
Author

Vikram Reddy is a Senior Data Scientist for Oracle. He holds a master’s degree from UC Berkeley, where he specialized in AI and Recommendation Systems, and a bachelor’s in Applied Math from UCLA, focusing on Numerical Computation and Statistics. Prior to Oracle, he worked as a software developer in IBM’s Cloud Analytics division and as a data analyst at a startup.