A lot has been said about how to use data science to leverage and model data in order to generate significant improvements in profit. In fact, we’ve done this for over a dozen business lines at System1. Nonetheless, whenever we begin to apply data science and optimization to a new business area, there is a recurring pattern of challenges that we’ve learned to anticipate and have thankfully overcome. This writeup describes how we communicate our data science process to business stakeholders when tackling a new optimization opportunity.

Getting People Interested

Fortunately, at System1, people have taken notice of our years of significant, measurable improvements to the bottom line across the business. These days the conversation usually starts with something like this: “Hey, we have these great new ideas to optimize our business, and we know that by applying the data science team’s smarts we can really make this happen. Profits will surely shoot through the roof!” In this case, the challenge is helping the business owner define their ideas and decide how to measure the improvements. Sometimes this also involves tempering their expectations (more on that later).

However, when our team was new to the business and before we had any big wins under our belt, we had to prove ourselves. This continues to be one of the biggest challenges for data science teams in many organizations. In fact, one of the data science team leader’s most important jobs is to evangelize the data science philosophy throughout the business. We’ve found that the key here is to educate stakeholders, be transparent about our planned approach, over-communicate, make sure we can measure the improvement, and start with a gradual plan to phase in optimization as confidence grows. We’ve found time and time again that confidence is hard to gain and easy to lose. So we always start out carefully, but with a well-defined plan to keep the momentum growing.

Defining and Measuring Business Goals

One of the biggest gotchas we’ve learned to watch out for is making sure we are set up for success by clearly defining the business goals and understanding how we will measure progress toward them. You need to be able to both define and measure the metrics of success; one without the other is useless. If the goal is to improve, say, revenue per session (RPS), then we need to be able to measure both revenue and sessions. We also need to plan for measuring the confidence interval around the target metric, and to decide up front what kind of improvement either opens the gates to the next phase or counts as a success.
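To make that concrete, here is a minimal sketch of the kind of measurement we mean: estimating RPS along with a bootstrapped confidence interval. This is illustrative only, not our production code, and the data are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

def rps_with_ci(revenue_per_session, n_boot=2_000, alpha=0.05):
    """Estimate mean revenue per session (RPS) with a bootstrap confidence interval."""
    n = len(revenue_per_session)
    # Resample sessions with replacement and compute the mean of each resample
    samples = rng.choice(revenue_per_session, size=(n_boot, n), replace=True)
    boot_means = samples.mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return revenue_per_session.mean(), (lo, hi)

# Hypothetical per-session revenue observations, in dollars
revenue = rng.exponential(scale=0.40, size=2_000)

rps, (lo, hi) = rps_with_ci(revenue)
print(f"RPS = ${rps:.3f}, 95% CI = (${lo:.3f}, ${hi:.3f})")
```

Even something this simple anchors the conversation: the width of the interval tells everyone how big an improvement we can actually hope to detect.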

Another key tool we’ve used in measuring success and obtaining business buy-in is establishing at least one kind of A/B test that compares our new optimization to either a previously manual or “human” model or some kind of randomly established baseline. This helps us answer two very important kinds of questions. The first is, “How much is your optimization improving the bottom line?” You can’t answer this question with, “Trust me, the math works.” You have to give an easily interpretable comparison to what the business would have done without the optimization. The other question is, “Hey, profit went down yesterday; was that due to your optimization failing?” A great way to start explaining this is to check whether profit went down for the manual or baseline model as well. If it did, then it might just be a market shift that impacted all models. Defining these baseline metrics goes a long way toward building trust.
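As an illustration, the comparison can start as simply as the sketch below, which assumes a 50/50 traffic split with hypothetical daily RPS observations for the manual baseline (“control”) and the optimizer (“treatment”); the numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical daily RPS observations from a 50/50 traffic split:
# "control" runs the existing manual rules, "treatment" runs the optimizer.
control = rng.normal(loc=0.40, scale=0.05, size=60)
treatment = rng.normal(loc=0.43, scale=0.05, size=60)

lift = treatment.mean() / control.mean() - 1.0
# Welch's t-test: does the treatment mean differ from the control mean?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Relative lift: {lift:+.1%}, p-value: {p_value:.4f}")
```

The same split also helps answer the “profit went down yesterday” question: if control dropped by a similar amount, the cause is likely a market shift rather than the model.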

Overcoming Negative Perceptions

There are a couple of types of negative perceptions that we’ve had to deal with. The first is that a data science approach will take away somebody’s job. The optimization opportunity might be to automate bulk decision making, to automate a set of rules, or to replace a set of manual rules with modeled forecasting decisions. It’s natural for someone who is currently in the position of controlling all these levers to feel that they are being replaced. It’s important to stress that any automation and optimization will only further empower these people to expand their businesses. It’s also important to closely involve them in the process of learning their rules and measuring and comparing results. It turns out that we’ve never replaced anyone’s job and, in our case, once account managers are relieved of their daily repetitive work, they find that they can focus on more interesting and creative ways to grow their business.

The other perception of data science we need to navigate is that it’s a “black box” of sorts. The term “black box” is a bad word on our team, but one that will pop up now and then. One reason people might view data science this way is that it comes across as math-y, or that it’s such a new field that most people don’t know how to think about it. To address this perception, we sit down with business stakeholders, help them understand exactly how the system works, and openly walk through the math and optimization problems we are dealing with. Usually all it takes to get people on board is decloaking the process and showing that there’s nothing magical about it. Sharing the struggles, and the sometimes very different expectations placed on the shoulders of the data science team, also reinforces the fact that we are all running toward the same goal together.

Setting Realistic Expectations

Speaking of data science not being magic, this highlights one of the key understandings in setting realistic expectations. Data science is not magic, and it almost always involves a lot of hard work over a significant amount of time to achieve a satisfying win. Some business owners will view data science as a special sauce that you can easily spread on top of anything to automagically boost its effectiveness or yield. While that occasionally happens, we usually need to go through well-defined steps: understanding the business problem; finding and cleaning the data; and researching and experimenting with the data to see if there is even a lever we can reliably pull (this stage can take an unpredictable amount of time, as it is a true exploration into the unknown). Then, and only then, if there is a modelable effect to optimize, do we productionize it into a robust and safe system.

If you can get over all these hurdles, you’ll find that by the end of a new optimization product cycle, people will be clamoring to start the next one.

Author
Nathan Janos

Nathan Janos is Chief Data Officer at System1. Previously, at Business.com he built SEM optimizations for $100M+ of search engine spend (Business.com sold for $350M). At MarketShare he worked with Fortune 50 companies creating highly customized econometric models. As Convertro's Chief Data Officer he developed their statistical model solutions for cross-channel/cross-device attribution and patented methods in TV attribution (Convertro sold for $100M to AOL). He has a B.S. in C.S. and Engineering from MIT with an emphasis on A.I. and spent three years at the MIT Media Lab. He enjoys sailing around the Channel Islands, fly fishing and swimming.