One of the most challenging aspects of leading a data science team is gaining and maintaining stakeholder buy-in on a project. Developing the skills and process to do this well is critical to your success in this role.

In this article, I will go over three ways to maximize the chances of turning your stakeholders into happy customers and project champions.

1. Bring out your inner salesperson.

At its core, getting stakeholder buy-in on an analytical product is a sales job. As a data science leader, you are selling a stakeholder on an analytical solution to their business problem. Ultimately, you need to get them to trust that your solution is the right one and that you are the person to deliver it. Doing this effectively means cultivating the skills and qualities that make a good salesperson successful, chief among them confidence and charisma.

Salespeople have an energy that simultaneously makes people want to work with them and inspires potential customers to partake in their vision of a better world (by buying their product). However, not everyone is an extrovert, and many struggle to do this effectively. The good news is that this is a muscle that can be developed. Think about how your favorite game show host projects their voice and manages the crowd and contestants. It is unlikely that they act the same way around the dinner table. They have cultivated a persona for the show. You too should develop your “game show host” persona. One excellent, practical way to do this is to take public speaking, improvisation, or even acting classes.

Excellent salespeople also have great empathy. They need to intimately understand what their customers want. Doing this successfully requires the ability to think like your customer. You should treat your stakeholders in the same way. Know their business and process inside and out and, by extension, their problems as well. Many data scientists struggle with this because they are more excited by the solution than the problem. However, in the end, your stakeholder does not care about the latest version of XGBoost—they care about a more productive sales pipeline or higher click-through rate on their targeted emails. Quite often, getting better at this is less about developing a new skill and more about a shift in attitude and perspective. A great way to do this is to spend as much time as possible with the stakeholder discussing their job (or shadowing them, if they allow it).

2. Set clear project goals with the stakeholder—then challenge them.

Any project that starts without clear goals will ultimately get derailed. This is particularly important for data science projects because of their inherent complexity. Align on what problems you are trying to solve up front and map these goals to tangible metrics as soon as possible. These metrics then become the yardstick against which progress is measured. Once you settle on the metrics, it is critical to then align on what “good enough” looks like.

Often in a data science project, there is a level of accuracy beyond which the business value of incremental improvements is not worth the resources required to drive the improvement. For this reason, it is imperative to agree at the start on what constitutes “good enough,” particularly for a minimum viable product. In the process of creating this alignment, require the stakeholder to provide tangible evidence of the link between model performance metrics and business outcomes—for example, evidence of how incremental lift on targeted emails actually converts to sales and the bottom line. This is important because stakeholders will often use “gut feeling” to decide what good enough is, and this can lead to chasing results that don’t make a meaningful difference to the business.
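To illustrate, this is the kind of back-of-the-envelope calculation you can ask the stakeholder to walk through with you. All figures here are hypothetical, not real benchmarks:

```python
# Hypothetical check: does a given lift in email conversion rate justify
# further model tuning? All numbers below are illustrative assumptions.

def incremental_revenue(emails_sent, lift, revenue_per_sale):
    """Extra revenue from raising the conversion rate by `lift` (absolute)."""
    extra_conversions = emails_sent * lift
    return extra_conversions * revenue_per_sale

# Going from a 2.0% to a 2.5% conversion rate on 1M emails at $50/sale:
gain = incremental_revenue(1_000_000, 0.005, 50)
print(gain)  # 250000.0 -- clearly worth pursuing

# A further 0.1% improvement that would cost a quarter of tuning effort:
marginal = incremental_revenue(1_000_000, 0.001, 50)
print(marginal)  # 50000.0 -- weigh this against the cost of achieving it
```

Making the stakeholder put numbers on each step of this chain is what separates an evidence-based “good enough” from a gut-feeling one.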

It is also critical to set realistic expectations on what is possible and when it can be delivered. Non-data scientists can sometimes have unrealistic expectations of what the technology can deliver and often fall victim to the hype cycle. Be clear that a data science project carries greater risk than a typical engineering project due to the experimental nature of the work.

3. Make the MVP solution feel familiar and trusted.

To understand what we mean by making the solution feel familiar, just think about basic smartphone icons. Icons for the Calculator, Notes, and Camera are all designed so that the user already knows how to use these applications before they open them. This same kind of design thinking should go into your MVP data science projects. In the data product context, this often means incorporating elements of existing reports and PowerPoint presentations into your user interface.

My team did this effectively when we recently deployed predictive sales forecasting to replace an existing manual process run by our sales strategy teams to forecast quarterly sales revenue (this solution ultimately evolved into Einstein Forecasting). When we created a dashboard to communicate the machine learning-driven forecasts to sales leaders, we used the same graphs that the sales strategy team used in a slide presentation to show the manual forecasts. This meant that our sales leaders immediately understood relevant aspects of the forecast (e.g. the variance and historical revenue) with no explanation required. This built-in familiarity meant that they were quick to start adopting our solution.

Using familiar business terminology also makes the experience more impactful. Invite the leader who owns the current process to join project standups and reviews, and be sure to report the accuracy in terms that make sense to the stakeholder. For example, lift in targeted email conversion will be much more interpretable for a marketer than the Area Under Curve (AUC) score of a classification model.
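To make the contrast concrete, here is a minimal sketch of reporting lift (how much better the model-targeted sends convert than the historical baseline) rather than AUC. The data and function name are illustrative, not from any particular library:

```python
def conversion_lift(targeted_outcomes, baseline_rate):
    """Lift = conversion rate of model-selected leads / untargeted baseline rate.

    targeted_outcomes: 1/0 conversion flags for the leads the model picked.
    baseline_rate: historical conversion rate without targeting.
    """
    targeted_rate = sum(targeted_outcomes) / len(targeted_outcomes)
    return targeted_rate / baseline_rate

# Illustrative numbers: the model's top picks converted 6 times in 40 sends,
# against a 5% historical baseline.
outcomes = [1] * 6 + [0] * 34
lift = conversion_lift(outcomes, 0.05)
print(f"{lift:.1f}x lift")  # 3.0x lift
```

“Our targeted emails convert 3x as often” lands with a marketer in a way that “the model’s AUC is 0.78” never will.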

Furthermore, in order for the stakeholder to trust in the data science solution, you should always report the underlying factors that are driving model outputs. If you are building a solution to predict housing prices, make sure the factors that impact the forecast for each house, such as square footage and age, are reported along with the forecast. I have found that when building internal data products for stakeholders within your organization, displaying clear metrics of the model’s accuracy right on the report/dashboard/UX is another good way to earn the stakeholder’s trust in your solution.

Finally, design model outputs to hide details that can be misinterpreted or used in unintended ways (e.g. a bucket of “Leads Most Likely to Convert” as opposed to “Lead X has an 85% conversion probability”).
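A minimal sketch of this kind of bucketing follows; the tier labels and probability thresholds are assumptions you would tune to your own model’s calibration:

```python
def bucket_leads(scored_leads, high=0.70, medium=0.40):
    """Group (lead, probability) pairs into named tiers instead of exposing
    raw probabilities. Thresholds here are illustrative, not calibrated."""
    tiers = {
        "Most Likely to Convert": [],
        "Worth a Follow-Up": [],
        "Low Priority": [],
    }
    for lead, probability in scored_leads:
        if probability >= high:
            tiers["Most Likely to Convert"].append(lead)
        elif probability >= medium:
            tiers["Worth a Follow-Up"].append(lead)
        else:
            tiers["Low Priority"].append(lead)
    return tiers

scored = [("Lead X", 0.85), ("Lead Y", 0.55), ("Lead Z", 0.10)]
print(bucket_leads(scored))
# {'Most Likely to Convert': ['Lead X'],
#  'Worth a Follow-Up': ['Lead Y'],
#  'Low Priority': ['Lead Z']}
```

The sales rep sees an actionable queue rather than a number like 85% that invites over-precise (and often wrong) interpretation.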

Follow these tips and your stakeholders will soon be evangelizing your solution better than you ever could! Ultimately the business outcomes will be better and you will drive more value for your organization.

Robin Glinton

Robin Glinton is VP of Data Science Applications at Salesforce.com. He leads a team dedicated to understanding the adoption of Salesforce products as well as applied research in machine learning-driven CRM offerings. Robin has held a number of positions across startups, industry, and academia.