A Primer on Predictive Analytics: Tips for Road Testing Your Model

For many marketers, predictive analytics is a bit of a black box, especially if Statistics wasn’t exactly your favorite course in school. Lately, we’re being inundated with vendors offering predictive lead scoring, look-alike modeling, and predictive content, just to name a few categories. We assume that, like a vehicle, if we put in gas and step on the pedal, it goes. For all its wild promises, predictive analytics is a major investment, so, without becoming a data scientist, how do you know what’s going on under the hood?

Fuel Your Model with Good Data

It should come as no surprise to the modern marketer that it’s all about the data. Every predictive modeling technique requires input data, or “variables,” of sufficient quality and quantity to output accurate predictors. When reviewing your in-house data, most vendors will be able to provide minimum sufficiency requirements, and they’ll look at two criteria: the number of records (such as Leads, Contacts, and Opportunities) and the number of target events (what your model is intended to predict, such as conversion to MQL or pipeline, or a won new customer).

It’s also really important to include records where the target event did not occur: for example, when a lead is disqualified rather than converting to MQL. Doing so is absolutely key when it comes to testing the predictive model, which we’ll come back to later.
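
If you want to gauge this yourself before talking to vendors, a quick tally of your CRM export is enough. The sketch below is a minimal example in Python with pandas; the file name, the “converted_to_mql” column, and the thresholds are all hypothetical placeholders, not any vendor’s actual requirements.

import pandas as pd

# Hypothetical CRM export: one row per lead, with a "converted_to_mql" column
# marking whether the target event occurred (1) or not (0).
leads = pd.read_csv("crm_export.csv")

total_records = len(leads)
target_events = int(leads["converted_to_mql"].sum())
non_events = total_records - target_events

print(f"Records: {total_records}")
print(f"Target events (converted to MQL): {target_events}")
print(f"Non-events (disqualified or never converted): {non_events}")

# Illustrative thresholds only; ask your vendor for their real minimums.
MIN_RECORDS = 10_000
MIN_TARGET_EVENTS = 500
if total_records < MIN_RECORDS or target_events < MIN_TARGET_EVENTS:
    print("Warning: dataset may be too thin to build a reliable model.")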

You may not want to (or, let’s face it, be able to) rely solely on your own data to build the model. For example, if you believe that company size is an important predictor of likelihood to achieve the target event, you need to ensure that your records have that piece of information, that it’s correct, and that it’s structured in a way the model can consume; if that field is missing or unreliable, a third-party source may need to fill the gap.

Working with a predictive vendor can open a whole new universe of data for your model. Nearly all of these vendors source the typical demographic and firmographic data from providers like Dun & Bradstreet, and many have also partnered with advertising and content networks to provide engagement and interest data. Ask your vendor for their list of data sources as well as the variables they use: this information will help you understand the predictors returned by the model and how you might apply them.

Once your dataset is compiled, part of it is peeled off to build (train) the model. The remainder is held back to test the model once it’s derived, since for those records you already know whether the desired outcome was achieved.
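
Here is a minimal sketch of that holdout split using scikit-learn, reusing the same hypothetical export and column names as above:

import pandas as pd
from sklearn.model_selection import train_test_split

leads = pd.read_csv("crm_export.csv")  # hypothetical CRM export, as above
X = leads.drop(columns=["converted_to_mql"])  # input variables
y = leads["converted_to_mql"]                 # known outcomes (target event)

# Hold back 25% of records for testing; stratify so both sets keep the same
# proportion of target events to non-events.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)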

Test and Tune Up Your Predictive Model

Once you’ve fed the variables into your model and pointed it at the target event, regression analysis takes on the exercise of identifying correlation: which variables are most and least correlated with the occurrence of the target event. There are many time-tested regression techniques, and the model may have to be iterated to arrive at the best combination of variables.
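
You won’t run the regression yourself, but seeing one in miniature helps demystify what the vendor is doing. The sketch below uses logistic regression, one common time-tested technique, on a few illustrative variable names; a real model iterates over far more, and your vendor’s approach may differ.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

leads = pd.read_csv("crm_export.csv")  # hypothetical CRM export, as above
features = ["company_size", "web_visits_30d", "emails_opened"]  # illustrative variables
X_train, X_test, y_train, y_test = train_test_split(
    leads[features], leads["converted_to_mql"], test_size=0.25,
    stratify=leads["converted_to_mql"], random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Larger absolute coefficients indicate stronger correlation with the target event.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in sorted(zip(features, coefs), key=lambda pair: -abs(pair[1])):
    print(f"{name}: {coef:+.2f}")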

But you don’t need to become a data scientist to get through this part; testing the model will truly surface whether or not the vendor can deliver a quality model. Be brave: if your vendor doesn’t volunteer the test results, ask them to demonstrate how well the model actually predicts a positive or negative outcome. Look at the lift charts that measure the model’s effectiveness: do the curves look well defined, or are they flat? Review the predictors output by the model and look for outliers. Be careful of spurious correlations: for example, if the model states that there is a high correlation between the receipt of a particular email and conversion to an MQL, it may be because everyone in your database was sent that email. Remember, you know your data and your ideal customer profile better than the vendors do.
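
To see what a lift chart is measuring, here is a rough decile lift table, continuing from the training sketch above (it assumes model, X_test, and y_test from that code). Each decile’s conversion rate is compared with the overall rate; strong lift in the top deciles suggests a useful model, while a flat profile suggests it adds little.

import pandas as pd

# model, X_test, y_test come from the training sketch above.
scored = pd.DataFrame({
    "score": model.predict_proba(X_test)[:, 1],  # predicted probability of the target event
    "actual": y_test.values,                     # what actually happened
})
scored["decile"] = pd.qcut(scored["score"], 10, labels=False, duplicates="drop")

overall_rate = scored["actual"].mean()
lift_by_decile = scored.groupby("decile")["actual"].mean() / overall_rate
print(lift_by_decile.sort_index(ascending=False).round(2))  # highest-scoring decile first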

Take to the Streets: Operationalizing the Model

Once you feel confident in the model, it’s recommended that you pilot it against your live data. A soft launch will give you additional time for testing and validation, and it may take a while depending on how long it typically takes to achieve your target event. As you roll the program out, you’re probably going to get a lot of questions from your stakeholders about how the lead scoring works, what’s driving the look-alike recommendations, and so on, depending on how you are applying predictive analytics. With the fundamentals outlined in this article, you can give your stakeholders a better understanding of, and therefore more control over, the process.

Just like your vehicle, your model will need periodic check-ups. It should be rebuilt regularly (at minimum every six months), triggered either manually by the vendor or automatically through self-learning. Don’t be content to sit back and let the black box do its thing; the one constant in all models is that there will be change!

______

Gaea Connary serves as Manager of Agile Transformation at BDO Digital. She is continually fascinated by creativity and agility in developing marketing technology, and has helped many organizations revitalize their marketing efforts with hands-on guidance and innovative tech applications.
