There we were in a large, windowless conference room in Santa Clara, temporarily shielded from the ideal weather conditions outside. We were working for a Fortune 500 client, tasked with improving renewals and upsells. A colleague of mine was walking the marketing team through the high-level details of a predictive model he had built to help with their renewal problem. Our carefully constructed narrative was quickly thrown off course by the data analysts in the room, who asked about the nuances of missing values, overfitting, and statistical significance. Despite our best efforts, the meeting soon turned into a full-on geek-out session.

Many of the marketers were justifiably lost. The looks on their faces fell into two camps: bewilderment or disinterest. The email marketing manager on my right started doodling in her notepad. The director of marketing began desperately asking questions, trying to get reoriented in the discussion.

Have you ever been in a situation like this? Have analysts ever come in to explain your campaign performance and left you in a state of disinterest, bewilderment, or both? If so, this blog post is for you. I’ve spent the last 15 years of my career living at the intersection of data and marketing, and I hope to offer a few analytics hacks to help you navigate these situations.

These hacks will come in the form of mental models and are inspired by the life and work of Charlie Munger who said:

“You’ve got to have models in your head. And you’ve got to array your experience — both vicarious and direct — on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.”

I hope to offer you a few mental models for making sense of the sometimes complicated analytics that may come across your desk.

Let’s get started.

Imagine that your analyst comes to you with the results of your latest campaign. She says: “This campaign was successful because…”

1. Be Aware of the Narrative Fallacy

The first hack is simply to beware the narrative fallacy. I first read about the narrative fallacy in the works of Nassim Nicholas Taleb. As marketers, we’re first and foremost storytellers. And story is undoubtedly the best way to convey something memorable to another human. In fact, one of our customers recently told me, excitedly, “Percolate helps us tell our story.” That made me smile. So to be perfectly clear: I’m all good with story.

The narrative fallacy creeps in when you (or your analyst) have a pile of data and need to make sense of it. One example is campaign results. I’ve found myself in this position many times. You see a pattern in the results and think: “The reason that the data looks this way is because…” If you’ve ever thought that, you might have fallen victim to the narrative fallacy.

The error emerges when you apply a narrative retroactively to data to try to make sense of it. In reality, the narrative you choose is just one of many possible explanations for what happened. Don’t forget that you made that narrative up.

Keep the narrative fallacy in mind the next time an analyst, vendor or agency comes in to present results. Ask yourself: What data are they not showing me? What are some alternate narratives for the data they are showing me? Why should I be confident in their story? If you get in this habit, you’re already one step ahead of the game.

2. Make Sure You Have a Margin of Safety

Okay, so let’s imagine that you’ve pressure-tested the narrative and are generally satisfied that it provides a reasonable explanation of the results. The analyst continues: “This campaign was successful. We’ve seen a 2.8% increase in conversions when compared to our control group.”

This leads us to our second hack: applying a margin of safety. The idea originally comes from engineering. Engineers have learned that you always want to build a bridge to support far more weight than it will reasonably be expected to hold. This margin of safety makes your bridge (or your decision-making) more robust when the unexpected happens.

As marketers we’re dealing with growing complexity — and noise. A margin of safety helps me separate real trends from noise. We might have measured an increase in conversions, but some of that increase might be attributable to luck or to measurement error; we need a margin of safety that lets us be confident our findings really indicate something. I apply this concept to all sorts of decisions: Do these campaign results represent a true incremental benefit to the business? Should I invest in this new technology if it takes twice as long to implement as the vendor estimates? Are the trends my social media manager has identified real, or are they just natural fluctuations in the numbers?
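One way to make that concrete is a minimal Python sketch like the one below, using purely hypothetical counts and a hypothetical one-percentage-point margin: instead of acting on the raw lift, require the lower bound of a confidence interval around it to clear a margin you set in advance.

```python
import math

def clears_margin_of_safety(conv_test, n_test, conv_ctrl, n_ctrl,
                            margin=0.01, z=1.96):
    """Return the measured lift (in percentage points), the lower bound
    of its ~95% confidence interval, and whether that lower bound clears
    the preset margin of safety."""
    p_test, p_ctrl = conv_test / n_test, conv_ctrl / n_ctrl
    lift = p_test - p_ctrl
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p_test * (1 - p_test) / n_test
                   + p_ctrl * (1 - p_ctrl) / n_ctrl)
    lower_bound = lift - z * se
    return lift, lower_bound, lower_bound > margin

# Hypothetical campaign: 5.28% conversion vs. 5.00% in the control group
lift, lower, safe = clears_margin_of_safety(528, 10_000, 500, 10_000)
print(f"lift: {lift:.2%}, lower bound: {lower:.2%}, clears margin: {safe}")
```

The particular statistics matter less than the habit: the margin is chosen before you look at the results, so luck and measurement error have to be substantial before you act on them.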

Let’s say you’ve checked for the narrative bias and have made sure to apply a margin of safety. What else is there to worry about?

3. Don’t Fall Prey to Insensitivity to Sample Size

The analyst concludes: “This campaign was successful. We’ve seen a 2.8% increase in conversions when compared to our control group each time we run a similar campaign.”

But how many such campaigns were run?

The final hack is to make sure you inspect the sample size. In a marketing context, the sample size is most often just the number of customers or prospects you’re reporting on. The trouble is that, many times, those raw numbers are hidden behind a statement like: “We saw a 30% increase in account opens.” If you see a statement like this, it’s a good idea to inspect the data more closely, because if the sample size is small enough, that summary statement may well mislead you.
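To see why, here is a small sketch in Python of how the very same headline, a 30% relative increase in opens, looks at two different sample sizes. All of the numbers are hypothetical, chosen only for illustration:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Rough ~95% (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# The same headline "30% increase in account opens" at two sample sizes
for n in (50, 5_000):
    ctrl_opens = round(0.20 * n)   # 20% open rate in the control group
    test_opens = round(0.26 * n)   # 26% in the campaign: a "30% increase"
    ctrl_lo, ctrl_hi = proportion_ci(ctrl_opens, n)
    test_lo, test_hi = proportion_ci(test_opens, n)
    print(f"n={n}: control ({ctrl_lo:.1%}, {ctrl_hi:.1%}), "
          f"test ({test_lo:.1%}, {test_hi:.1%}), "
          f"intervals overlap: {test_lo < ctrl_hi}")
```

With fifty recipients the two intervals overlap almost entirely, so the “30% increase” could easily be noise; with five thousand they separate cleanly. Overlapping intervals are a crude check, but they are enough to tell you when to start asking questions.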

Insensitivity to sample size is a well-documented phenomenon, and one of my favorite blogs, Farnam Street, describes it best:

“While we all understand it intuitively, it’s hard for us to realize in the moment of processing and decision making that larger samples are better representations than smaller samples.”

To use this hack, just ask the analyst: What’s the sample size? If it’s small, you might continue: Why didn’t we run this campaign longer? Are we really confident in these results?

That was just a quick tour of some mental models that can help you ask incisive questions of anyone who puts reports with numbers in front of you. The beauty of this structured approach is that you’re less likely to let simple mental shortcuts fool you into believing every number that comes across your desk, and you’re better equipped to ask the right questions when the quant analysts take over the room.