re-stoked: "On Bullshit" (11/21/14)

The other night, a few very well-meaning product people asked us the usual questions about talking to customers.

  • "What do you do about people trying to please you, tell you what you want to hear?"  

  • "How do you establish rapport but stay objective?" 

  • "How do you avoid leading the witness?"

  • "How do you deal with people telling you they do one thing but you're pretty sure they do something else?"

We would love to know when people first encounter these ideas. They seem to be truly viral: the first time you encounter focus groups, you somehow just know these problems exist. Just as you simply know that groupthink happens in every focus group, whether or not the conditions that create it are present (this is the difference between understanding a concept and merely using the term). 

The truth is, asking people to report their preferences and behavior is not always the best way to get the information you need to make decisions. Qualitative methods use conversations, projective techniques, and group exercises to get information. Quantitative methods use a combination of forced choice, scaled responses, and stated and derived importance [PDF]. Ethnographic approaches use observation and documentation. 
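
To make the stated-versus-derived distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the attribute names, the ratings, the stated-importance numbers); the point is only the mechanics: stated importance comes from asking people directly, derived importance from correlating attribute ratings with an overall outcome.

```python
import numpy as np

# Toy survey: 200 respondents rate two attributes and overall
# satisfaction on 1-7 scales. Data and attribute names are
# hypothetical, for illustration only.
rng = np.random.default_rng(0)
n = 200
price = rng.integers(1, 8, n).astype(float)
support = rng.integers(1, 8, n).astype(float)
overall = np.clip(0.2 * price + 0.7 * support + rng.normal(0, 1, n), 1, 7)

# Stated importance: the average answer to a direct
# "how important is X to you?" question (hypothetical numbers).
stated = {"price": 6.5, "support": 4.1}

# Derived importance: how strongly each attribute's rating
# actually correlates with overall satisfaction.
derived = {
    "price": np.corrcoef(price, overall)[0, 1],
    "support": np.corrcoef(support, overall)[0, 1],
}

for attr in stated:
    print(f"{attr}: stated {stated[attr]:.1f}, derived r = {derived[attr]:.2f}")
# Here respondents *say* price matters most, but satisfaction tracks
# support quality -- exactly the gap derived importance exposes.
```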

But too often, we see these tools operating in isolation. A customer panel that is never matched against the email subscription list. A customer database disconnected from social media followers. Dozens of points of connection, but no knowledge of how many nodes in the web a particular customer or prospect touches. 
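
Mechanically, closing those gaps is just linking records on a shared key. A minimal sketch, assuming hypothetical tables and email addresses as the key (real matching needs normalization and deduplication first):

```python
import pandas as pd

# Three silos that usually never meet (hypothetical data).
panel = pd.DataFrame({"email": ["a@x.com", "b@x.com"], "panelist": [1, 1]})
newsletter = pd.DataFrame({"email": ["b@x.com", "c@x.com"], "subscriber": [1, 1]})
social = pd.DataFrame({"email": ["a@x.com", "c@x.com"], "follower": [1, 1]})

# Outer-join the silos on the shared key so nobody drops out.
touches = (
    panel.merge(newsletter, on="email", how="outer")
         .merge(social, on="email", how="outer")
)

# Count how many nodes in the web each person actually touches.
touches["touchpoints"] = (
    touches[["panelist", "subscriber", "follower"]].notna().sum(axis=1)
)
print(touches[["email", "touchpoints"]])
```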

So we get lots of information. We begin to sift for strong correlations. We start noticing patterns, and we tune those patterns into signals. We rely on those signals to make decisions. But there is signal, and there is noise. 
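
One reason to distrust a "strong" correlation found by sifting: scan enough unrelated metrics and impressive-looking correlations show up by chance alone. A quick illustrative sketch, where every number is pure random noise:

```python
import numpy as np

# 100 completely unrelated metrics, 50 observations each:
# no real signal anywhere.
rng = np.random.default_rng(1)
metrics = rng.normal(size=(50, 100))

corr = np.corrcoef(metrics.T)   # pairwise correlations between metrics
np.fill_diagonal(corr, 0)       # ignore each metric's self-correlation
print(f"strongest 'pattern': r = {np.abs(corr).max():.2f}")
# Typically prints r around 0.5 -- a strong-looking correlation
# produced by chance alone, across ~5,000 metric pairs.
```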

We can't rely on people to remember, faithfully and accurately, what time of day they log in to a particular social platform, how much time they spend there, who they follow and like, and what content they share (that's what analytics, and actually going and looking at their real behavior, are for!). 
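
A few lines against the event logs answer the "what time do you log in?" question better than any recall-based survey ever will. A sketch, assuming a made-up log format:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event-log lines; the format is invented for illustration.
log_lines = [
    "2014-11-21T08:15:02 user=42 event=login",
    "2014-11-21T21:47:10 user=42 event=login",
    "2014-11-22T22:03:55 user=42 event=login",
]

# Tally the hour of day for each login event.
login_hours = Counter(
    datetime.fromisoformat(line.split()[0]).hour
    for line in log_lines
    if "event=login" in line
)
print(login_hours.most_common())
# The logs say mostly evening logins -- whatever the survey answer was.
```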

But for some reason, we are way more critical of ordinary people who benignly misreport their behavior and preferences than we are of marketers and publishers who routinely do exactly the same things, especially when it comes to reporting on ROI or effectiveness. 

They tell their customers what they want to hear. They get too cozy and lose their objectivity. They 'juke the stats' to make themselves look good. They tell you the data says one thing when it actually says something else.

So when we saw this piece about Twitter ROI and bullshit, we mentally high-fived Bob. We all need to be much more skeptical about reports of the effectiveness of a particular tactic or channel or promotion. We all need to question not only the conclusions of research, but also the methods for collecting the information and the methods for analyzing it. 

You might have guessed by now that skepticism is a theme with us. We'll keep talking about it. But for now, when you see reports about the return on investment in any tactic, platform, message or feature, ask yourself these three questions:

  • Says who (and what's in it for them)?

  • What factors did they take into account, and how did they calculate them? (See the sketch after this list.)

  • What *aren't* they taking into account?
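
To see why those three questions have teeth, run a toy ROI calculation twice: once with only the costs a report might count, and once with the ones it left out. All figures are hypothetical.

```python
# Toy arithmetic: the ROI answer depends entirely on which factors
# get counted. Every number here is made up.
revenue_attributed = 12_000   # says who? attributed how?
media_spend = 5_000

# ROI as usually reported: only the media spend counts as cost.
naive_roi = (revenue_attributed - media_spend) / media_spend
print(f"ROI counting media spend only: {naive_roi:.0%}")   # 140%

# Now add the factors the report left out.
staff_time = 3_000
tooling = 1_000
full_cost = media_spend + staff_time + tooling
full_roi = (revenue_attributed - full_cost) / full_cost
print(f"ROI counting all costs: {full_roi:.0%}")           # 33%
```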

And then look around for a salt lick, because you're going to need it.