What NOTA and the Three Stooges Reveal About Research
The events leading up to and following the Nov. 8 presidential election have been nothing short of an emotional tsunami, sending waves across the globe, from world politics to Facebook relationships, and even rippling into Thanksgiving dinner conversations.
From a bipartisan perspective, I’m concerned about how such a divided nation moves forward, regardless of which side of the fence you’re on. From a market research perspective, I’m fascinated by the “fenceless” – those more popularly referred to as the NOTA (none of the above) voters, protest voters or undervoters who opted to leave their selection for president blank.
2016 Increase in NOTA Voting
In fact, a surprising percentage of the population was so disenchanted with their options that they chose none of them (which doesn’t even take into consideration the possible write-ins that most states allow). Notably, George W. Bush and governors from Maryland, Massachusetts and Ohio all publicly acknowledged their decision to select “blank for president.” Michigan recorded close to 88,000 voters who cast a ballot but did not cast a vote for president. Some political analysts even believe that NOTA should be a nationwide ballot option, as discussed in this post.
I’m deliberately avoiding any commentary about the election itself, and what generated the surging sentiment against selecting the lesser of two evils. Instead, I’m interested in exploring the following conundrum: How in qualitative market research can we credibly and confidently anoint “Creative Concept X” or “Message Y” the winner when our respondents may be of a similar mindset to those hundreds of thousands of NOTA voters in the presidential election?
I fully acknowledge that we’re talking about two very different levels of decision-making – one being marcom materials and the other being the selection of the leader of the free world. Still, the idea should at least give you pause and trigger a mental look-back to ask yourself, “Is that concept, message or positioning statement we just recommended to our client really the winner based on respondent ‘votes’? Or should we question whether respondents, relenting to a forced choice, simply produced the least-bad winner?”
The Three Stooges Test
It was the election that inspired me to write this post, but it wasn’t the spark of the idea. In fact, I’ve had a philosophy for years that I call the Three Stooges Test. For me, it means taking time early in a project to humbly consider from which options we’re asking respondents to choose, and then later during analysis to pressure-test whether our recommendation merely represents the smartest idiot: selecting Moe (the “smartest stooge”) over Larry or Curly.
Here are a few methods to mitigate the artificiality of testing stimuli in a vacuum:
- Unaided upstreaming. Let’s say you are going to test a set of alternative creative concepts (e.g., medical journal ads) to support the launch of a new arthritis medication. All five concepts are focused on various types of hand imagery, based on the premise that it’s a patient’s hands that are most affected by or associated with the disease. As part of your discussion guide, consider exploring patient-physician conversations, or try to capture respondent beliefs about where the disease most profoundly impacts a patient’s life. This provides an unaided read on whether the premise upon which these concepts are based really does align with, or flies in the face of, what your respondents believe or have experienced.
- Analog behaviors. When testing messages or materials, we’re often seeking to gauge how these may motivate a behavior, such as likelihood to consider, buy, prescribe, etc. So we’re directed to ask respondents point-blank, “After seeing this concept (or reading these messages), what is your likelihood to ______ (behavior)?” This line of questioning may be necessary. But you might also consider presenting an analog situation and asking about actual behavior in response to that situation. For example, if you are asking physicians about their receptivity to a new medication to treat a disease, you might also ask them to share an experience that demonstrates receptivity (or not) to a new medication in a therapeutic class with similar market dynamics. Then the dialogue can reference and compare the analog to the current answers, such as: “Earlier you said that you like to ‘wait and see’ before trying something new, but when we talked about this drug, you said you’d try it right away … What’s different about this drug that makes you less inclined to wait?” Prior behavior can provide a window of predictability into future behavior, which you can use as a reality check on what respondents said compared with how they might actually behave.
- Best practice comparisons. Let’s take package testing as an example. Rather than focusing your questions exclusively on evaluating the packaging, consider a homework assignment that asks respondents to identify an example of the best packaging they have seen (related or completely unrelated to the product or industry of focus). Discuss the assignment during the interview before sharing packaging prototypes, to understand which aspects and attributes they like, why it stood out, etc., so that you have a baseline comparison with what you’re testing. Now you have a better anchor for your 1-7 rating of the packaging stimuli, using the respondents’ own definitions of what they like or what they consider standout.
I understand that it’s not always realistic to insert these hedge-type questions – sometimes you’re required to test stimuli in a particular way. But when you can, at least pause to think about the candidates you’re placing in front of your respondents, the absence of NOTA, and whether the research outcome is prepared to pass the Three Stooges Test.