The essence of qualitative research: “verstehen”

“But how many people did you talk to?” If you’ve ever done qualitative research, you’ve heard that question at least once. And the first time? You were flummoxed. In 3 short minutes, you can be assured that will never happen again.

Folks, qualitative research does not worry about numbers of people; it worries about deep understanding. Weber called this “verstehen.” (Come to think of it, most German people call it that too. Coincidence?). Geertz called it “thick description.” It’s about knowing — really knowing — the phenomenon you’re researching. You’ve lived, breathed, and slept this thing, this social occurrence, this…this…part of everyday life. You know it inside and out.

Courtesy of daniel_blue on Flickr

You know when it’s typical, when it’s unusual, what kinds of people do this thing, and how. You know why someone would never do this thing, and when they would but just lie about it. In short, you’ve transcended merely noticing this phenomenon. Now, you’re ready to give a 1-hour lecture on it, complete with illustrative examples.

Now if that thing is, say, kitchen use, then stand back! You’re not an Iron Chef, you are a Platinum Chef! You have spent hours inside kitchens of all shapes and sizes. You know how people love them, how they hate them, when they’re ashamed of them and when (very rarely) they destroy them. You can tell casual observers it is “simplistic” to think of how many people have gas stoves. No, you tell them, it’s not about how many people, it’s about WHY they have gas stoves! It’s about what happens when you finally buy a gas stove! It’s about….so much more than how many.

Welcome to the world of verstehen. When you have verstehen, you can perhaps count how many people have gas stoves. Sure, you could determine that more men than women have them. Maybe you could find out that more of them were built between 1970 and 1980 than between 1990 and 2000. But what good is that number? What does it even mean?

When you’re designing, you must know what the gas stove means. You must know what it means to transform your kitchen into one that can and should host a gas stove. You must know why a person would be “ashamed” to have a gas stove (are they ashamed of their new wealth? do they come from a long line of safety-conscious firefighters?). You must know more than “how many.”

So the next time someone asks you, “how many people did you talk to?”, you can answer them with an hour-long treatise about why that doesn’t matter. You can tell them you are going to blow them away with the thick description of what this thing means to people. You are going to tell them you know more about this thing than anyone who ever lived, and then, dammit, you’re gonna design something so fantastic, so amazing that they too will be screaming in German. You have verstehen!

See my discussion about sampling methods in qual and quant research for more insight into the reasons why “how many” is irrelevant in qualitative research.

Improving participation rates: research recruitment best practices

Those of you out there who’ve tried it know: recruiting research participants is HARD. Here are a few insights from the research to help you with better recruitment.

  1. Personalized contact with respondents, preceded by pre-contact and followed by persistent follow-up phone calls *: Don’t count on a form letter, email or random tweet to do the job. Capitalize on your personal relationship with that person. If you don’t have a personal relationship, ensure that you use the person’s name, and for God’s sake, spell it correctly!

    Once you’ve made initial contact, you are not done. Not by a long shot. Make sure you speak to the person (you can do this through IM or email if you’d like) to give them more information. They’re now interested. Don’t stop! One more step!

    Follow up 1 week after initial contact. Assuage any fears they may have. Answer any questions honestly. And above all, be available for more information.

  2. External researchers with social capital are best**: University-based researchers have been shown to have the best participation rates, but you don’t have to be a professor. Researcher Sister Marie Augusta Neal of Emmanuel College achieved a near-perfect response rate because of her close ties to the respondents and their communities. The lesson here is: if you hire a consultant, make sure they’re trusted. Even better if they personally know the people to be recruited.
  3. Monetary incentives have no effect, unless money is offered “no strings attached”***: Little-known fact: the best way to use a monetary incentive is to offer it, up front, with absolutely no strings attached. The “free” money makes people feel more indebted socially. Evidence of this effect can be found in the book Freakonomics. Researchers found that daycare centres that levied late penalties on tardy parents actually had more of a late-pickup problem than those that levied no fine. Why? Because the parents reduced their relationship to the daycare to a mere transaction. Use the “gift economy” approach and ensure a feeling of indebtedness. My personal favourite is a coupon for a single iTunes song at $.99. It is cheap but appears to have great value. Offer it, up front, and then ask for participation.

*  Cook, C., F. Heath, and R. Thompson. 2000. “A Meta-analysis of Response Rates in Web or Internet-based Surveys.” Educational and Psychological Measurement 60:821-836.

** Rogelberg, S., A. Luong, M. Sederburg, and D. Cristol. 2000. “Employee Attitude Surveys: Examining the Attitudes of Noncompliant Employees.” Journal of Applied Psychology 85:284-293.

*** Hager, M., S. Wilson, T. Pollak, and P. Rooney. 2003. “Response Rates for Mail Surveys of Nonprofit Organizations: A Review and Empirical Test.” Nonprofit and Voluntary Sector Quarterly 32:252-267.

Singer, E. 2006. “Introduction: Nonresponse Bias in Household Surveys.” Public Opinion Quarterly 70:637-645.

Qualitative versus quantitative research

Many designers are self-taught, intuitive consumers of research who can translate insights into great designs. But few are trained in the arcane art of research itself. For that reason, many designers don’t know the finer differences between qual and quant research and end up using their respective results inappropriately.

Quantitative research is based on the assumption that random events are predictable, and if you compare your results to pure random results, you can discern distinctive, meaningful patterns about the social world.

Random events are relatively MORE predictable if you have more of them. Imagine if you flipped a coin 20 times. How many heads would you get? Now if you flipped it 20,000 times? You’re more likely to get an even 50/50 split — which is what most people would predict. If you got a 65/35 split with 20 flips, okay, could happen. But with 20,000 flips? No way. Something else is going on.
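You can check that coin-flip intuition directly. Here’s a minimal sketch (my own illustration, not from any study cited here) using only Python’s standard library — an exact binomial tail for the 20-flip case, and a z-score for the 20,000-flip case:

```python
from math import comb, sqrt

def binom_tail(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, p=0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# 65% heads in 20 flips means 13+ heads: happens ~13% of the time by chance
p20 = binom_tail(20, 13)

# 65% heads in 20,000 flips means 13,000+ heads.
# z measures how many standard deviations that is above the expected 10,000.
z = (13000 - 10000) / sqrt(20000 * 0.5 * 0.5)  # ~42 sigma: effectively impossible
```

A 65/35 split really is unremarkable at 20 flips and astronomically unlikely at 20,000 — exactly the asymmetry the paragraph above describes.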

Translate that to design research by looking at gender, for example. Let’s say you have 20 people, 10 men and 10 women. 65% of the women choose one design, while only 35% of the men do. Is this a meaningful pattern? Impossible to say — you only have 20 people. Now if you had 200 people (100 men and 100 women) and 65 of the women chose one design while only 35 of the men did, chances are you have a meaningful pattern.
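The same point can be made with a quick significance check. The post doesn’t name a test, so this sketch assumes a pooled two-proportion z-test (one standard way to compare the two groups); the function name is mine:

```python
from math import sqrt, erfc

def two_prop_pvalue(p1, p2, n1, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

# 65% of women vs 35% of men, 10 per group: p ~ 0.18, not significant
small = two_prop_pvalue(0.65, 0.35, 10, 10)

# Same split, 100 per group: p ~ 0.00002, highly significant
large = two_prop_pvalue(0.65, 0.35, 100, 100)
```

With 10 per group the difference is well within chance; with 100 per group it is not — which is precisely why sample size matters in quantitative work.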

This is why sample size matters in quantitative research. But, little known fact, sample size is COMPLETELY IRRELEVANT in qualitative research. Why?

Qualitative research assumes that people have meaningful experiences that can be interpreted. Notice how there’s nothing in there about “prediction” or “randomness.” People have experiences. Researchers discern what these experiences signify. That’s it. Sample size is not only irrelevant, it actually gets in the way of important insight.

Consider the case study, for example. Few people would say case studies are useless. We can learn a great deal about a single design case, where it went wrong and where it went right. The problem comes when you try to predict future events based on this single event.

If you abandon the need for prediction, then sample size never matters. You can always derive insight about design problems from even a single case. Designers who attempt to predict the “success” of a single design change, for example, should test that change, repeatedly, with a probability sample.