Many designers are self-taught, intuitive consumers of research who can translate insights into great designs. But few are trained in the arcane art of research itself. For that reason, many designers don’t know the finer differences between qual and quant research and end up using their respective results inappropriately.

Quantitative research is based on the assumption that random events are predictable in aggregate: if you compare your results against what pure chance would produce, you can discern distinctive, meaningful patterns about the social world.

Random events are relatively MORE predictable if you have more of them. Imagine if you flipped a coin 20 times. How many heads would you get? Now if you flipped it 20,000 times? You’re far more likely to land close to an even 50/50 split, which is what most people would predict. If you got a 65/35 split with 20 flips, okay, could happen. But with 20,000 flips? No way. Something else is going on.
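The coin-flip intuition is easy to check for yourself. Here is a small simulation (my own illustration, not from the original post): the fraction of heads swings wildly over 20 flips but hugs 0.5 over 20,000.

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

def heads_fraction(n_flips: int) -> float:
    """Flip a fair coin n_flips times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Repeat each experiment 300 times and compare how much the
# heads fraction varies at each sample size.
for n in (20, 20_000):
    fractions = [heads_fraction(n) for _ in range(300)]
    print(f"{n:>6} flips: heads fraction ranged from "
          f"{min(fractions):.3f} to {max(fractions):.3f}")
```

With 20 flips, a 65/35 split shows up routinely; with 20,000 flips, the fraction rarely strays more than a percentage point from 50/50.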

Translate that to design research by looking at gender, for example. Let’s say you have 20 people, 10 men and 10 women. 65% of the women choose one design, while only 35% of the men do. Is this a meaningful pattern? Impossible to say: you only have 20 people. Now if you had 200 people (100 men and 100 women) and 65 of the 100 women chose one design while only 35 of the men did, chances are you have a meaningful pattern.
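The arithmetic behind that claim can be sketched with a standard two-proportion z-test (my choice of test, not the post’s). Since 65% of 10 women isn’t a whole person, the small-sample case below uses 7 of 10 versus 3 of 10 as the nearest whole-person split:

```python
import math

def two_proportion_p(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided tail probability under the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# 20 people: roughly the same split is entirely plausible by chance.
print(two_proportion_p(7, 10, 3, 10))      # p > 0.05, not significant
# 200 people: the same 65/35 split is very unlikely to be chance.
print(two_proportion_p(65, 100, 35, 100))  # p < 0.001, significant
```

Same split, ten times the sample: only the larger one lets you call the pattern meaningful.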

This is why sample size matters in quantitative research. But, little-known fact, sample size is COMPLETELY IRRELEVANT in qualitative research. Why?

Qualitative research assumes that people have meaningful experiences that can be interpreted. Notice how there’s nothing in there about “prediction” or “randomness.” People have experiences. Researchers discern what these experiences signify. That’s it. Sample size is not only irrelevant, it actually gets in the way of important insight.

Consider the case study, for example. Few people would say case studies are useless. We can learn a great deal about a single design case, where it went wrong and where it went right. The problem comes when you try to predict future events based on this single event.

If you abandon the need for prediction, then sample size never matters. You can always derive insight about design problems from even a single case. Designers who do want to predict the “success” of a single design change, for example, should test that change repeatedly with a probability sample.




great post! i have two points/questions that i’d love your thoughts on:

1. even with the researcher playing the important role of interpreter in qualitative research, the paradigm is still that the observations and associated insights can be generalized, right? with that in mind, it seems like sampling error is still a concern. for example, if i want to learn about how to design for truck drivers, and i do a ridealong with a single truck driver (a typical technique for product design research), i am assuming that what i observe is representative of truck drivers in general. needless to say, though, this is not necessarily true. therefore, it seems to me that sampling error is an issue, and that the way to avoid it is through a larger sample size.

2. if my first point is correct, and sampling error IS a concern, then what is an acceptable sample size? we can’t do level of confidence calculations, so what standards can we use to answer the question?

Hi Finn,

No, sampling “error” is never a concern with qualitative research because no probability sampling techniques are used.

You do not generalize the same way with qualitative research. Instead of measuring the difference between your sample’s results and the population’s, that is, instead of calculating a confidence interval, you use the qualitative notion of “trustworthiness.”

Trustworthiness was developed by Denzin and Lincoln (editors of the Handbook of Qualitative Research) to complement the qualitative approach. It suggests that “generalizability” is a function not of math but of “confirmability” (can your findings be confirmed by other researchers?), “credibility” (do your participants “buy” what you’ve said?), “transferability” (are there dominant themes that could be appropriate for other contexts?), and “dependability” (would other researchers likely find the same themes that you did?).

As you can see, the only way to establish trustworthiness is to go back to your participants and/or out to other researchers. It’s time-consuming (not as easy as just calculating a confidence interval, huh!).

But how do you know when your sample size is big enough? You look for what researchers call “saturation”: the point at which new interviews stop yielding new kinds of answers. This usually happens around the 5th or 6th interview.

So there you have it: more time, energy, effort, and MONEY! Man, qualitative research is such a rip-off! Ah…so if you want to generalize to the population? Do a quant survey AFTER you do your interviews. You can do some confidence calculations then.
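The quant follow-up described in that reply is straightforward. Here’s a minimal sketch of a confidence-interval calculation for a survey proportion, using the normal approximation (the specific numbers are hypothetical, for illustration only):

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Say 130 of 200 survey respondents prefer design A (65%):
low, high = proportion_ci(130, 200)
print(f"estimate 65%, 95% CI ({low:.1%}, {high:.1%})")
```

With 200 respondents, the margin of error on a 65% result is about ±6.6 percentage points; this is the kind of generalization to a population that interviews alone can’t give you.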

thanks for the reply! just to clarify, i understand that sampling error doesn’t apply to qualitative research in the literal sense of doing calculations, but assuming that generalizability is a concern, then a researcher needs some means of knowing that he or she has achieved some kind of external validity (and i understand that you could have a big philosophical conversation about that topic, too). it sounds like saturation is exactly what i was looking for, and i’ve definitely experienced that myself, usually around 5-6 interviews as you said.

thanks again!