Category Archives: surveys

Designers are from Venus, Six Sigmas are from Mars

DT has a great post over at Design Sojourn that discusses Six Sigma methodology and how it relates to design. He cites Tim Brown of IDEO, who argues that Six Sigma is essentially Newtonian, while design thinking is quantum. In his own design work, DT expressed doubts about using Six Sigma:

After studying the Six Sigma process, I point blank said: “There was no way any of my designers are going to be judged on the quality and success of a design based on how many sketches or iterations we did before we deliver it.”

Both Brown and DT cite Sara Beckman, who recently discussed the topic in the New York Times. Beckman reviews how Six Sigma focuses on incremental improvements, while design and design thinking focus on big changes. For those of you who aren’t familiar with Six Sigma, it’s a methodology pioneered by Motorola that aims to reduce defects to about 3.4 per million opportunities. The name refers to six standard deviations, represented by the Greek letter sigma: the goal is a process so consistent that the nearest specification limit sits six standard deviations from the mean, leaving defects only at the extreme tail of the normal curve.
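As an aside, here’s my own back-of-the-envelope sketch (not from Beckman’s article) of where the 3.4-per-million figure comes from, in Python, assuming SciPy is available. The conventional Six Sigma calculation allows for a 1.5-sigma long-term drift in the process mean, so defects are counted beyond 4.5 standard deviations rather than a full 6:

    from scipy.stats import norm

    def defects_per_million(k):
        """One-sided tail area beyond k standard deviations, per million opportunities."""
        return norm.sf(k) * 1_000_000

    # A literal six-sigma tail: about 0.001 defects per million.
    print(defects_per_million(6.0))   # ~0.00099

    # With the conventional 1.5-sigma long-term drift, defects are counted
    # beyond 6 - 1.5 = 4.5 sigma: about 3.4 defects per million.
    print(defects_per_million(4.5))   # ~3.4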

I argue that design is more complementary to the “interpretivist” paradigm of qualitative research, while Six Sigma is positivist. Interpretivists don’t believe the world is a static place. They see reality as being continuously created by you, me and other social actors. There is no such thing as “The Truth” in interpretivist approaches, just different versions of the truth. Typical interpretivist methods are ethnography, in-depth interviewing and discourse analysis. Positivist research, on the other hand, assumes that reality is static. Positivists believe that “The Truth” is out there to be discovered. Typical methods would include quantitative surveys.

Designers, therefore, should focus on interpretivist methods: uncovering different versions of the truth through observation and interviewing, as well as deep reflection on symbols and their meanings. Surveys and other quantitative methods are more Six Sigma in spirit, in that they can measure improvement over time. Designers ought to consider measuring improvement, but they are best served by starting with qualitative approaches.


Customers more satisfied when served by white males

In an interesting study, researchers at UBC have found that customers express higher satisfaction when they’re served by white men than by women or people of colour, even when their behaviour is exactly the same. Marketing professor Karl Aquino expressed surprise at the findings, as he told The Globe and Mail:

“We had thought there would be some bias going on in the sense of people who were males or whites would be rated more positively,” Mr. Aquino said.

“But we didn’t anticipate that for performing the same behaviours, the women and minorities would actually be rated lower,” he said of the study to be published in the Academy of Management Journal.

This study should not be surprising at all.

What this study demonstrates is what Raymond Breton calls the “symbolic order”: we unconsciously place white men at the top of our social hierarchy. We do this in multiple ways, including placing their art, culture and ideas at the top of an invisible ladder. Public Enemy sums it up nicely in “Fight the Power”:

Most of my heroes don’t appear on no stamps

We know that people’s sexist and racist reactions are largely unconscious. It is likely that these unconscious biases bleed easily into marketing research, especially when studies are quantitative in nature and therefore lack the thick description and deep probing offered by qualitative approaches.

This finding has wide-reaching implications. First, when companies use customer satisfaction surveys, they must be aware of the surveys’ inherent inaccuracy. We may believe we’re accurately measuring satisfaction, but this study shows that frequently we’re measuring no such thing. Second, such surveys are often used to award bonuses or even job security. As we know in academia, student evaluations are frequently what stands between a scholar and a full-time position. If customer satisfaction scores are driven by factors other than actual performance, then we may simply be rewarding membership in a dominant group without realizing it.

Read the entire story on The Globe. It’s worth a think.

Improving participation rates: research recruitment best practices

Those of you out there who’ve tried it know: recruiting research participants is HARD. Here are a few insights from the research to help you with better recruitment.

  1. Personalized contact with respondents, combined with pre-notification and aggressive follow-up phone calls*: Don’t count on a form letter, email or random tweet to do the job. Capitalize on your personal relationship with the respondent. If you don’t have a personal relationship, ensure that you use the person’s name, and for God’s sake, spell it correctly!

    Once you’ve made initial contact, you are not done. Not by a long shot. Make sure you speak to the person (you can do this through IM or email if you’d like) to give them more information. They’re now interested. Don’t stop! One more step!

    Follow up 1 week after initial contact. Assuage any fears they may have. Answer any questions honestly. And above all, be available for more information.

  2. External researchers with social capital are best**: University-based researchers have been shown to have the best participation rates, but you don’t have to be a professor. Researcher Sister Marie Augusta Neal of Emmanuel College achieved a near-perfect response rate because of her close ties to the respondents and their communities. The lesson here is: if you hire a consultant, make sure they’re trusted. Even better if they personally know the people to be recruited.
  3. Monetary incentives have no effect, unless money is offered “no strings attached”***: Little-known fact: the best way to use a monetary incentive is to offer it, up front, with absolutely no strings attached. The “free” money makes people feel more socially indebted. Evidence of this effect can be found in the book Freakonomics: daycare centres that levied late penalties on tardy parents actually had more of a late-pickup problem than those that levied no fine. Why? Because the fine reduced the parents’ relationship with the daycare to a mere transaction. Use the “gift economy” approach and ensure a feeling of indebtedness. My personal favourite is a coupon for a single iTunes song at $0.99: it’s cheap but appears to have great value. Offer it, up front, and then ask for participation.

*  Cook, C., F. Heath, and R. Thompson. 2000. “A Meta-analysis of Response Rates in Web or Internet-based Surveys.” Educational and Psychological Measurement 60:821-836.

** Rogelberg, S., A. Luong, M. Sederburg, and D. Cristol. 2000. “Employee Attitude Surveys: Examining the Attitudes of Noncompliant Employees.” Journal of Applied Psychology 85:284-293.

*** Hager, M., S. Wilson, T. Pollak, and P. Rooney. 2003. “Response Rates for Mail Surveys of Nonprofit Organizations: A Review and Empirical Test.” Nonprofit and Voluntary Sector Quarterly 32:252-267; Singer, E. 2006. “Introduction: Nonresponse Bias in Household Surveys.” Public Opinion Quarterly 70:637-645.

Online Surveys 101

Folks,

Below is a (very!) brief overview of online surveys. This slideshow, via SlideShare, is intended for people in the Web design industry: IAs, designers, media planners, strategists, usability researchers, and producers will learn whether they should, in fact, do a survey.