Schober: Customer Interviewing

By now, entrepreneurs should know how much they can gain by getting out and talking to potential and loyal customers. But what to ask and how to ask it? The last thing the entrepreneur wants to do is to return home with reams of dubious confessions, red herrings and false confirmations.

Michael Schober is Professor of Psychology at the New School for Social Research in New York City, where he also serves as Associate Provost for Research. He is a world expert in survey and interviewing techniques.

Dr. Schober’s latest articles:

Text and Voice Interviews on Smartphones

Understanding Interaction in Survey Interviews

Survey Interviews With Virtual Agents

Your Task
A. After reading through some of Dr. Schober’s insights, design an interview that you can take out to customers at this point to gather information that will help you in moving forward with your idea.

B. Interview some potential customers relevant to your business. How many people did you interview, how many did you survey, and what is their demographic make-up?

C. Transcribe and summarize their responses.

D. What are some observations you have about your interviewing process and how it went?

1. Entrepreneurs are encouraged to get out into the world early and talk to potential customers about their needs and their opinions on target products and services. What are some advantages and potential pitfalls you see in this sort of face-to-face interviewing?

Obviously finding out about potential customers’ needs and opinions is crucial. But how exactly to do this most effectively is not entirely straightforward, and it may vary for different segments and kinds of customers.

There are a number of basic facts and research findings from the world of survey research that can be informative about the complexities.

A. Figuring out who to talk with can make all the difference in what you learn. In the survey world this is the problem of figuring out the right “sample frame.” Relying on your own social networks may give you exactly your target audience for your idea, but chances are that people outside your usual range have something to tell you that you may really need to know.

B. The people who agree to talk with you may be different in important ways than the people who don’t agree to talk with you. In the survey world, this is called the problem of “nonresponse”—even if you selected a set of people to contact who (collectively) perfectly represent your target audience, you could end up seriously misunderstanding what you have learned if the nonrespondents’ opinions or reactions would have been different than those of the respondents. (This is why worrying about a low response rate isn’t actually the right worry; a low response rate is fine if the nonrespondents don’t differ from the respondents, and a high response rate can hide big problems if the nonrespondents do differ).
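
To make the nonresponse point concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration: a population of 10,000 potential customers, 30% of whom are genuinely interested. With a low but unskewed response rate, the estimate comes out near the truth; with a much higher but skewed response rate, it badly overstates interest.

```python
import random

random.seed(42)

# Invented population: 10,000 potential customers, 30% of whom are
# genuinely interested in the product (interest = 1, otherwise 0).
population = [1] * 3000 + [0] * 7000

def estimated_interest(population, respond_prob):
    """Estimate interest from whoever happens to respond, where each
    person's chance of responding may depend on their interest."""
    answers = [p for p in population if random.random() < respond_prob(p)]
    return sum(answers) / len(answers)

# 10% response rate, identical for interested and uninterested people:
# the estimate stays close to the true 30%.
print(estimated_interest(population, lambda p: 0.10))

# ~70% response rate, but interested people respond far more often:
# the estimate inflates toward 40%.
print(estimated_interest(population, lambda p: 0.95 if p else 0.60))
```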

C. Finding out what someone REALLY thinks about something they may not already have thought about is hard. A face-to-face or telephone interview is inherently time-pressured: there is a social demand for the respondent to give the kind of answer that can be given in the back-and-forth of conversation (even if the interviewer encourages the respondent to take his or her time). We have found that people are more likely to give estimated (rounded) answers and to “satisfice” in voice interviews compared with asynchronous text interviews (Schober et al., under review); they are also less likely to disclose sensitive information in a voice interview than in a text interview. Being aware of this dynamic in an interview is an important start.

D. How you word a question can make a huge difference in the answers you get in an interview (see Clark & Schober, 1991; Conrad, Schober & Schwarz, 2014). There are lots of reasons for this; it doesn’t just happen in formal interviews but is part of ordinary conversation (and skilled conversationalists and lawyerly manipulators implicitly know this). And this doesn’t happen only because respondents are trying to hide embarrassing facts or make themselves look good (although that can happen too); it happens because respondents who are trying to be helpful are working hard to understand the intentions behind what you are asking.

The survey methodology literature is full of disturbing and compelling examples of how seemingly minor differences in how a question is worded, and in the order in which questions are asked, can lead to surprising reversals in responses. Two simple examples: asking how fast witnesses to a car accident think the cars were going when they “crashed” leads to far higher speed estimates than asking how fast they were going when they “touched.” And asking how satisfied people are with their life in general after asking how many dates they go on per month leads to completely different life satisfaction ratings than when the questions are asked in the reverse order. These phenomena are explainable by paying attention to what has been called the “pragmatics of conversation” in studies of interviewing; ordinary features of conversational interaction can be magnified and distorted in interviews (Schober, 1999; Schober & Conrad, 2002).

If you really want to get at the truth in an interview, as opposed to learning only what you want to hear, knowing about these sorts of pitfalls so that you can minimize them is crucial. Even the most socially aware conversationalist may be surprised at the ways they can unintentionally influence responses in an interview setting.

E. People can understand ordinary words in an interview in surprisingly different ways than you intend. In one of our studies (described in Schober, 2005), we asked the seemingly unambiguous question (from a US health survey) “Have you smoked at least 100 cigarettes in your entire life?” When we probed later to find out what our respondents had been including when they gave their answers, we found an astonishing variety of interpretations: people included or excluded pipes, marijuana, cigars, clove cigarettes, chewing tobacco, cigarettes they hadn’t finished, cigarettes they hadn’t bought, cigarettes they hadn’t inhaled…and when we re-asked the question with a standard definition (include only tobacco cigarettes and count any cigarette you took even one puff of, whether you’d bought it or borrowed it), we found that 10% of our sample changed their answer from “yes” to “no” or from “no” to “yes.” No one ever imagined that they hadn’t interpreted the question the way that others did, or the way the researchers intended it; they were confident in their answers, and they assumed that their interpretation provided exactly what was needed.

The fact that so many respondents changed their answers shows that misaligned interpretations can actually affect your substantive conclusions. (For a survey measuring cancer risk, 10% response change is problematic; since this first question in the survey takes the interview down the path of answering questions as a smoker or non-smoker, this means that 10% of the respondents answered the wrong subsequent questions).
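
A back-of-the-envelope sketch of why that matters, in Python, with invented numbers rather than figures from the study: if 40% of 500 respondents are true smokers and 10% of everyone is misrouted by the gating question, a noticeable share of the “smoker” answers comes from people who aren’t smokers at all.

```python
# Invented numbers for illustration (not from the study): 500
# respondents, 40% of them true smokers, 10% of everyone misrouted
# by the gating question.
n, true_smoker_rate, misroute = 500, 0.40, 0.10

smokers = int(n * true_smoker_rate)        # 200 true smokers
routed_out = int(smokers * misroute)       # 20 smokers sent down the
                                           # non-smoker question path
routed_in = int((n - smokers) * misroute)  # 30 non-smokers sent down
                                           # the smoker question path

smoker_path = smokers - routed_out + routed_in  # 210 people
print(f"{routed_in} of the {smoker_path} people answering the smoker "
      f"questions don't belong there ({routed_in / smoker_path:.0%})")
```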

We have not yet found a domain of questioning or a term that you could use in a question that doesn’t have this property, for at least some percentage of the population. Even the most straightforward-seeming question—“Last week did you do any work for pay?” or “What is your gender?”—is not so straightforward for someone whose circumstances don’t line up with the presumptions underlying the question, for example someone who was paid in kind, or might have been paid without doing anything they counted as work, or someone who is transgender. For even the most innocuous and frequently-used questions, there will be at least a few people who won’t be able to answer the question simply.

There isn’t a perfect solution to this problem of “conceptual misalignment” between interviewers and respondents (Schober, 2005), but we have found that interviewers who are empowered to probe further to make sure that questions are interpreted as they intend can elicit answers more in line with what they need to know (see Schober, Conrad & Fricker, 2004). Of course, this requires being clear about your OWN definitions for what you are looking for; if you aren’t sure about how you are conceptualizing the domain you are exploring, then you won’t be able to work with respondents to make sure they understand what you are asking about. In any case, our findings clearly suggest that sticking with a standardized script—simply assuming that respondents are interpreting words in questions in the way you want them to—opens the door to all sorts of measurement error.
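
One lightweight way to follow that advice is to write down, next to each question, the definition you are working with and the probes you will allow yourself. Here is a hypothetical sketch in Python, reusing the smoking question and standard definition quoted above (the probe wordings are invented, not from the study):

```python
# Hypothetical interview-guide entry: the question travels with its
# working definition and the clarifying probes the interviewer may use.
smoking_question = {
    "text": "Have you smoked at least 100 cigarettes in your entire life?",
    "definition": (
        "Include only tobacco cigarettes, and count any cigarette you "
        "took even one puff of, whether you bought it or borrowed it."
    ),
    "probes": [  # invented wordings, not from the study
        "Just to check: what were you counting as a cigarette?",
        "Did that include any you didn't finish, or didn't buy yourself?",
    ],
}

print(smoking_question["definition"])
```

The format matters less than the discipline: if you can’t write the definition down before the interview, you won’t be able to probe for it during one.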

We have also found that being alert to respondents’ nonverbal and facial cues when they are answering can be a good hint that further probing may be needed. In the survey interviews we have studied, people are more likely to be disfluent—to um and uh—when they are giving an answer that reflects a more complicated circumstance that warrants further probing, and they are also more likely to avert their gaze from the interviewer (Schober, Conrad, Dijkstra, & Ongena, 2012). These cues aren’t perfect guarantees of trouble, but they do seem to be reliably associated with need for clarification.

2. How might the passion and dedication of the entrepreneur affect interview results?

Passion and dedication are great—probably essential. But it’s important to be aware that they can blind you so that you don’t see (or even look for) where your respondents may not be understanding questions in the way you intend. You should also be aware that a face-to-face interview isn’t guaranteed to produce the most accurate responding. It has become well known in the world of survey measurement that on average a greater percentage of people will disclose sensitive or embarrassing information to a computer than to a person, and sometimes more on the phone than face to face. The state of the art these days for making it more likely that people will provide sensitive information in face-to-face interviews is for the interviewer to hand her laptop to the respondent when the sensitive questions start (e.g., questions about sex partners and drug use), so that the respondent can listen to the embarrassing questions via headphones and type the answers themselves; this way neither the interviewer nor anyone else in the household knows what question is being asked or what the answer is. This use of “ACASI” (Audio Computer-Assisted Self-Interviewing) has been shown both to make respondents more comfortable and to increase disclosure, and greater disclosure has been shown to be closer to the truth in studies where that can be tested.

It seems that a major factor is the interviewer’s face: we found just as little disclosure to a not-very-realistic video avatar in an online interview as we did to the real human whose motion was captured in that avatar, and substantially greater disclosure on at least some questions to the audio-only computer version (Lind, Schober, Conrad, & Reichert, 2013). The situation of the confessional booth may actually be on to something about the best way to get people to disclose information they may not find easy to disclose.

3. Some entrepreneurs will bring early stage prototypes – or minimum viable products – to ground their discussions with potential customers. What do you think of this practice?

Bringing prototypes is a great idea; anything that can ground the conversation and make it easier for the respondent to understand what exactly is under discussion is worth doing. It’s also important to remember that an interview situation, even with a prototype, is still an “as-if” situation; what people believe and claim they would do if they had the real product in their real life may be different from what they do with the prototype in an interview. The more you can make your testing situation similar to the real-world situations that your customers will be experiencing, the better.

4. What are some suggestions you would offer to entrepreneurs around how to design interview questions and settings that will elicit genuine feedback from potential customers or partners?

Genuine feedback is hard to get. People can feel uncomfortable telling you to your face that they really don’t like a product, particularly if you are seen as the creator who clearly hopes to find validation and evidence for the viability of the product, and who might feel disappointed or hurt by negative feedback. Opening the door to honest responding is important. Even something as simple as saying “some people seem to like the product and others don’t—how about you?” makes clear that you are open to hearing bad news. Your reaction if you hear something negative or surprising will also be hugely informative to respondents and will help (or hinder) your getting accurate feedback. If you can be non-defensive and genuinely interested in what they have to say, and be respectful and comfortable in your own skin if they tell you something you hadn’t expected or didn’t really want to hear, you will be ahead of the game.

5. In order to reach a broader audience, some entrepreneurs will solicit feedback from potential customers through social networks or online surveys. What are some advantages and pitfalls in this Web approach?

Social networks can be quite convenient, but it’s important to consider the potential nonresponse bias inherent in them. You are on the most solid ground if the social network truly represents the population that you want your customers to come from. The farther removed the site members are from your customer base, the greater the likelihood that you could run into “coverage” problems. Coverage problems may not be fatal—you can still learn a lot from a non-representative sample—but they open the door to risk.

As for online surveys, they can be enormously convenient for reaching respondents that you otherwise wouldn’t—although be careful in thinking through (and finding out) whether the people who respond to your online survey invitation are different than your target audience. And online respondents may in some cases provide just as reliable information as face to face respondents—and in the case of sensitive topics, probably MORE reliable information.

But there is a major pitfall to be aware of: the ease with which online surveys can be constructed in packages like Qualtrics or SurveyMonkey has led to the deployment of a lot of poorly designed surveys that don’t embody the growing body of knowledge about the best design for internet surveys. Designing good survey questions (wording, ordering, response options, visual layout) is both an art and a science, and there is a reason that people get special training in how to do it right. It is worth checking with experts on what you might have missed BEFORE deploying your online survey, because once it is released it can be too late, and it is a shame (both for you and your respondents) if you end up with a lot of data that you can’t use.

6. Given what you have seen in your research, what are some other methods that entrepreneurs might consider to gather data from potential customers?

New streams of big data can be useful for understanding potential customers; what people are tweeting about can be a leading indicator of consumer sentiment, for example, if you have the right skills, tools, and access to do large-scale Twitter analyses. Even without particularly sophisticated tools you might be able to get useful information about trends from searching in Twitter, or from looking at what members of the public are searching for in Google. Particularly when it comes to studying new markets, new ideas might pop up in social media before they become mainstream, and this could give you a competitive edge. But just as designing a good survey isn’t trivial, analyzing social media is non-trivial, with many kinds of knowledge needed to do it well.
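
To give a flavor of what even the most naive version looks like, here is a minimal keyword-tally sketch in Python. The posts and keywords are invented, and it assumes you have already collected the text legitimately (e.g., through a platform’s official API); real sentiment analysis is far more involved than counting words.

```python
import re
from collections import Counter

# Invented sample posts; in practice these would come from a platform's
# official API or a licensed data export.
posts = [
    "anyone else tired of lugging a heavy water bottle everywhere?",
    "just tried a collapsible water bottle, total game changer",
    "my collapsible bottle leaked all over my bag",
]

def keyword_counts(texts, keywords):
    """Count how many posts mention each keyword at least once."""
    counts = Counter()
    for text in texts:
        words = set(re.findall(r"[a-z']+", text.lower()))
        counts.update(kw for kw in keywords if kw in words)
    return counts

print(keyword_counts(posts, ["collapsible", "leaked", "heavy"]))
```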


7. Have you found any generational differences in terms of how people respond to different interview/survey formats?

It may or may not be a generational issue, but people do seem to vary substantially in their preferences for responding to questions via one mode or another. Some people absolutely prefer talking with an interviewer face to face, and they are unlikely to agree to participate or provide anything useful if that isn’t what they end up with. Other people absolutely prefer a less personal mode, favoring asynchronous interaction and social distance from researchers. Depending on your research question, different modes of questioning may be appropriate, and it might also be appropriate to consider using multiple modes, and giving participants a choice of mode, in order to maximize their likelihood of agreeing to participate and providing the thoughtful, high-quality answers you are looking for.

8. Anything else that you would add?

If this litany of potential troubles and considerations in interviewing seems daunting, that’s good: it means you understand the gravity of what there is to be concerned about. I don’t mean to suggest that interviews—even rigidly standardized ones—never yield usable or important data; quite the contrary. But I do believe that there is more bias and risk of mismeasurement than is ordinarily understood. I also strongly believe that there isn’t a perfect or one-size-fits-all solution to the best way to get information in an interview, but instead that different research questions require different, and sometimes multiple, methods. Sensitizing yourself to the options and their pros and cons is the essential first step.

References:

PDFs downloadable from
http://mfschober.net/Site/PUBLICATIONS.html

Clark, H.H., & Schober, M.F. (1991). Asking questions and influencing answers. In J.M. Tanur (Ed.), Questions about questions: Inquiries into the cognitive bases of surveys (pp. 15-48). New York: Russell Sage Foundation.

Conrad, F.G., Schober, M.F., & Schwarz, N. (2014). Pragmatic processes in survey interviewing. In T. Holtgraves (Ed.), Oxford Handbook of Language and Social Psychology (pp. 420-437). New York: Oxford University Press.

Lind, L.H., Schober, M.F., Conrad, F.G., & Reichert, H. (2013). Why do survey respondents disclose more when computers ask the questions? Public Opinion Quarterly, 77(4), 888-935. DOI: 10.1093/poq/nft038

Schober, M.F. (1999). Making sense of questions: An interactional approach. In M.G. Sirken, D.J. Hermann, S. Schechter, N. Schwarz, J.M. Tanur, & R. Tourangeau (Eds.), Cognition and survey research (pp. 77-93). New York: John Wiley & Sons.

Schober, M.F. (2005). Conceptual alignment in conversation. In B.F. Malle & S.D. Hodges (Eds.), Other minds: How humans bridge the divide between self and others (pp. 239-252). New York: Guilford Press.

Schober, M.F., & Conrad, F.G. (2002). A collaborative view of standardized survey interviews. In D. Maynard, H. Houtkoop-Steenstra, N.C. Schaeffer, & J. van der Zouwen (Eds.), Standardization and tacit knowledge: Interaction and practice in the survey interview (pp. 67-94). New York: John Wiley & Sons.

Schober, M.F., Conrad, F.G., Dijkstra, W., & Ongena, Y.P. (2012). Disfluencies and gaze aversion in unreliable responses to survey questions. Journal of Official Statistics, 28(4), 555-582.

Schober, M.F., Conrad, F.G., Antoun, C., Ehlen, P., Fail, S., Hupp, A.L., Johnston, M., Vickers, L., Yan, H., & Zhang, C. (under review). Precision and disclosure in text and voice interviews on smartphones.

Schober, M.F., Conrad, F.G., & Fricker, S.S. (2004). Misunderstanding standardized language in research interviews. Applied Cognitive Psychology, 18, 169-188.

Schober, M.F., & Spiro, N. (2014). Jazz improvisers’ shared understanding: A case study. Frontiers in Psychology: Cognitive Science 5:808. DOI: 10.3389/fpsyg.2014.00808

A Groundbreaking New Study
This one focuses on the shared understanding between jazz improvisers:

http://journal.frontiersin.org/Journal/10.3389/fpsyg.2014.00808/full
