User Research

Interview: How We Avoid Bias in Customer Research

Authored by Rachel Kador

Gabrielle Henriksen (left) and Anna Steensig, UserTribe Customer Research Specialists.

Last week we shared our infographic of the most prevalent biases to be aware of when conducting in-house customer research. Of course, these biases don’t only exist for in-house researchers. We are constantly aware of these cognitive traps and working to minimize their effects in our research.

I recently sat down with two of UserTribe’s research specialists to learn how we address cognitive bias in our work.

Anna Steensig and Gabrielle Henriksen both work on UserTribe’s Design & Deliver team, which oversees all of our research methodology and analysis. They shared with me our best practices and primary considerations for avoiding cognitive bias in our customer sessions.

1. Anchoring Bias

Anchoring bias seems like something that’s really hard to avoid. No matter what, some information is presented before others, so how do we ensure that we’re not inadvertently skewing our results?

Anna: This one is actually pretty straightforward. Whenever we do a study that asks people to compare things — concepts, sketches, prototypes, whatever — we always split the groups and present them in different orders.

Gabby: Right, it’s important to introduce an element of randomization into the session so we can avoid this bias in our results altogether.
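The split-group approach Anna and Gabby describe can be sketched in a few lines of code. This is a hypothetical illustration, not UserTribe’s actual tooling: each participant is assigned an independently shuffled presentation order, so no single concept is systematically seen first.

```python
import random

def assign_presentation_orders(participants, concepts, seed=None):
    """Give each participant an independently randomized order of the
    concepts to review, so no concept consistently anchors the session.
    (Hypothetical helper for illustration only.)"""
    rng = random.Random(seed)  # seeded for reproducible assignments
    assignments = {}
    for participant in participants:
        order = concepts[:]    # copy so the original list is untouched
        rng.shuffle(order)     # independent random order per participant
        assignments[participant] = order
    return assignments

orders = assign_presentation_orders(
    ["p1", "p2", "p3"],
    ["sketch A", "sketch B", "prototype C"],
    seed=7,
)
```

With enough participants, randomization averages out any order effect; for small samples, researchers sometimes use a counterbalanced design (e.g., a Latin square) instead, so every position is covered equally.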

“Well, we have one general rule which we always try to follow. That is, start general and then get specific.”

— Anna Steensig, UserTribe Research Specialist

2. Question Order Bias

I guess this is similar to the anchoring bias, but how do we decide which order to present questions in to avoid bias?

Anna: Well, we have one general rule which we always try to follow. That is, start general and then get specific. This is especially true when you’re trying to understand what people themselves actually think is most important. If it’s really important, people will often bring it up themselves without any additional prompting.

Gabby: Exactly. And if you do need to get really specific, we recommend working with a trained moderator who will be able to dive deeper and pose those specific questions in a smart way. These people are great because they can probe deeper in a natural way, without making it sound too scripted.

3. Acquiescence Bias

This is a bias I think we’ve all experienced — we all (generally) want to be positive and encouraging, especially when talking about something that’s important to the other person. How do we make sure people are sharing their honest thoughts during our research?

Gabby: One thing we always emphasize in our studies is that we are neutral facilitators. By adding a third party into the research process, you automatically get that distance which helps to reduce this bias. It’s easier for a person to see, for example, that we have no stake in the outcome, we just want their honest feedback.

Anna: Right — we always say there are no right or wrong answers. We also try to set up a context in which people do not feel pressure to answer one way or another. Doing remote tests, where a person is sitting at home in their own space, is one way to avoid this bias.

Gabby: I think another important point is to really focus on what people do and not just on what they say. For example, in some navigation tests, people will say that a website was very easy to use, when in fact they couldn’t complete the task at hand. In that case we analyze their behavior, not their stated opinion.

Anna: On the same point — we try to avoid yes or no questions. If you ask, “Did you find this process confusing?” many people will say no even if they had trouble with it. But if you ask, “What about this process was confusing?” you’ll get a much more thoughtful reply.

“I think another important point is to really focus on what people do and not just on what they say.”

— Gabrielle Henriksen, UserTribe Research Specialist

4. Social Desirability Bias

I relate to this one so much. Even if I know that my answers are anonymous, I want to present myself in the best way possible. It’s easy to rationalize that thought process. How do we address that?

Anna: This is a really classic research problem, and you see it a lot in focus groups, for example. That’s one of the main reasons we primarily conduct individual interview-style sessions. The fewer people there are to “judge” you, the more open you’ll (hopefully) be.

Gabby: Exactly — we focus a lot on creating a comfortable space for our participants. That could mean a remote, unmoderated session in their own home where they’re comfortable. It could include a trained moderator who is able to create a good rapport with people.

Anna: Methodologically, whenever we’re doing a test of something that is a bit personal, like a medical condition, we always start with easy, low-stakes questions. For example, we’ll begin a session by asking questions like, “Describe your typical morning routine,” to get people comfortable with sharing some low-pressure details about their lives before going deeper.

5. Habituation

This one reminds me so much of tests I used to take in school. Those endless standardized tests that just make your eyes glaze over. How do we make sure people stay engaged all the way through the session?

Gabby: There are a couple of best practices we always follow. For unmoderated sessions, we have a pretty strict time limit of 30 minutes. We’ve just seen from experience that anything longer leads to people becoming less engaged with the topic.

Anna: Yes — especially with certain demographics, keeping it short and sweet is really important. I’m thinking of some tests we ran with children as young as 6 years old; you really need to tailor the study to meet their needs, and that usually means keeping it really short (having a parent as a moderator also helps).

Gabby: We also try to ask as many open-ended questions as possible. By letting people speak to their own interests, they’re naturally going to stay more engaged. Habituation sets in after some repetition, so keeping the questions few and varied also helps.

Anna: One thing to mention is that if you do see signs of habituation — for example, people stop responding fully, they gloss over questions or give perfunctory answers — that can actually be a really valuable result. This can show you where your product or service is losing interest for your customers. That in and of itself is useful knowledge, and we always include it in our results.

“Our focus is to go into the blind spots and be open to new discoveries.”

— Anna Steensig, UserTribe Research Specialist

6. Confirmation Bias

This is a big one! We hear about this one a lot in the news (“fake news”) and see it when people discuss conspiracy theories and things like that. But it’s also really hard to avoid, especially if you’re very well-versed in a subject. When we’re analyzing results, how do we make sure we’re avoiding confirmation bias?

Anna: This is a big reason why some clients hire us. They want to avoid the bias that can come with doing research in-house. Sometimes an in-house researcher will feel some pressure to produce specific results and that can, intentionally or not, influence how they interpret data. By outsourcing the analysis you can avoid that sort of bias.

Gabby: Yeah, we have had people come to us and ask for research to confirm a theory they have. That’s not how we work. Sherlock Holmes has this great quote where he says, “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” We take that to heart.

Anna: Our focus is to go into the blind spots and be open to new discoveries. Instead of trying to get people to validate a specific hypothesis, we use that as a launching point for more open-ended inquiry. It may not be intuitive, but we find this method not only answers the original question (“How do my customers feel about this thing?”) but also provides additional information we may never have thought to ask for.

7. Framing Bias

As a researcher, this one seems like it should be easy to avoid. It seems like something that would only be used by someone with ulterior motives to skew the results. But is there another part to this? How do we address it in our work?

Gabby: I think at its most straightforward, this can be used to manipulate research if, for example, you’re doing the research as a CYA tactic. Internal pressure in a company can lead to some misleading questions that skew the results. But in general, conscientious researchers are able to avoid this.

Anna: Yeah, here at UserTribe we have the advantage of being neutral research partners, so we are able to frame questions in a way that minimizes the risk of this bias. If we feel pressured to push the results in a certain direction, we can (figuratively) push the client out of the room and focus on following best practices.

“At the end of the day, it’s up to us to assess the quality. And that’s the top priority. So if we need to start over, that’s what we do.”

— Anna Steensig, UserTribe Research Specialist

8. Cultural Bias

This is a big one for us! As a company that works with lots of international projects, how can we account for cultural differences in our work?

Anna: Well, we have a lot of infrastructure in place to help us with this. First of all, our policy is always to use as many local people as possible. We design and manage each session in-house but work with local translators, recruiters, moderators, and anthropologists to make sure we have a good cultural understanding.

Gabby: Yeah, it’s important they’re actually local. It’s not just a question of translation. We work with lots of languages (English, Spanish, Chinese, etc.) that exist within many cultures and subcultures. It’s so beneficial to have someone from that specific culture to help you translate and parse what people are saying.

9. Sunk Cost Bias

As researchers, we also have deadlines and KPIs. It can be so tempting to take an incomplete or subpar piece of data or analysis and say “it’s done!” because we know how much extra effort is required to start over. How do we avoid the sunk cost bias (or fallacy) in our research?

Anna: First off, it’s important to us that we don’t compromise the quality of our research to meet a specific budget. We do often work on tight deadlines, but we have a very streamlined process that helps us make a good plan to meet them.

Gabby: We also have many checks built in along the way. You can’t just do quality control at the end and see what you got. We check and validate at every step, making sure our research design is performing well, making sure our participants qualify for the target group, making sure our technology is properly capturing all the inputs. The more QA you can do at the beginning, the lower the risk of doubling down on your losses by going further with a bad process.

Anna: At the end of the day, it’s up to us to assess the quality. And that’s the top priority. So if we need to start over, that’s what we do.

Final Thoughts

Any final thoughts on our research best practices or anything else you want to share?

Anna: I think it’s just worth mentioning that here at UserTribe we focus on qualitative research. In some ways that leaves us more susceptible to bias as qualitative research is inherently more subjective. We do it because we think qualitative data gives us much greater depth of insight and provides a much richer context for understanding people.

Gabby: Exactly. One thing that comes up a lot is the concept of “testing.” It’s actually not what we do. We never go into a session with a hypothesis or preconceived notion. Our job is to translate the business goals (which often include specific hypotheses) into explorative customer sessions. When that process works, we can deliver much more interesting findings, including unexpected discoveries.

Rachel Kador

Head of Marketing, Newsletter Curatress, Fast Typer