As researchers, we also have deadlines and KPIs. It can be so tempting to take an incomplete or subpar piece of data or analysis and say “it’s done!” because we know how much extra effort is required to start over. How do we avoid the sunk cost bias (or fallacy) in our research?
A: First of all, it’s important to us that we never compromise the quality of our research to meet a specific deadline or budget. We do often work on tight timelines, but we have a very streamlined process that helps us plan well enough to meet them.
G: We also have many checks built in along the way. You can’t just do quality control at the end and see what you get. We check and validate at every step: making sure our research design is performing well, making sure our participants qualify for the target group, and making sure our technology is properly capturing all the inputs. The more QA you do at the beginning, the lower the risk of doubling down on your losses by pushing further with a flawed process.
A: At the end of the day, it’s up to us to assess the quality. And that’s the top priority. So if we need to start over, that’s what we do.
G: Exactly. One thing that comes up a lot is the concept of “testing.” That’s actually not what we do. We never go into a session with a hypothesis or preconceived notion. Our job is to translate the business goals (which often include specific hypotheses) into exploratory customer sessions. When that process works, we can deliver much more interesting findings, including unexpected discoveries.
Any final thoughts on our research best practices or anything else you want to share?
A: I think it’s just worth mentioning that here at UserTribe we focus on qualitative research. In some ways that leaves us more susceptible to bias, as qualitative research is inherently more subjective. We do it anyway because we believe qualitative data gives us a much greater depth of insight and a much richer context for understanding people.