Basic pitfalls that sabotage our field research
Isn't research exciting? We help identify problems that may not be apparent on the surface yet exist, or are likely to arise in the future; at the same time, we help practitioners arrive at solutions. It is quite a fulfilling job. Nonetheless, have you ever wondered about the consequences of a perfectly well planned but ill-designed research study?
Recently, a practitioner wondered why their agricultural innovation initiative had failed to take off. We investigated and found that more than 50% of the initiative's target population were landless laborers, who did not care about sophisticated agricultural innovation when they did not have any land. "We had conducted a baseline survey and were aware that they were landless laborers, but we thought they would be ready if provided with an opportunity," the practitioner explained.
This is a perfect example of a million-dollar social development program gone wrong due to the most elementary mistake: not identifying and defining the true research problem. Asking "Will landless laborers adopt the agricultural innovation if they are provided with a loan to lease land?" or "Should we introduce this initiative now, when the majority of our target population are landless laborers?" would have saved this program.
Many times, we get so excited about the notion of conducting research that we forget to invest enough time in clearly defining the research problem. No matter how rigorous our sampling strategy, research tools, and data collection procedures are, all the effort, time, and money spent are wasted if the research problem is misunderstood or ill defined. As experts say, "A serious mistake is not made as a result of wrong answers but because of asking the wrong questions at the wrong time."
I'd like to emphasize the "at the wrong time" part. Take this example: you are asked to study the financial behavior of consumers. How misleading would the findings on saving and spending habits be if you conducted this research during a shopping season such as Diwali, Eid, or Christmas?
Another example: if you are to conduct research on rural women's knowledge of reproductive rights and the incidence of domestic violence, will you trust your findings if all the women were interviewed in front of their husbands? In general, rural women do not open up about domestic issues in front of their husbands.
In short, we need to be aware of respondents' lifestyles, cultures, and priorities when we decide on the timing and place of our research.
In one of our gender-based studies, we designed two modules, one for the male head of the household and one for the female head. While a male surveyor kept male respondents busy with questions about agriculture and other livelihood strategies, a female surveyor built rapport with female respondents and asked about their reproductive rights and other sensitive issues. The majority of women reported experiencing domestic violence, not informing their husbands about the precautions they were taking to plan or prevent pregnancy, and having no financial decision-making autonomy. We could never have obtained such sensitive data if we had not been a little innovative about our research approach.
Lastly, how surveyors ask questions makes a huge difference. On a hot summer afternoon, somewhere in a remote village, it is very likely that surveyors will choose a respondent who is readily available rather than one who was randomly selected. While we have a rigorous monitoring system to catch these errors, surveyors' questioning technique can still produce wrong data.
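One common safeguard against this kind of convenience sampling is to pre-assign randomly selected respondents before fieldwork begins, so the choice is never left to the surveyor in the field. A minimal sketch of such pre-assignment, assuming a household listing is available (the household IDs and sample size here are hypothetical):

```python
import random

# Hypothetical household roster; in practice this would come from a
# listing exercise or an existing sampling frame.
households = [f"HH-{i:03d}" for i in range(1, 201)]

# Draw a simple random sample of 30 households without replacement,
# so surveyors visit pre-assigned respondents instead of whoever
# happens to be readily available.
rng = random.Random(42)  # fixed seed so the assignment can be audited
sample = rng.sample(households, k=30)

print(sample)
```

Fixing the seed is a deliberate choice: it lets a supervisor regenerate the exact assignment list later and verify that surveyors interviewed the households they were given, not substitutes.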
For example, respondents are often unwilling to provide accurate information, particularly when you ask about money. As researchers, we must foresee such snags. Once, a client asked us to measure crop productivity in a certain study area. The questions were difficult, as we were asking respondents to recall the quantity of crops, the selling price, and the income they had made in the previous year. Despite going through our standard training session, each surveyor's different probing technique produced huge variation in the answers. The initial data was a disaster. We re-trained the surveyors on probing technique and monitored them again. Once all surveyors asked the questions the same way, the data turned out well.
In closing, while conducting research can be very exciting, it is also vulnerable to many pitfalls. As researchers, it is our responsibility to continuously improve the quality of our work. To do so, we must learn about and stay mindful of possible errors, and take timely action to prevent them from creeping into our studies.
Finally, no matter how many years you have spent designing research studies, going back to Research 101 and reading the basics will always do you good.