Before going into the field with the interviewers, every questionnaire should be tested on respondents similar to those who will be interviewed in the actual survey. On the basis of this trial run, flaws in the questionnaire can be identified and necessary adjustments made.
While it might sound Orwellian in the extreme, most pilot studies are done in a small room with a one-way mirror, so that one can watch the actual interview and see and hear exactly what goes on with regard to the questions, answers and body language.
This provides an excellent opportunity to check whether:
- instructions are clear to the interviewer;
- questions are clear to both the interviewer and respondent;
- questions read well;
- questions make the person uncomfortable or anxious;
- respondents tire and give less thought to their answers, and at what point this occurs;
- questions gather the types of answers in which we are really interested.
In testing questions on social identity, for example, IDASA found that the wording, 'What do you call yourself?', was generating some very personal labels, such as 'nice person' and 'open minded', which was not the information being sought. It was therefore necessary to provide some sort of context in which the person was to answer the question, and the question was changed to 'Thinking of all the groups in South Africa...' (a listing of various types of groups was provided) 'which group would you consider yourself belonging to first and foremost?'
Pilot testing usually leads one back to the drawing board to rewrite, or to delete, at least some questions. If it appears from the pilot test that the questionnaire is too long, the conceptual framework will need to be revisited. When push comes to shove, a decision will have to be made: either drop whole concept areas that are interesting but not vital to the survey, or drop one or two questions from each concept area.
Translation
In a multilingual society, it is imperative that every respondent is able to answer questions in the language in which they feel most comfortable.
While it is extremely time consuming, the best way to ensure that questionnaires mean what they are intended to mean after translation is to use the 'double blind' method. One set of linguists takes the original questionnaire and translates it into the desired languages. A separate set of linguists then takes those versions and translates them back into English (or the original language).
At that point, the re-translated version needs to be checked against the original language. Any differences need to be reconciled by finding another word in either the original or the translated language that better expresses the key concept. Note, however, that if the original language is changed, all the other translations need to be revised accordingly.
Yet good translators do not simply translate every word. They must know when respondents may be used to hearing key words in another language, such as 'parliament', so that the proper term can be used.
Interview Method
The method of interview is key. Responses to survey questions are not necessarily independent of the way in which they are obtained.
Phone
An increasingly popular method is to contact respondents by phone. Telephone surveys are often cheaper, because they do not require interviewers to travel all over the country or region to people's houses, and they may be quicker.
A major problem is presented, however, by actual rates of telephone ownership. Even in the United States, it was estimated as late as the early 1990s that five percent of the national population (and ten percent in some states) still did not have a telephone.
The big problem is that phone ownership is not random. It is highly associated with household income, and those who do not have phones are extremely likely to have social and political views that differ sharply from those who do. In developing countries, the low and highly uneven incidence of telephone ownership makes nationally representative telephone surveys impossible, since they significantly under-represent lower income households.
In the U.S., the Gallup organisation once estimated that pre-electoral phone surveys were five to six points more favourable to Republican candidates than to Democrats. A striking example in South Africa occurred in 1992, when the Human Sciences Research Council (HSRC) used a telephone survey to project that F.W. de Klerk had more support than Nelson Mandela in a hypothetical presidential race. The HSRC observed that it had weighted the final responses according to the correct racial proportions of the country. What it forgot, however, was that those Africans with telephones were not very typical of Africans in general.
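A purely hypothetical illustration (the figures are invented only for the sake of the arithmetic) shows why such weighting cannot repair the problem. Suppose Africans make up 75 percent of the population and that 60 percent of all Africans favour a given candidate, but that among the small minority of Africans who own telephones only 30 percent do. Weighting the telephone sample up to 75 percent African restores the correct racial proportions, yet the African figure being weighted up is still 30 percent rather than 60 percent. The weighted estimate therefore reproduces the views of Africans with telephones, not of Africans in general: weighting can correct the size of a group in the sample, but not the unrepresentativeness of those group members who happen to be reachable by phone.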
Telephone surveys also make it easier for people to opt out of the survey and, therefore, out of the sample. As noted below, it is important not to allow people to select themselves out of the sample. Moreover, telephone interviews rarely establish the rapport, possible in personal interviews, that is necessary to lead respondents into controversial topics.
People are simply more reluctant to express negative attitudes to unseen strangers. Because telephone samples are also less likely to include alienated people, they are generally biased toward less negative data. In some situations, however, the lack of direct contact may render phone surveys preferable.
Post/Mail
In a survey by mail, a questionnaire is simply sent to the respondent through the post and is then self-administered. Because mail surveys involve only minor labour and postage costs, they tend to be relatively cost effective.
On the other hand, mail surveys tend to get a very low response rate. A return rate of one-fourth to one-third is considered a major success, and even these rates require a good deal of work contacting recipients a second or third time to coax them to fill out and return the questionnaire. Other incentives may also be offered, such as prizes or chances to win prizes.
In some places, the effectiveness of postal surveys is also greatly hampered by both the quality of the postal service and high levels of illiteracy. For these and other reasons, postal surveys are usually only carried out on specific targeted populations, most often very educated audiences, top-level managers, or 'elite decision makers'.
Personal Interviews
Personal interviews can establish a relationship of trust with the respondent, allowing more sensitive questions and more in-depth answers. Because respondents can actually see their interviewer in face-to-face interviews, interviewer characteristics such as race and gender may influence the willingness of respondents to offer socially undesirable responses on issues of race and gender. Thus, in a place like South Africa, survey companies usually try to make sure that
interviewers are of the same race as their respondents. If the survey were about sex or gender issues, an effort would be made to ensure that interviewers were the same sex as their respondents.
Personal interviews tend to be expensive, however, because of labour and travel costs. In addition, personal interviews face many logistical hurdles not encountered with the other methods. Most simply, there is the challenge of getting past someone's front gate, let alone through their door. Especially in South Africa, 'bad' neighbourhoods, apartment building security systems and other problems, such as Rottweiler or Doberman Pinscher dogs, often prevent surveyors from contacting everyone in the sample.
Again, if unchecked, 'non-response' (those who cannot be reached at home or who refuse to be interviewed) can play havoc with a sample because these people almost always differ from the general population in attributes and attitudes.