Abstract

Background One of the main instruments for primary data collection in social, health and epidemiological research is the survey questionnaire. Modes of data collection by questionnaire differ in several ways, including the method of contacting respondents, the medium of delivering the questionnaire to respondents, and the administration of the questions. These differences are likely to have different effects on the quality of the data collected.

Methods This paper is based on a narrative review of systematic and non-systematic searches of the literature on the effects of mode of questionnaire administration on data quality.

Results Within different modes of questionnaire administration, many potential biasing influences on the responses obtained were documented. These influences were greatest between different types of mode (e.g. self-administered versus interview modes) rather than within modes. It can be difficult to separate the effects of the different influences operating at different levels.

Conclusions The biasing effects of mode of questionnaire administration have important implications for research methodology, for the validity of the results of research, and for the soundness of public policy developed from evidence based on questionnaire research. All users of questionnaires need to be aware of these potential effects on their data.

Introduction

One of the main instruments for primary data collection in social, health and epidemiological research is the survey questionnaire. Modes of data collection by questionnaire vary in the method of contacting respondents, in the vehicle for delivering the questionnaire, and in the way in which the questions are administered. These variations can have different effects on the accuracy and quality of the data obtained. Few publications on the topic are based on systematic, interdisciplinary searches of the literature. It is therefore timely to attempt a focused search and report on the literature pertinent to population health, patients and health care.

Modes of collecting questionnaire data

Surveys can be conducted in different settings, and different questionnaire methods involve paper and pencil, electronic (computer mouse/keyboard) or telephone keypad vehicles for collecting the data. These modes differ in several ways, at different levels (see Box 1).

Box 1.

Differences between modes of data collection by questionnaire.

Modes differ:
1. In the method of initially contacting the respondents: ranging from an initial letter of introduction giving notice of the study, to personal face-to-face, email or telephone contact at the same time as the provision, or administration, of the questionnaire, depending on its mode of administration.
2. In the medium of delivering the questionnaire to respondents: in person, by telephone, by post or electronically (e.g. by email).
3. In the actual administration of the questions.
Interview modes:
(a) verbal – interviewers, face-to-face, using traditional paper and pencil interview (PAPI) questionnaires;
(b) verbal – interviewers, face-to-face, using computer-assisted personal interviewing (CAPI) methods via personal computer (PC) or laptop PC questionnaire programmes;
(c) verbal – interviewers, by telephone, using paper or electronic computer-assisted questionnaires (CATI).
Self-administration modes:
(a) traditional paper and pencil self-administration ‘interview’ methods (PAPI) by post, or handing paper questionnaires to people in person and asking them to complete them by hand and return them to the researcher;
(b) computer-assisted (electronic) self-administration ‘interview’ methods (CASI) by automated electronic, including audio computer-assisted, methods;
(c) self-administration via interactive voice response methods with automated computer-assisted telephone programmes (ACASI).

Thus, within any mode of administration, there are many potential influences on responses. These differences, operating at different levels, can make it difficult to separate the effect of each on the quality of the data obtained. Even minor changes in question wording, question order or response format can produce differences in the type of response obtained,1,2 and these effects can be difficult to distinguish from the other effects of different modes of administration.3

In addition to the traditional range of paper and pencil methods, there is increasing academic interest in tools commonly used in market and public opinion research, e.g. the use of computer assisted face-to-face interviewing, computer-assisted telephone interviewing, self-administered computer methods, audio computer-assisted self-administered interviewing, and interactive voice response telephone methods (see Box 2). The range of electronic methods has been described by Tourangeau et al.4

Box 2.

Electronic and telephone techniques.

1. Computer-assisted face-to-face or telephone interviewing: the questionnaire is in the form of a computer programme that displays the items to the interviewer on a computer screen (on a laptop in the case of face-to-face interviewing). The interviewer reads the prompted questions to the respondent, and enters their responses by pressing the appropriate keys on the keyboard.
2. Self-administered computer methods: respondents themselves complete questionnaires electronically using a computer mouse/keyboard, either at a computer within an organisational facility (e.g. psychology laboratory, clinical setting), or at home on a personal computer connected to the internet or via email (either a questionnaire is emailed to a respondent to complete, or the respondent connects to a website where a programme administers the questionnaire).
3. Audio computer-assisted self-administered interviewing (ACASI): the programme displays questions on the computer screen and instructs the respondents audibly (e.g. via headphones).
4. Interactive voice response methods with automated telephone lines: a computer plays a recording of the questions over the telephone and respondents indicate their responses by pressing indicated handset keys.

Burden on respondents

There are at least four steps involved in answering questionnaires, each making cognitive demands on respondents: comprehension of the question, recall of the requested information from memory, evaluation of the link between the retrieved information and the question, and communication of the response.3,5 It is likely, then, that the channel of questionnaire presentation (e.g. auditory, oral, visual) affects the cognitive burden placed on respondents, especially the demand for literacy in the case of visual self-administration methods. As each mode inevitably imposes different cognitive requirements on respondents, and varies in the amount of privacy and anonymity afforded to them, these differences can affect the process of responding to questions, and thus the quality of the data.

Probably the least burdensome method is the personal, face-to-face interview (auditory channel) as this only requires the respondent to speak the same language in which the questions are asked, and to have basic verbal and listening skills. No reading skills are required (unless written materials for the respondent are contained within the interview). A friendly, motivating interviewer can increase response and item response rates, maintain motivation with longer questionnaires, probe for responses, clarify ambiguous questions, help respondents with enlarged show cards of response choice options, use memory jogging techniques for aiding recall of events and behaviour, and control the order of the questions. Interviewers can also be trained to follow complex question routing and skipping instructions.

In contrast, telephone interviews make greater auditory demands and may be burdensome to respondents. They require basic verbal and language skills, and also access to, or ownership of, a telephone. The most burdensome modes are likely to be visual and written methods of self-administration, as these demand that respondents are literate in the language(s) of the survey, do not have visual impairments, and have the dexterity (e.g. of wrist and fingers) to complete the questions. They require respondents to tick a box on a paper questionnaire, press an electronic key, or press a key on a touch-tone telephone handset to indicate their response: respondents must read or listen, recognize numbers and write or key answers accurately. Respondents also need the ability to follow routing instructions.

Electronic methods require access to a computer and/or internet facilities (whether via an interviewer with a laptop personal computer (PC), or facilities in an office, clinic or home setting), basic computer literacy, and familiarity with numbers and keyboards. They have literacy requirements in relation to reading the questions and replying, and can also have auditory requirements (ACASI). However, electronic programmes can be designed to require only a limited range of keys, and they have been documented, in individual experiments and in reviews, to achieve more complete item response rates than the various paper and pencil methods.6

Methods

Methods of systematic electronic literature search

This paper is based on a narrative review of the literature on the effects of mode of questionnaire administration on data quality. The uniqueness of this review is that the search of the literature was conducted in a systematic fashion, including databases representing different disciplines (medicine, psychology and sociology), and supplemented with a wide non-systematic search of the grey literature. The databases searched were Medline, PsycINFO and Social Science Citation Index (via the Web of Knowledge). All years were searched. The key terms were limited to ‘mode of administration’ and ‘data collection bias’ in order to focus the search. Both generic and health-specific literature were included (e.g. generic methodology papers and papers on the effects of mode of administering health status questionnaires, such as the Short Form-36).1 Abstracts of non-English references were included where translations were available online. Citations that included references were scanned, and new references were included where they met the review criteria. Studies were not included or excluded on grounds of methodological quality, as is the norm in systematic reviews of clinical trials. Although experimental techniques (e.g. randomization of respondents between modes of administration) are the ideal methods, they are not frequently employed in methodological research. Full papers were obtained for each English-language abstract included. Excluded papers related to standard comparisons of the validity of different questionnaire lengths (e.g. short versus long forms of health status questionnaires) and of different quality of life measures; test–retest reliability checks; inter-interviewer bias; different question wording; or irrelevant papers (e.g. mode of drug administration).

Methods of grey, non-electronic literature search

In view of the difficulty of identifying relevant articles by the standard methods of systematic reviews, and the cross-disciplinary nature of the topic, a search of the grey and non-electronic literature was conducted by: contacting published authors in the field; making personal contact with the main government, independent and university social survey organizations in the UK, Europe and the USA; and checking the cited references in the literature elicited by the systematic review. Textbooks known to the investigator, with relevant sections on mode of administration, were also included. This exercise found that most of the key words used in electronic databases were inadequate to permit easy identification of specific papers on mode of administration effects. One key methodological experiment, comparing electronic with paper and pen self-completion methods embedded in interview situations, used the key words ‘epidemiology, knowledge, attitude, practice studies, sexual behaviour, surveillance’, none of which indicated mode of administration biases.6

Results

The numbers of unduplicated publications identified and included in the review are displayed in Table 1. The search yielded 73 out of 382 (19 per cent) potential papers that were relevant for review.

Table 1

Sources searched by number of elicited references

Database*                                      Total papers found    Total papers included after removal of duplicates and exclusions
Medline                                        333                   30
PsycINFO and Social Science Citation Index†    3                     2
Other sources/citations†                       46                    41
Total                                          382                   73
*Medline searched via the National Library of Medicine; Social Science Citation Index searched via ISI Web of Knowledge; PsycINFO searched via the American Psychological Association.

†PsycINFO and Social Science Citation Index totals relate to new references not already listed by Medline. Other source totals relate to new references not already listed by Medline or PsycINFO.
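As a check on the arithmetic, the Table 1 counts can be totalled directly. This is a minimal illustrative sketch; the variable names are this sketch's own, not from the source.

```python
# Recomputing the Table 1 totals from the per-database counts.
found = {"Medline": 333, "PsycINFO/SSCI": 3, "Other sources/citations": 46}
included = {"Medline": 30, "PsycINFO/SSCI": 2, "Other sources/citations": 41}

total_found = sum(found.values())          # 382 potential papers
total_included = sum(included.values())    # 73 papers retained
inclusion_rate = 100 * total_included / total_found  # roughly 19 per cent
```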

Much of the broader research on the effect of mode of questionnaire administration on data quality is from the USA, and is published in the internal reports of large survey organizations, in specialized books and in social science journals. Most of this research consisted of comparisons of separate samples, rather than controlled, experimental designs. The most notable systematically conducted reviews include those by Tourangeau et al.4 and by De Leeuw and van der Zouwen,8 based on meta-analyses of data quality in telephone and face-to-face surveys.

Effects of mode of administration on data quality

Data quality is a vague concept, and there is no agreed gold standard. It could be defined in terms of survey response rates, questionnaire item response rates, the accuracy of responses, the absence of bias, and the completeness of the information obtained from respondents. De Leeuw and van der Zouwen8 listed five main indicators of data quality, in addition to a sixth dependent variable, the response rate (the number of completed interviews divided by the total number of eligible sample units) (see Box 3).

Box 3.

De Leeuw and van der Zouwen’s8 five main indicators for data quality.

1. Accuracy, or validity, of response (checks can be made against a ‘true value’ only when validating information is available).
2. Absence of social desirability bias – when the answer is determined by socially acceptable norms rather than the true situation (inversely proportional to the number of socially desirable answers to a particular question).
3. Item response (inversely proportional to the number of missing responses in the questionnaire).
4. Amount of information (indicated by the number of responses to open-ended questions or checklists).
5. Similarity of response distributions obtained by different modes of questionnaire administration (indicated by a lack of significant differences between the estimates obtained using different modes of administration).
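The response-rate variable and the more quantitative of these indicators are simple ratios. The sketch below, with made-up figures, is purely illustrative; the function names are assumptions of this sketch, not taken from De Leeuw and van der Zouwen.

```python
def response_rate(completed_interviews, eligible_units):
    """The sixth dependent variable: completed interviews
    divided by the total number of eligible sample units."""
    return completed_interviews / eligible_units

def item_response(items_answered, items_total):
    """Indicator 3: higher when fewer responses are missing."""
    return items_answered / items_total

# Invented example: 730 completed interviews from 1,000 eligible
# units, and 48 of 50 items answered on a returned questionnaire.
rr = response_rate(730, 1000)   # 0.73
ir = item_response(48, 50)      # 0.96
```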

More broadly, these sources of error in surveys can be summarized as: (i) non-measurement errors – survey design, sampling frame and sampling, non-response and item non-response; and (ii) measurement errors – the survey instrument and data collection processes. Mode of questionnaire administration affects elements of both sources. The potential effects derived from the literature cited in this paper are summarized in Table 2, although this summary should be interpreted with caution, given that the literature is not always consistent, is rarely based on experimental designs, and often varies by topic. A summary of the reviewed literature is presented next under the headings of non-measurement and measurement error.

Table 2

Summary of potential biases by mode of questionnaire administration

Potential for                                        Face-to-face interviews    Telephone interviews    Self-administered, postal    Self-administered, programmed, electronic
More complete population coverage for sampling       High                       Low                     High                         Low
Cognitive burden                                     Low                        Great                   Great                        Great
Survey response                                      High                       Low                     Medium–low                   Low
Item response/completion of questionnaire            High                       Low                     Low                          Low
Question order effects                               Low                        Low                     High                         Low
Response-choice order effects                        Moderate                   High                    High                         High
Recall bias                                          Low                        Low                     High                         High
Social desirability bias                             High                       High                    Low                          Low
‘Yes-saying’ bias                                    High                       High                    Low                          Low
Interviewer bias                                     High                       High                    n/a                          n/a
Length of verbal response/amount of information      High                       Low                     n/a                          n/a
Willingness to disclose sensitive information        Low                        Low                     High                         High
Respondents’ preferences for mode of administration  High                       Low                     Low                          Moderate

Non-measurement error

Coverage for sampling

All methods require up-to-date sampling or address lists prior to sampling, to ensure completeness of coverage of the target population, and each carries its own form of bias. Interview and postal surveys (outside market research) rely on complete and up-to-date lists of addresses (e.g. from post or zip code files); these may be incomplete and out of date, thus leading to sample bias. Telephone surveys generally use random digit dialling because of the number of people who choose not to be listed in telephone directories. These are also subject to sample selection bias, because inclusion in the sample is limited to those who have telephones and who answer them directly (i.e. rather than using answerphones and call-screening facilities). Electronic surveys are limited to those with access to a personal computer, email and the internet, creating immediate sample bias, accentuated by the lack of complete lists of private email addresses. These standard issues are dealt with in most methodology textbooks.9

Response rates

Methodological research comparing different methods of administering questionnaires has focused on the issues of response rates and item response, and on methods of increasing these, in particular in relation to postal surveys.10 The main reasons for non-response include respondents’ unwillingness to participate in the study, the investigator’s inability to contact respondents (e.g. people who are out during home or telephone interview surveys) and communication barriers (e.g. literacy barriers, sensory impairments). Non-response is thus likely to be influenced by mode of questionnaire administration (e.g. people who have difficulty writing are unlikely to respond to a postal survey, and hence differ in this respect from respondents).

The lower the response rate to a study, the greater the danger that respondents differ from non-respondents in their characteristics, which reduces the precision (reliability) of the survey’s population estimates, introduces study bias, and weakens the external validity (generalizability) of the survey results. Even if the quality of the data obtained is good, a biased sample is of little value in making population estimates that represent the target population. While there is much literature reporting differences between respondents and non-respondents in individual studies, overall reviews and systematic reviews of the health-related literature on these differences are inconsistent or inconclusive.11,12

Face-to-face interview surveys have long been assumed to achieve higher response rates than postal and other types of surveys. A friendly interviewer on the doorstep can be motivating, and it may be easier to convince respondents of the legitimacy of the study in person, which should increase response rates. But narrative literature reviews, up until the 1990s, indicated that postal questionnaires, with at least two reminders and sponsorship by an official or respected body, could achieve response rates equal to interviews (e.g. 85 per cent) on appropriate topics.11,13,14 However, response rates across all methods have declined over the past decade (this is evident when comparing the response rates over time to the British General Household Surveys).15 A number of studies have since reported differences between response rates to telephone, face-to-face and postal questionnaires.16 Sykes and Collins17 compared three large British population surveys conducted during the 1980s (two studies of social attitudes, and a study of lifestyle), all of which directly compared telephone and face-to-face interviewing. The telephone method achieved substantially lower response rates than face-to-face methods, and yielded more childless couples than couples with children. Research has also shown higher response rates for self-administered postal questionnaires compared with self-administered questionnaires handed out to people (e.g. in hospitals) to complete and return.18

Different modes of administration in different sequences can also produce different response rates. In a survey of hospital in-patient experiences by Harris et al.,19 patients were randomized to receive a postal questionnaire with telephone interview follow-up of non-respondents, or a telephone interview with postal follow-up of non-respondents. The authors reported that the telephone-first method had a higher response rate, and higher item response. At an individual study level, the response rates to different modes of questionnaire administration are likely to vary by topic, particularly for complex issues. There is less information on electronic methods, which also suffer from an inability to cover target populations adequately (see earlier).

Item response rates

Higher item non-response is generally reported in postal surveys compared with face-to-face interviews,20,21 and in postal questionnaires in comparison with telephone interviews.19 Consistent with this, De Leeuw and van der Zouwen’s8 meta-analysis found higher item response in face-to-face interviews, which they explained by interviewers motivating people to respond, and by the interviewer’s greater control over the situation (e.g. ensuring that questions are answered and not missed, and recording responses correctly). And while electronic and automated telephone programmes can prevent respondents moving to the next question before they have completed the previous question (by requiring a recognized field, such as a valid number, to be keyed in), they cannot prevent premature termination.
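The forced-completion behaviour described above (advancing only once a recognized field has been keyed in) can be sketched in a few lines. This is an illustrative sketch only; the function name, prompt and choice set below are hypothetical and not taken from any cited survey system:

```python
def ask_likert(prompt, choices=("1", "2", "3", "4", "5"), reader=input):
    """Block progression to the next question until a recognized field
    (a valid number) is keyed in, as CASI and automated telephone
    programmes typically do."""
    while True:
        answer = reader(prompt).strip()
        if answer in choices:
            return int(answer)
        # Unrecognized entry: re-prompt rather than moving on.

# Example: the respondent keys two invalid entries, then a valid one.
feed = iter(["maybe", "9", "4"])
answer = ask_likert("How satisfied are you (1-5)? ", reader=lambda _: next(feed))
```

Note that, as the text observes, such a loop cannot prevent premature termination: a respondent who abandons the session simply never supplies a valid entry.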

All methods can be prone to premature termination (the respondent does not wish to continue with the interview and terminates it), but this is probably less likely in the presence of a motivating interviewer. Telephone interview surveys, and electronic and automated voice-activated telephone surveys, can be particularly prone to this22 as people become bored with lengthy schedules and discontinue them. However, electronic methods have been reported to have superior item response rates to traditional pen and paper methods. Johnson et al.,6 in a methodological experiment within a British survey on sexual behaviour, reported that while there were no differences in response between computer-assisted self-completion questionnaires (CASI) and pen and paper self-completion questionnaires (PAPI) (both administered within face-to-face interview situations), CASI led to fewer missing item responses than PAPI. Tourangeau et al.’s7 review supports the conclusion that electronic methods have more complete item response rates than the various paper and pencil methods.

Measurement error

The influence of the social setting and bias

While standardized surveys aim to include well-designed and tested questions that have the same meaning to all participants, cultural, social and language differences can all influence interpretations. The actual data collection process also involves an interaction between the questionnaire, the respondent and, in the case of face-to-face and telephone interviews, the interviewer. The nature of this interaction inevitably varies between interview settings and self-administration situations, as well as by individual interviewer. Differences in data by mode of questionnaire administration can therefore be hypothesized simply given differences in the structure of the social setting. In theory, these differences can also be compounded by differences in the setting in which the questionnaire is administered, although not all investigations find differences in response by setting.23 Tourangeau et al.4 included the pace of the interview, as well as the control over the order of questions,1,2,9,24,25 as potential mediators of the effects of different modes of data collection on data quality (see Box 4).

Box 4.

Pace of interview and control over order of question effects.

1. Computer and self-administration methods possibly slow the pace of the interview down, giving respondents more time to think, thus yielding more accurate responses.
2. Interviews have the advantage that, while administering the questions, interviewers can control the order of the questions (face-to-face or telephone).
3. Electronic and automated telephone methods are usually programmed to prevent the respondent jumping ahead and previewing questions by not moving on to the next question until the previous questions have been answered, although they sometimes permit respondents to go back to check and correct answers.
4. Self-administered paper questionnaires have no interviewer/programme control over the order of the questions. Respondents can preview the questions and adjust their answers as a result.

Social desirability bias

Interviews, therefore, involve social interaction with another person, which can lead to respondents taking social norms into account when responding. The result is social desirability bias (the desire of respondents to present themselves in the best possible light): over-reporting of desirable behaviours and under-reporting of undesirable ones, which can confound associations between variables by attenuating, inflating or moderating relationships. Methods exist to reduce this problem, including assurances of confidentiality and anonymity (although these can raise respondents’ suspicions about the sensitivity of the topic, and thereby reduce response);26 checking responses against known ‘facts’; indirect questioning; correlation of responses with social desirability measures; and randomized response techniques. With the latter technique, respondents are presented with pairs of questions, one of which is sensitive and one of which is not, and they are asked to answer one question of the pair at random (e.g. by tossing a coin); the interviewer cannot see the outcome. In order to infer the response to the sensitive question, the population distribution of responses to the non-sensitive question needs to be known from other sources, a truly random procedure must be used in the selection of the question, and a large population must be sampled.27
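The arithmetic behind the randomized response technique can be made concrete. If a coin directs each respondent, with probability p, to the sensitive question (true prevalence π) and otherwise to a non-sensitive question with known prevalence q, the observed proportion of ‘yes’ answers is λ = pπ + (1 − p)q, so π can be recovered as (λ − (1 − p)q)/p. The sketch below is illustrative only; the function names and parameter values are assumptions, not drawn from the cited studies:

```python
import random

def estimate_sensitive_proportion(yes_rate, p_sensitive, known_prevalence):
    """Invert lambda = p*pi + (1 - p)*q to recover pi, the prevalence of
    the sensitive attribute; q must be known from external sources."""
    return (yes_rate - (1 - p_sensitive) * known_prevalence) / p_sensitive

def simulate_survey(n, true_pi, p_sensitive, known_prevalence, seed=42):
    """Each respondent privately tosses a coin: heads -> answer the
    sensitive question, tails -> answer the non-sensitive one.
    The researcher sees only the pooled 'yes' rate."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < p_sensitive:
            yes += rng.random() < true_pi           # sensitive question
        else:
            yes += rng.random() < known_prevalence  # non-sensitive question
    return yes / n

observed = simulate_survey(n=100_000, true_pi=0.30,
                           p_sensitive=0.5, known_prevalence=0.5)
estimate = estimate_sensitive_proportion(observed, 0.5, 0.5)
```

With a large simulated sample the estimate converges on the true 30 per cent, which illustrates why the text stresses that a large population must be sampled: the coin toss adds noise that only averages out at scale.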

Respondents have been shown to give more positive and socially desirable responses in interview (face-to-face and telephone) surveys than in self-administration (e.g. postal) surveys,7,28,29 even when attempts have been made to take order and contextual effects into account.1,30 Thus estimates of positive health status, health-related quality of life, and engagement in desirable behaviours and activities appear likely to be exaggerated when based on face-to-face or telephone interviews rather than self-administration methods, and socially undesirable behaviours (e.g. smoking) are likely to be under-estimated.1,30–36 Sensitive health problems can also be under-reported in face-to-face or telephone interviews, compared with self-administered questionnaires (e.g. prostatic disease, urinary symptoms).37,38 But some research has reported no differences in type of response between interviewer and self-completion modes,39,40–44 including a small number of studies in which respondents were randomized between various interview and self-administered questionnaire modes.44–47 Williams et al.,48 in a survey of HIV risk behaviour questions, used a cross-over design with random assignment of respondents to audio-CASI or face-to-face interview, and to different sequences of these at follow-up. They reported no effect of mode of administration on type of response.

Overall, the literature indicates that there are fewer differences in type of response between modes of the same broad type than between different types of mode,8,17,48,49 which has implications for the common research practice of using mixed-mode designs (e.g. conducting face-to-face interviews at baseline, with postal or telephone interviews at follow-up). Other non-randomized studies have reported no differences in type of response between telephone and face-to-face interview surveys on voting and alcohol intake.21,50 Furthermore, De Leeuw and van der Zouwen’s8 meta-analysis found fairly small differences between telephone and face-to-face interviews in social desirability bias, amount of information provided, and similarity of response. But, inconsistent with these results, Evans et al.51 alternated the order of face-to-face and telephone interviews among consecutive general practice attenders, and reported that people aged over 60 were more likely to score as anxious or depressed in telephone than in face-to-face interviews.

In relation to self-administration modes, Fouladi et al.52 compared responses to self-administered paper questionnaires and self-administered electronic (‘on-line’) questionnaires on emotional functioning, and detected only small differences in response patterns. Similarly, no differences have been reported in response type between different self-administration modes (email, postal, handing out questionnaires to patients) in surveys of patient satisfaction and assessments of health services.18,53 Again, the research is not all consistent,54 which is not unexpected given the predominance of non-experimental designs. Differences in type of response could be attributable to selection and response bias, or even to real differences between samples.

Acquiescence bias

Any excess of positive responses in interview, compared with self-administration situations, could also be due to increased ‘yes-saying’, or acquiescence bias: a culturally based tendency to agree with others because it is perceived to be ‘easier’ to agree than disagree. Although ‘yes-saying’ can also be evident on self-administered questionnaires, it appears to be less pronounced than in interviews. A different type of reporting bias due to ‘ease’ exists in self-administered questionnaires, based on evidence from classic psychological experiments: respondents tend to check the response choices nearest to the question. Commonly used attempts to control for this include switching the order of responses periodically in a measurement scale (e.g. from ‘Strongly agree – Strongly disagree’ to ‘Strongly disagree – Strongly agree’).9
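The order-switching control described above is usually paired with reverse-coding at scoring time: items presented with the response scale reversed are mapped back so that uniform acquiescent agreement cannot push the total in one direction. A minimal sketch, with hypothetical item names and a 1–5 scale assumed:

```python
def score_scale(responses, reversed_items, low=1, high=5):
    """Sum a balanced scale, reverse-coding the items whose response
    order was switched (e.g. 'Strongly disagree' listed first) so that
    uniform 'yes-saying' no longer inflates the total."""
    total = 0
    for item, score in responses.items():
        # Reverse-coded item: 1 <-> 5, 2 <-> 4, 3 unchanged.
        total += (high + low) - score if item in reversed_items else score
    return total

# An acquiescent respondent who answers 'Strongly agree' (5) throughout:
responses = {"q1": 5, "q2": 5, "q3": 5}
total = score_scale(responses, reversed_items={"q2"})
```

After reverse-coding, q2 contributes 1 rather than 5, so blanket agreement scores 11 instead of the maximum 15, making the acquiescent pattern detectable.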

While classic experiments have supported evidence of a cultural tendency towards ‘yes-saying’, research has yielded inconsistent results in relation to differences between mode of questionnaire administration and ‘yes-saying’. Nicholaas et al.,21 again in their experiment of telephone and face-to-face interview methods, found little evidence of acquiescence bias operating more with one mode of administration than another. A meta-analysis on this issue by De Leeuw55 failed to detect any differences between postal, face-to-face and telephone interviews, although De Leeuw and van der Zouwen’s8 earlier meta-analysis found some evidence of more acquiescence (agreement) bias, more evasiveness (‘don’t know’ replies or no reply), and more extreme responses in telephone interviews compared with face-to-face interviews.

Interviewer bias

The presence of an interviewer can be distracting to respondents. If an excess of positive or socially desirable responses in interview situations is found, this could be due to interviewer bias (due to characteristics of the interviewer,9,24 or because people may be reluctant to reveal beliefs unlikely to be endorsed by the interviewer; see earlier, social desirability bias). In addition, interviewers can vary in their ability to appear or sound neutral, to listen, to probe adequately, to use techniques to aid recall and to record responses. Careful training and monitoring of interviewers can minimize this, and analysis of responses by interviewer (where more than one is used) can check for interviewer bias. Self-administration modes obviously avoid this source of bias.

Response-choice order: primacy and recency effects

Bias from the order of the response choices, by mode of administration, has also been reported. Ideally, respondents listen to the question during interviews, keep all response options in mind, consider them and decide which is most important/applicable to themselves. This is a demanding task, especially under time pressure, and in telephone interviews where there can be no visual prompting (question-answer sequences rarely exceed 1 minute).56

Research has indicated that when questions are presented visually (as in self-administered questionnaires) respondents are likely to begin with the first response option presented (primacy effects). In situations where respondents can consider response choices at their own pace, they may choose an early response alternative without much thought if it is agreeable, and then move on to the next question. In contrast, when questions are presented orally (as in face-to-face or telephone interviews) respondents tend to begin by processing the final response option offered (while they still recall it) and, where agreeable, they select that option (recency effects). This tendency leads to response order effects.28,57–59 There is also some evidence that the selection of more extreme response choices (i.e. ‘very’ as opposed to ‘fairly’ satisfied) is more common in telephone than face-to-face interviews.21,59

Although the evidence on primacy and recency effects is mixed,16 a meta-analysis of response order experiments in large scale surveys showed that recency effects were most pronounced among people aged 65 and over,56,61 possibly due to cognitive changes that can occur with normal ageing.62 Other research has also reported that older and younger people are affected differently by features of the research instrument.63

Recall effects

Respondents need to recall information in order to answer a question, and this again can vary by mode of administration. The self-administered paper questionnaire is visual and the interview setting is aural, so different cognitive processes immediately operate. In interviews, an interviewer can re-direct respondents back to the topic of relevance if they stray off it, probe to elicit relevant information, and utilize a range of techniques to prompt memory,9 but in self-administration settings this is not possible, although respondents themselves can consult diaries or other information sources to aid recall. In self-administration settings the respondents alone have to judge whether the information they have recalled is relevant to the question, and how best to respond. However, research on respondent recall by mode of administration is inconsistent,64,65 partly reflecting bias from non-experimental designs.65

Length of verbal response

Telephone interviews tend to be shorter than face-to-face interviews: the mean number of communication acts is smaller and utterances are shorter.8 Telephone methods have also been shown to yield more truncated responses, or no responses at all, to open-ended questions than face-to-face interviews.17

De Leeuw and van der Zouwen’s8 meta-analysis confirmed that the amount of information given by respondents to open questions and checklists was greater in face-to-face than telephone interviews. This might be due to the need for feedback and pressure to maintain verbal flow in telephone conversations, resulting in respondents answering questions more quickly and interviewers allowing less time between questions. More sentences, of greater length, may also be more acceptable in the face-to-face interview situation.8

Sensitive information

Self-administration of questionnaires can increase respondents’ willingness to disclose sensitive information, compared with face-to-face or telephone interviews. The greater anonymity offered by postal surveys, with their weak social presence, for example, has been reported to lead to higher item response, and more accurate reporting on sensitive topics such as health and behaviour.4,66–68 There are, of course, established techniques for eliciting sensitive information and for checking for bias, which overlap with the methods of minimizing social desirability bias (see earlier). It is preferable to minimize the potential for such bias at the outset. Sensitive questions are best asked by more impersonal, self-administration methods as they lead to higher levels of reporting.69 The highest levels of reporting are for audio-computer self-administered questionnaires and computer-assisted self-completion interviews (which allow respondents to key their responses to questions directly into an interviewer’s laptop computer, permitting more confidentiality).4,6,28,70,71

The literature is inconclusive about differences within modes (e.g. between interview modes, such as telephone and face-to-face methods) and the validity of sensitive information. Some studies provide evidence that telephone methods are more successful at eliciting frank responses about sensitive behaviour than face-to-face interviews.17,60 Others have reported that respondents are more willing to answer questions about race and income72 or illicit drug use73 in personal face-to-face rather than in conventional telephone interviews. The reviews of the topic by Smith74 and Tourangeau et al.4 produced conflicting results.

Respondent preferences

While the biasing effects of mode preferences are unknown, studies examining respondents’ preferences report that people prefer face-to-face interviews to telephone interviews,21,54 and electronic self-completion questionnaires to paper self-completion questionnaires.75 Electronic self-completion modes, within face-to-face interview situations (i.e. on the interviewer’s laptop computer), have also been reported to be acceptable to people aged 65 and over.76 (And see Social desirability bias earlier.)

Discussion

This paper was based on a narrative review of systematic and non-systematic searches of the literature on the effects of mode of questionnaire administration on data quality. The review showed that, while some studies were inconsistent or inconclusive, different modes of questionnaire administration are likely to affect the quality of the data collected. The effects appeared to be more marked between interview and self-administration modes than within modes. It was often difficult to isolate the effects of the method of contact from the other differences between the data collection methods, and this limits knowledge about how the mode of administration alters the process of answering questions.4 A main problem with the literature identified is that most of the studies did not use experimental or randomization methods to allocate the different questionnaire modes to participants. Thus, differences detected in responses between different modes could be due to differences between settings, or to genuine differences between respondents.

Explanatory models that have been proposed for the effects of data collection mode on data quality include the impersonality of the method of contacting respondents and in delivering and administering the questionnaire (highest in self-administration methods), the cognitive burden imposed on respondents by the method (greatest in self-administration methods), the legitimacy of the study (it is more difficult to establish the credentials of some surveys in telephone contacts), the control over the questionnaire (interviewers have the highest level of control over the order and completion of the questions), the rapport between respondent and interviewer (lowest in self-administration settings where there is no visual contact), and communication style (an interviewer can be motivating and clarify questions, but can lead to interviewer and social desirability bias)4,8 (see Box 5). These models need to be fully tested in experimental designs.

Box 5.

Explanations for effects of data collection mode on data quality.

1. The impersonality of the method: while an interviewer can enhance motivation to respond as well as response accuracy, self-administration methods increase perceived impersonality and may encourage reporting of some sensitive information (e.g. in interview situations there may be fear of embarrassment with the exposure of weakness, failure or deviancy in the presence of a stranger).
2. The cognitive burden imposed by the method: different methods make different demands on respondents, including reading, listening, following instructions, recognising numbers and keying in responses. Face-to-face interviews make the least demands, while the lack of visual support in telephone interviews may make the task more complex.
3. The legitimacy of the study: this may be more difficult to establish with some methods than others. In contrast to paper or electronic communications, telephone contacts limit the possibilities for establishing the survey’s credentials. This might affect initial response and the importance respondents place on the study, and their motivation to answer questions accurately.
4. The control over the questionnaire varies: interviewers have the highest level of control over question order; in self-administered paper questionnaire modes there is little control over question order.
5. Rapport: rapport between respondent and interviewer may be more difficult to establish in self-administration and telephone interview than in face-to-face modes, as there is no visual contact. This can adversely affect motivation to respond, although social desirability bias may be reduced as there is less need for approval.
6. Communication style: more information may be obtained in interview than other situations, as interviewers can motivate respondents, pause to encourage (more, longer) responses, and clarify questions; interviewers can also lead to interviewer and social desirability bias.
1. The impersonality of the method: while an interviewer can enhance motivation to respond as well as response accuracy, self-administration methods increase perceived impersonality and may encourage reporting of some sensitive information (e.g. in interview situations there may be fear of embarrassment with the exposure of weakness, failure or deviancy in the presence of a stranger).
2. The cognitive burden imposed by the method: different methods make different demands on respondents, including reading, listening, following instructions, recognising numbers and keying in responses. Face-to-face interviews make the least demands, while the lack of visual support in telephone interviews may make the task more complex.
3. The legitimacy of the study: this may be more difficult to establish with some methods than others. In contrast to paper or electronic communications, telephone contacts limit the possibilities for establishing the survey’s credentials. This might affect initial response and the importance respondents place on the study, and their motivation to answer questions accurately.
4. The control over the questionnaire varies: interviewers have the highest level of control over question order; in self-administered paper questionnaire modes there is little control over question order.
5. Rapport: rapport between respondent and interviewer may be more difficult to establish in self-administration and telephone interview than in face-to-face modes, as there is no visual contact. This can adversely affect motivation to respond, although social desirability bias may be reduced as there is less need for approval.
6. Communication style: more information may be obtained in interview than other situations, as interviewers can motivate respondents, pause to encourage (more, longer) responses, and clarify questions; interviewers can also lead to interviewer and social desirability bias.
Box 5.

Explanations for effects of data collection mode on data quality.

1. The impersonality of the method: while an interviewer can enhance motivation to respond as well as response accuracy, self-administration methods increase perceived impersonality and may encourage reporting of some sensitive information (e.g. in interview situations there may be fear of embarrassment with the exposure of weakness, failure or deviancy in the presence of a stranger).
2. The cognitive burden imposed by the method: different methods make different demands on respondents, including reading, listening, following instructions, recognising numbers and keying in responses. Face-to-face interviews make the least demands, while the lack of visual support in telephone interviews may make the task more complex.
3. The legitimacy of the study: this may be more difficult to establish with some methods than others. In contrast to paper or electronic communications, telephone contacts limit the possibilities for establishing the survey’s credentials. This might affect initial response and the importance respondents place on the study, and their motivation to answer questions accurately.
4. The control over the questionnaire varies: interviewers have the highest level of control over question order; in self-administered paper questionnaire modes there is little control over question order.
5. Rapport: rapport between respondent and interviewer may be more difficult to establish in self-administration and telephone interview than in face-to-face modes, as there is no visual contact. This can adversely affect motivation to respond, although social desirability bias may be reduced as there is less need for approval.
6. Communication style: more information may be obtained in interview than other situations, as interviewers can motivate respondents, pause to encourage (more, longer) responses, and clarify questions; interviewers can also lead to interviewer and social desirability bias.
1. The impersonality of the method: while an interviewer can enhance motivation to respond as well as response accuracy, self-administration methods increase perceived impersonality and may encourage reporting of some sensitive information (e.g. in interview situations there may be fear of embarrassment with the exposure of weakness, failure or deviancy in the presence of a stranger).
2. The cognitive burden imposed by the method: different methods make different demands on respondents, including reading, listening, following instructions, recognising numbers and keying in responses. Face-to-face interviews make the least demands, while the lack of visual support in telephone interviews may make the task more complex.
3. The legitimacy of the study: this may be more difficult to establish with some methods than others. In contrast to paper or electronic communications, telephone contacts limit the possibilities for establishing the survey’s credentials. This might affect initial response and the importance respondents place on the study, and their motivation to answer questions accurately.
4. The control over the questionnaire varies: interviewers have the highest level of control over question order; in self-administered paper questionnaire modes there is little control over question order.
5. Rapport: rapport between respondent and interviewer may be more difficult to establish in self-administration and telephone interview than in face-to-face modes, as there is no visual contact. This can adversely affect motivation to respond, although social desirability bias may be reduced as there is less need for approval.
6. Communication style: interviews may elicit more information than other modes, as interviewers can motivate respondents, pause to encourage fuller and longer responses, and clarify questions; however, the interviewer's presence can also introduce interviewer and social desirability bias.

This topic has important implications for research methodology, the validity of the results of research, the soundness of evidence-based public policy, and for clinicians who wish to screen their patients using questionnaires.51 All users of questionnaires need to be aware of the potential effects of mode of administration on their data. The validity of the common research practice of comparing data from dual modes of administration within studies is also called into question. While calls have been made for greater attention to questionnaire development in epidemiology,77 there has been less focus on the wide range of different biases, at different levels, stemming from the various modes of administering questionnaires.

References

1. Bowling A, Bond M, Jenkinson C, Lamping D. Short Form-36 (SF-36) Health Survey Questionnaire: which normative data should be used? Comparisons between the norms provided by the Omnibus Survey in Britain, the Health Survey for England and the Oxford Health and Lifestyle Survey. J Pub Health Med 1999;21:255–270.
2. Schuman H, Presser S. Questions and answers in attitude surveys. New York: Academic Press, 1981.
3. Bajekal M, Harries T, Breman R, Woodfield K. Review of disability estimates and definitions. A study carried out on behalf of the Department for Work and Pensions, in-house report no. 128. London: Department for Work and Pensions, 2004.
4. Tourangeau R, Rips LJ, Rasinski K. The psychology of survey response. Chapter 10: Mode of data collection. Cambridge: Cambridge University Press, 2000;289–312.
5. Tourangeau R. Cognitive sciences and survey methods. In: Jabine T, Straf M, Tanur J, Tourangeau R, eds. Cognitive aspects of survey methodology: building a bridge between disciplines. Washington DC: National Academy Press, 1984.
6. Johnson AM, Copas AJ, Erens B, et al. Effect of computer-assisted self-interviews on reporting of sexual HIV risk behaviours in a general population sample: a methodological experiment. AIDS 2001;15:111–115.
7. Tourangeau R, Rasinski K, Jobe JB, et al. Sources of error in a survey of sexual behaviour. J Official Stats 1997;13:341–365.
8. De Leeuw ED, van der Zouwen J. Data quality in telephone and face-to-face surveys: a comparative meta-analysis. In: Groves RM, Biemer PP, Lyberg LE, et al., eds. Telephone survey methodology. New York: John Wiley and Sons, 1988.
9. Bowling A. Research methods in health: investigating health and health services. Buckingham: Open University Press, 2001.
10. Roberts P, Roberts I, DiGuiseppi C, et al. Methods to influence response to postal questionnaires (Cochrane Methodology Group). Cochrane Library, issue 1. Chichester: John Wiley & Sons Ltd, 2004.

11. Cartwright A. Health surveys in practice and in potential. London: King's Fund, 1983.
12. McColl E, Jacoby A, Thomas L, et al. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Tech Assess 2001;5.
13. Scott C. Research on mail surveys. Social Survey Papers, Methodological Series no. 100. London: Office for Population Censuses and Surveys, 1961.
14. Austin P, Lewis D, Scammell B. A review of postal surveys. OPCS Methodology Series Paper. London: Office for Population Censuses and Surveys, 1977.
15. Walker A, Maher J, Coulthard M, et al. Living in Britain: results from the 2000 General Household Survey. London: The Stationery Office, 2001.
16. Dillman DA. Mail and internet surveys: the tailored design method, 2nd edn. New York: John Wiley and Sons Inc, 2000.
17. Sykes W, Collins M. Effect of mode of interview: experiments in the UK. In: Groves RM, Biemer PP, Lyberg LE, et al., eds. Telephone survey methodology. New York: John Wiley and Sons, 1988.
18. Gasquet I, Falissard B, Ravaud P. Impact of reminders and method of questionnaire distribution on patient response to mail-back satisfaction survey. J Clin Epidemiol 2001;54:1174–1180.
19. Harris LE, Weinberger M, Tierney WM. Assessing inner-city patients' hospital experiences: a controlled trial of telephone interviews versus mailed surveys. Med Care 1997;35:70–76.
20. Brazier JE, Harper R, Jones N, et al. Validating the SF-36 health survey questionnaire: new outcome measure for primary care. Br Med J 1992;305:160–164.
21. Nicholaas G, Thomson K, Lynn P. The feasibility of conducting electoral surveys in the UK by telephone. Centre for Research into Elections and Social Trends. London: National Centre for Social Research and Department of Sociology, University of Oxford, 2000.
22. Frankfort-Nachmias C, Nachmias D. Research methods in the social sciences, 4th edn. London: Edward Arnold, 1992.
23. Glaser AW, Davies K, Walker D, Brazier D. Influence of proxy respondents and mode of administration on health status assessment following central nervous system tumors in childhood. Qual Life Res 1997;6:43–53.
24. Sudman S, Bradburn NM. Asking questions. San Francisco: Jossey Bass, 1983.
25. Schwarz N, Hippler HJ. Subsequent questions may influence answers to preceding questions in mail surveys. Pub Opin Quart 1995;59:93–97.
26. Singer E, Hippler E, Schwarz N. Confidentiality assurances in surveys: reassurance or threat? Int J Pub Opin Quart 1992;4:256–268.

27. Warner SL. Randomised response: a survey technique for eliminating evasive answer bias. J Am Stat Assoc 1965;60:63–69.
28. Tourangeau R, Smith TW. Asking sensitive questions: the impact of data collection mode, question format, and question context. Pub Opin Quart 1996;60:275–304.
29. Presser S, Stinson L. Data collection mode and social desirability bias in self-reported religious attendance. Am Sociol Rev 1998;63:137–145.
30. Lyons RA, Wareham K, Lucas M, et al. SF-36 scores vary by method of administration: implications for study design. J Pub Health Med 1999;21:41–45.
31. Vuillemin A, Oppert JM, Guillemin F, et al. Self-administered questionnaire compared with interview to assess past-year physical activity. Med Sci Sports Exercise 2000;32:1119–1124.
32. Tomlin ME, Pinney S, Buncher CR, et al. The effect of the mode of questionnaire administration on workers' responses to cigarette smoking questions. Am J Epidemiol 1998;147(Suppl):338.
33. Brambilla DJ, McKinlay SM. A comparison of responses to mailed questionnaires and telephone interviews in a mixed mode health survey. Am J Epidemiol 1987;126:962–971.
34. McHorney CA, Kosinski M, Ware JE. Comparisons of the costs and quality of norms for the SF-36 Health Survey collected by mail versus telephone interview: results from a national survey. Med Care 1994;32:551–567.
35. Perkins JJ, Sanson-Fisher RW. An examination of self- and telephone-administered modes of administration for the Australian SF-36. J Clin Epidemiol 1998;51:969–973.
36. Weinberger M, Oddone EZ, Samsa GP, Landsman PB. Are health-related quality of life measures affected by mode of administration? J Clin Epidemiol 1996;49:135–140.
37. Garcia-Losa M, Unda M, Badia X, et al. Effect of mode of administration on I-PSS scores in a large BPH population. Eur Urol 2001;40:451–457.
38. Rhodes T, Girman CJ, Jacobsen SJ, et al. Does the mode of questionnaire administration affect the reporting of urinary symptoms? Urology 1995;46:341–345.
39. Klepac RK, Dowling J, Rokke P, et al. Interview vs. paper-and-pencil administration of the McGill Pain Questionnaire. Pain 1981;11:241–246.
40. Fowler FJ, Gallagher PM, Nederend S. Comparing telephone and mail responses to the CAHPS survey instrument. Consumer Assessment of Health Plans Study. Med Care 1999;37:MS41–49.
41. Cam K, Akman Y, Cicekci B, et al. Mode of administration of international prostate symptom score in patients with lower urinary tract symptoms: physician vs self-administration. Prostate Cancer Prostatic Dis 2004;7:41–44.
42. Bozlu M, Doruk E, Akbay E, et al. Effect of administration mode (patient vs physician) and patient educational level on the Turkish version of the International Prostate Symptom Score. Int J Urol 2002;9:417–421.

43. Durant LE, Carey MP. Self-administered questionnaires versus face-to-face interviews in assessing sexual behavior in young women. Arch Sex Behav 2000;29:309–322.
44. Kaplan CP, Hilton JF, Park-Tanjasiri S, Perez-Stable EJ. The effect of data collection mode on smoking attitudes and behavior in young African American and Latina women: face-to-face interview versus self-administered questionnaires. Eval Rev 2001;25:454–473.
45. Wu AW, Jacobson DL, Berzon RA, et al. The effect of mode of administration on Medical Outcomes Study health ratings and EuroQol scores in AIDS. Qual Life Res 1997;6:3–10.
46. Bellamy N, Campbell J, Hill J, Band P. A comparative study of telephone versus onsite completion of the WOMAC 3.0 osteoarthritis index. J Rheumatol 2002;29:783–786.
47. Chambers LW, Haight M, Norman G, MacDonald L. Sensitivity to change and the effect of mode of administration on health status measurement. Med Care 1987;25:470–480.
48. Williams ML, Freeman RC, Bowen AM, et al. A comparison of the reliability of self-reported drug use and sexual behaviors using computer-assisted versus face-to-face interviews. AIDS Ed Prevent 2000;12:199–213.
49. Nebot M, Celentano DD, Burwell L, et al. AIDS and behavioural risk factors in women in inner city Baltimore: a comparison of telephone and face to face surveys. J Epidemiol Commun Hlth 1994;48:412–418.
50. Midanik LT, Greenfield TK. Telephone versus in-person interviews for alcohol use: results from the 2000 National Alcohol Survey. Drug Alcohol Depend 2003;72:209–214.
51. Evans M, Kessler D, Lewis G, Peters TJ, Sharp D. Assessing mental health in primary care using standardized scales: can it be carried out over the telephone? Psychol Med 2004;34:157–162.
52. Fouladi RT, McCarthy CJ, Moller NP. Paper-and-pencil or online? Evaluating mode effects on measures of emotional functioning and attachment. Assessment 2002;9:204–215.
53. Harewood GC, Yacavone RF, Locke GR, Wiersema MJ. Prospective comparison of endoscopy patient satisfaction surveys: e-mail versus standard mail versus telephone. Am J Gastroenterol 2001;96:3312–3317.
54. Bower P, Roland MO. Bias in patient assessments of general practice: general practice assessment survey scores in surgery and postal responders. Br J Gen Pract 2003;53:126–128.
55. de Leeuw ED. Data quality in mail, telephone and face-to-face surveys. Amsterdam: TT Publications, 1992.
56. Knäuper B, Schwarz N. Why your research may be out of order. The Psychologist 2004;17:28–31.
57. Krosnick JA, Alwin D. An evaluation of a cognitive theory of response-order effects in survey measurement. Pub Opin Quart 1987;51:201–219.
58. Schwarz N, Knäuper B, Hippler HJ, et al. Rating scales: numeric values may change the meaning of scale labels. Pub Opin Quart 1991;55:618–630.

59. Sudman S, Bradburn NM, Schwarz N. Thinking about answers: the application of cognitive processes to survey methodology. San Francisco: Jossey-Bass, 1996.
60. Dooley D. Social research methods. Englewood Cliffs, New Jersey: Prentice Hall, 1995.
61. Knäuper B. The impact of age and education on response order effects in attitude measurement. Pub Opin Quart 1999;63:347–370.
62. Park DC. The basic mechanisms accounting for age-related decline in cognitive function. In: Park DC, Schwarz N, eds. Cognitive aging: a primer. Philadelphia: Psychology Press, 2000.
63. Schwarz N, Park D, Knäuper B, Sudman S, eds. Cognition, aging, and self-reports. Philadelphia: Psychology Press, 1999.
64. Dorant E, van den Brandt PA, Goldbohm RA, Sturmans F. Agreement between interview data and a self-administered questionnaire on dietary supplement use. Eur J Clin Nutr 1994;48:180–188.
65. Brambilla DJ, McKinlay SM. A comparison of responses to mailed questionnaires and telephone interviews in a mixed mode health survey. Am J Epidemiol 1987;126:962–971.
66. Pruchno RA, Hayden JM. Interview modality: effects on costs and data quality in a sample of older women. J Ageing Hlth 2000;12:3–24.
67. Parker C, Dewey M. Assessing research outcomes by postal questionnaire with telephone follow-up. TOTAL Study Group. Trial of Occupational Therapy and Leisure. Int J Epidemiol 2000;29:1065–1069.
68. Siemiatycki J. A comparison of mail, telephone and home interview strategies for household health surveys. Am J Pub Hlth 1979;69:238–245.
69. Bradburn NM. Response effects. In: Rossi P, Wright J, Anderson A, eds. Handbook of survey research. New York: Academic Press, 1983.
70. Weeks MF. Computer-assisted survey information collection: a review of CASIC methods and their implications for survey operations. J Off Stat 1992;8:445–465.
71. Turner CF, Ku L, Rogers SM, et al. Adolescent sexual behavior, drug use, and violence: increased reporting with computer survey technology. Science 1998;280:867–873.
72. Groves RM, Kahn R. Surveys by telephone: a national comparison with personal interviews. New York: Academic Press, 1979.
73. Johnson T, Houghland J, Clayton R. Obtaining reports of sensitive behaviors: a comparison of substance use reports from telephone and face-to-face interviews. Soc Sci Quart 1989;70:174–183.
74. Smith TW. A comparison of telephone and personal interviewing. GSS Methodological Report no. 28. Chicago: National Opinion Research Center, 1984.
75. Ryan JM, Corry JR, Attewell R, Smithson MJ. A comparison of the electronic version of the SF-36 General Health Questionnaire to the standard paper version. Qual Life Res 2002;11:19–26.
76. Bowling A, Bannister D, Sutton S, et al. A multidimensional model of the quality of life in older age. Aging Ment Health 2002;6:355–371.
77. Olsen J, on behalf of the IEA European Questionnaire Group. Epidemiology deserves better questionnaires. Int J Epidemiol 1998;27:935.