4.4 Constructing Questions
Two
basic considerations apply to the construction of good survey
questions: (1) The questions must clearly and unambiguously
convey the desired information to the respondent, and (2) the
questions should be worded to allow accurate transmission of
respondents' answers to researchers.
Questionnaire design depends on choice of data collection
technique.
Questions written for a mail survey must be easy to read and
understand, since respondents are unable to obtain explanations.
Telephone surveys cannot use questions with long lists of
response options; the respondent may forget the first few
responses by the time the last ones have been read. Questions
written for group administration must be concise and easy for
the respondents to answer. In a personal interview the
interviewer must tread lightly with sensitive and personal
questions, which his or her physical presence might make the
respondent less willing to answer. (These procedures are
discussed in greater detail later in this chapter.)
The design of a questionnaire must always reflect the basic
purpose of the research.
A complex research topic such as media use during a political
campaign requires more detailed questions than does a survey to
determine a favorite radio station or magazine. Nonetheless,
there are several general guidelines to follow regarding wording
of questions and question order and length.
4.4.1 Types of Questions
Surveys
can consist of two basic types of questions, open-ended and
closed-ended. An open-ended question requires respondents to
generate their own answers.
For example:
What do you like most about your local newspaper?
What type of television program do you prefer? What are the
three most important problems in your community?
Open-ended questions allow respondents freedom in answering
questions and the chance to provide in-depth responses.
Furthermore, they give researchers the opportunity to ask: "Why
did you give that particular answer?" or "Could you explain your
answer in more detail?" This flexibility to follow up on, or
probe, certain questions enables the interviewers to gather
information about the respondents' feelings and the motives
behind their answers.
Also, open-ended questions allow for answers that researchers
did not foresee in the construction of the questionnaire—answers
that may suggest possible relationships with other answers or
variables.
For example, in response to the question, "What types of
programs would you like to hear on radio?" the manager of a
local radio station might expect to hear "news" and "weather" or
"sports." However, a subject may give an unexpected response,
such as "obituaries" (Fletcher & Wimmer, 1981). Such a response
would force the manager to reconsider his or her perceptions of
some of the local radio listeners.
Finally, open-ended questions are particularly useful in a pilot
version of a study.
Researchers may not know what types of responses to expect from
subjects, so open-ended questions are used to allow subjects to
answer in any way they wish. From the list of responses provided
by the subjects, the researcher then selects the most-often
mentioned items and includes them in multiple-choice or
forced-choice questions. Using open-ended questions in a pilot
study generally saves time and resources, since all possible
responses are more likely to be included on the final
measurement instrument; there should be no need to repeat the
study because an adequate number of responses or response items
was not included.
The major disadvantage associated with open-ended questions is
the amount of time needed to collect and analyze the responses.
Open-ended responses require interviewers to spend a great deal
of time writing down or typing answers. In addition, because there
are so many types of responses, a content analysis (Chapter 8)
of each open-ended question must be completed to produce data
that can be tabulated.
A
content analysis groups common responses into categories,
essentially making the question closed-ended. The content
analysis results are then used to produce a codebook to code the
open-ended responses. A codebook is essentially a menu or list
of quantified responses. For example, "I hate television" may be
coded as a 5 for input into the computer.
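To make the codebook idea concrete, here is a minimal sketch in Python; the categories and numeric codes are invented for illustration, not taken from any actual study:

```python
# A hypothetical codebook: each category of open-ended response
# is assigned a numeric code for tabulation. Categories and codes
# here are invented for illustration.
codebook = {
    "likes local coverage": 1,
    "dislikes commercials": 2,
    "watches rarely": 3,
    "no opinion": 4,
    "hates television": 5,
}

def code_response(category):
    """Return the numeric code for a response category, or 0 for 'other'."""
    return codebook.get(category, 0)

print(code_response("hates television"))  # "I hate television" -> 5
print(code_response("loves cartoons"))    # uncategorized -> coded as "other" (0)
```

Responses that fall outside the codebook are assigned an "other" code and can later be content-analyzed in the same way.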
In the case of closed-ended questions, respondents select an
answer from a list provided by the researcher. These questions
are popular because they provide greater uniformity of response,
and because the answers are easily quantified. The major
disadvantage is that researchers often fail to include some
important responses.
Respondents may have an answer different from those that are
supplied. One way to solve the problem
is to include an "other" response followed by a blank space, to
give respondents an opportunity to supply their own answer. The
"other" responses are then handled just like an open-ended
question—a content analysis of the responses is completed to
develop a codebook. A pilot study or pretest of a questionnaire
often solves most problems with closed-ended questions.
4.4.2 Problems in Interpreting Open-Ended
Questions
Open-ended
questions often cause a great deal of frustration.
In many cases, respondents' answers are bizarre.
Sometimes respondents don't understand a
question and provide answers that are not relevant. Sometimes
interviewers have difficulty understanding respondents, or they
may have trouble transcribing what the respondents say. In
these cases, researchers must interpret the answer and determine
which code is appropriate.
The
following examples are actual verbatim comments from telephone
surveys conducted by Paragon Research in Denver, Colorado. They
show that even the most well-planned survey questionnaire can
produce a wide range of responses. The survey question asked:
"How do you describe the programming on your favorite radio
station?" Some responses were:
- The station is OK, but it's geared to Jerry Atrics.
- I only listen to the station because my poodle likes it.
- The music is good, but sometimes it's too Tiny Booper.
- It's great. It has the best floor mat in the city.
- The station is good, but sometimes it makes me want to vomit.
- It's my favorite, but I really don't like it since my mother does.
- My parrot is just learning to talk, and the station teaches him a lot of words.
- My kids hate it, so I turn it up real loud.
- It sounds great with my car trunk open.
- My boyfriend forces me to listen.
4.4.3 General Guidelines
Before
examining whether specific question types are appropriate for
survey research, some general do's and don'ts about writing
questions are in order.
1. Make questions clear:
This should go without saying, but many researchers
become so closely associated with a problem that they can no
longer put themselves in the respondents' position. What might
be perfectly clear to researchers might not be nearly as clear
to persons answering the question. For example, "What do you
think of our company's rebate program?" might seem to be a
perfectly sensible question to a researcher, but to respondents
it might mean, "Is the monetary amount of the rebate too small?"
"Is the rebate given on the wrong items?" "Does it take too long
for the rebate to be paid?" or "Have the details of the program
been poorly explained?" Questionnaire items must be phrased
precisely so that respondents know what is being asked.
Making questions clear also requires avoiding difficult or
specialized words, acronyms, and stilted language.
In general, the level of vocabulary commonly found in newspapers
or popular magazines is adequate for a survey. Questions should
be phrased in everyday speech, and social science jargon and
technical words should be eliminated.
The clarity of a questionnaire item can be affected by double or
hidden meanings in the words that are not apparent to
investigators.
For example, the question, "How many television shows do you
think are a little too violent—most, some, few, or none?"
contains such a problem. Some respondents who feel that all TV
shows are extremely violent will answer "none" on the basis of
the question's wording. These subjects reason that all shows are
more than "a little too violent"; therefore, the most
appropriate answer to the question is "none." Deleting the
phrase "a little" from the question helps avoid this pitfall. In
addition, the question inadvertently establishes the idea that
at least some shows are violent. The question should
read, "How many television shows, if any, do you think
are too violent—most, some, few, or none?" Questions should be
written so they are fair to all types of respondents.
2. Keep questions short:
To be precise and unambiguous, researchers sometimes write
long and complicated items. However, respondents who are in
a hurry to complete a questionnaire are unlikely to take the
time to study the precise intent of the person who drafted the
items. Short, concise items that will not be misunderstood are
best.
3. Remember the purposes of the research:
It is important to include in a questionnaire only items that
directly relate to what is being studied.
For example, if the occupational level of the respondents is not
relevant to the hypothesis, the questionnaire should not ask
about it. Beginning researchers often add questions merely for
the sake of developing a longer questionnaire. Keep in mind that
parsimony in questionnaires is a paramount consideration.
4. Do not ask double-barreled questions:
A double-barreled question is one that actually asks two or
more questions. Whenever the word and appears in a
question, the sentence structure should be examined to see
whether more than one question is being asked. For example,
"This product is mild on hands and gets out stubborn stains. Do
you agree or disagree?" Since a product that gets out stubborn
stains might at the same time be highly irritating to the skin,
a respondent could agree with the second part of the question
while disagreeing with the first part. This question should be
divided into two items.
5. Avoid biased words or terms:
Consider the following item: "In your free time, would you
rather read a book or just watch television?" The word just in
this example injects a pro-book bias into the question because
it implies that there is something less than desirable about
watching television. In like manner, "Where did you hear the
news about the president's new program?" is mildly biased
against newspapers; the word hear suggests that "radio,"
"television," or "other people" is a more appropriate answer.
Questionnaire items that start off with "Do you agree or
disagree with so-and-so's proposal to . . ." almost always bias
a question. If the name "Adolph Hitler" is inserted for
"so-and-so," the item becomes overwhelmingly negative. By
inserting "the President," a potential for both positive and
negative bias is created. Any time a
specific person or source is mentioned in a question, the
possibility of introducing bias arises.
6. Avoid leading questions:
A leading question is one that suggests a certain response
(either literally or by implication) or contains a hidden
premise. For example, "Like most Americans, do you read a
newspaper every day?" suggests that the respondent should answer
in the affirmative or run the risk of being unlike most
Americans. The question "Do you still use marijuana?" contains a
hidden premise. This type of question is usually referred to
as a double bind: regardless of how the respondent
answers, an affirmative response to the hidden premise is
implied — in this case, he or she has used marijuana at some
point.
7. Do not use questions that ask for highly
detailed information:
The question "In the past 30 days, how many hours of television
have you viewed with your family?" is unrealistic. Few
respondents could answer such a question. A more realistic
approach would be to ask, "How many hours did you spend watching
television with your family yesterday?" A researcher interested
in a 30-day period should ask respondents to keep a log or diary
of family viewing habits.
8. Avoid potentially embarrassing questions
unless absolutely necessary:
Most surveys need to collect data of a confidential or
personal nature, but an overly personal question may cause
embarrassment and inhibit respondents from answering honestly.
Two common areas with high potential for embarrassment are age
and income. Many individuals are reluctant to tell their
exact ages to strangers doing a survey. Instead of asking
directly how old a respondent is, it is better to allow some
degree of confidentiality by asking, "Now, about your age — are
you in your 20s, 30s, 40s, 50s, 60s, . . . ?" Most respondents
are willing to state what decade they fall in, and this
information is usually adequate for statistical purposes.
Interviewers might also say, "I'm going to read several age
categories to you. Please stop me when I reach the category
you're in."
Income may be handled in a similar manner. A straightforward,
"What is your annual income?" often prompts the reply, "None of
your business." It is more prudent to preface a reading of the
following list with the question, "Which of these categories
includes your total annual income?"
- More than $30,000
- $15,000-$29,999
- $8,000-$14,999
- $4,000-$7,999
- $2,000-$3,999
- Under $2,000
These categories are broad enough to allow respondents some
privacy but narrow enough for statistical analysis.
Moreover, the bottom category, "Under $2,000," was made
artificially low so that individuals who fall into the
$2,000-$3,999 slot would not have to be embarrassed by giving
the very lowest choice. The income classifications depend on the
purpose of the questionnaire and the geographic and demographic
distribution of the subjects. The $30,000 upper level in the
example would be much too low in several parts of the country.
Other potentially sensitive areas include people's sex lives,
drug use, religion, business practices, and trustworthiness.
In all these areas, care should be taken to assure respondents
of confidentiality and even anonymity, when possible.
The simplest type of closed-ended question is one that provides
a dichotomous response, usually "agree/disagree" or "yes/no."
For example:
Television stations should editorialize.
- Agree
- Disagree
- No opinion
While such questions provide little sensitivity to different
degrees of conviction, they are the easiest to tabulate of all
question forms. Whether they provide enough sensitivity is a
question the researcher must seriously consider.
The multiple-choice question allows respondents to choose an
answer from several options.
For example:
In general, television commercials tell the truth . . .
- All of the time
- Most of the time
- Some of the time
- Rarely
- Never
Multiple-choice questions should include all possible responses.
A question that excludes any significant response usually
creates problems. For example:
What is your favorite television network?
- Channel 1
- Channel 2
- Channel 3
Subjects who favor Channel 4 or 5 (although not networks
in the strictest sense of the word) cannot answer the question
as presented.
Additionally, multiple-choice responses must be mutually
exclusive: there should be only one response option per
question for each respondent. For instance:
How many years have you been working in newspapers?
- Less than one year
- One to five years
- Five to ten years
Which blank should a person with exactly five years of
experience check? One way to correct this problem is to reword
the responses, such as:
How many years have you been working in newspapers?
- Less than one year
- One to five years
- Six to ten years
Rating scales are also widely used in social research. They can
be arranged horizontally or vertically:
There are too many commercials on TV.
- Strongly agree (translated as a 5 for analysis)
- Agree (translated as a 4)
- Neutral (translated as a 3)
- Disagree (translated as a 2)
- Strongly disagree (translated as a 1)
What is your opinion of TV news?
Fair __ __ __ __ __ Unfair
(5) (4) (3) (2) (1)
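The translation of scale labels into numbers for analysis can be sketched in a few lines of Python; the responses below are invented for illustration:

```python
# Map Likert labels to the numeric values described above
# (5 = strongly agree, 1 = strongly disagree).
scale = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

# Hypothetical responses to "There are too many commercials on TV."
responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]

# Convert each label to its numeric value and compute the mean score.
values = [scale[r] for r in responses]
mean_score = sum(values) / len(values)
print(mean_score)  # (4 + 5 + 3 + 4 + 2) / 5 = 3.6
```

Once labels are converted to numbers in this way, standard statistical summaries (means, frequencies, cross-tabulations) become straightforward.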
Semantic differential scales are another form of rating scale
and are frequently used to rate persons, concepts, or objects.
These scales use bipolar adjectives with seven scale points:
How do you perceive the term public television?
Good ---- ---- ---- ---- ---- ---- ---- Bad
Happy ---- ---- ---- ---- ---- ---- ---- Sad
Uninteresting ---- ---- ---- ---- ---- ---- ---- Interesting
Dull ---- ---- ---- ---- ---- ---- ---- Exciting
In many instances researchers are interested in the relative
perception of several concepts or items. In such cases the rank
ordering technique is appropriate. For example:
Here are several common occupations. Please rank them in terms
of their prestige. Put a 1 next to the profession that has the
most prestige, a 2 next to the one with the second most, and so on.
- Police officer
- Banker
- Lawyer
- Politician
- TV reporter
- Teacher
- Dentist
- Newspaper writer
Ranking of more than a dozen objects is not recommended because
the process can become tedious and the discriminations
exceedingly fine. Furthermore, ranking data imposes limitations
on the statistical analysis that can be performed.
The checklist question is often used in pilot studies to refine
questions for the final project.
For example:
What things do you look for in a new television set? (Check as
many as apply.)
- Automatic fine tuning
- Remote control
- Large screen
- Cable ready
- Console model
- Portable
- Stereo sound
- Other _________
The
most frequently checked answers may be used to develop a
multiple-choice question; the unchecked responses are dropped.
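Tallying which checklist items were checked most often is a simple counting task; the sketch below uses invented pilot-study data to show the idea:

```python
from collections import Counter

# Hypothetical checklists from a pilot study: each respondent
# checks the features he or she looks for in a new television set.
checklists = [
    ["Remote control", "Large screen"],
    ["Remote control", "Cable ready", "Stereo sound"],
    ["Large screen", "Remote control"],
]

# Count how often each item was checked across all respondents.
counts = Counter(item for checked in checklists for item in checked)

# Keep the most frequently checked answers for the final
# multiple-choice question; the rest are dropped.
top = [item for item, n in counts.most_common(2)]
print(top)  # ['Remote control', 'Large screen']
```

In practice the cutoff (here, the top two items) depends on how many response options the final question can reasonably carry.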
Forced-choice questions are frequently used in media studies
designed to gather information about lifestyles and are always
listed in pairs. Forced-choice questionnaires are usually very
long — sometimes dozens of questions — and repeat questions (in
different form) on the same topic. The answers for each topic
are analyzed for patterns, and a respondent's interest in that
topic is scored.
A
typical forced-choice questionnaire might contain the following
pairs:
Select one statement from each of the following pairs of
statements:
- If I see an injured animal, I always try to help it.
- If I see an injured animal, I figure that nature will take care of it.
Respondents generally complain that neither of the responses to
a forced-choice question is satisfactory, but they have to
select one or the other. Through a series of questions on the
same topic (violence, lifestyles, career goals), a pattern of
behavior or attitude generally develops.
Fill-in-the-blank questions are used infrequently by survey
researchers.
However, some studies are particularly suited for
fill-in-the-blank questions. In advertising copy testing, for
example, they are often employed to test subjects' recall of a
commercial. After seeing, hearing, or reading a commercial,
subjects receive a script of the commercial in which a number of
words have been systematically omitted (often every fifth or
seventh word). Subjects are required to fill in the missing words to
complete the commercial. Fill-in-the-blank questions can also
be used in information tests. For example, "The senators
from your state are _____ and _____." Or, "The headline story on
the front page was about _____."
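The procedure of blanking out every nth word can be sketched as a small function; the sample script is invented for illustration:

```python
def cloze(text, n=5):
    """Replace every nth word of a script with a blank,
    as in a cloze recall test."""
    words = text.split()
    # Blank out positions n, 2n, 3n, ... (0-based indices n-1, 2n-1, ...).
    for i in range(n - 1, len(words), n):
        words[i] = "_____"
    return " ".join(words)

# A hypothetical commercial script with every fifth word omitted.
script = ("Brand X gets clothes cleaner than any other leading "
          "detergent and costs less too")
print(cloze(script))
# Brand X gets clothes _____ than any other leading _____ and costs less too
```

Subjects' recall is then scored by how many of the blanked words they restore correctly.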
Tables, graphs, and figures are also used in survey research.
Some ingenious questioning devices have been developed to help
respondents more accurately describe how they think and feel.
The next page shows a simple picture scale for use with young
children, Figure 4.1.
Figure 4.1: A simple picture scale for use with young children
Some questionnaires designed for children use other methods to
collect information.
Since young children have difficulty in assigning numbers to
values, one logical alternative is to use pictures. For example,
the interviewer might read the question, "How do you feel about
Saturday morning cartoons on television?" and present the faces
to elicit a response from a 5-year-old. Zillmann and Bryant
(1975) present a similar approach in their "Yucky" scale. |