Surveys in Sociology Research

Ethical and practical considerations in applied research with human beings often mean that researchers are unable to experimentally manipulate independent variables to determine their effects. In such situations, survey research methodology allows researchers to gather and analyze data about phenomena of interest in order to help them better understand and explain the world around them. In survey research, participants are asked questions concerning their opinions, attitudes, or reactions through a structured data collection instrument for purposes of scientific analysis. These results are used to extrapolate the findings from the sample to the underlying population. Although there are a number of advantages to using survey research for data collection from human beings, there are also many disadvantages. Typically, survey research should be used only in those situations where data cannot be collected in other ways.

Applied research with human beings, such as the kind done in many sociology studies, precludes the manipulation of variables or random assignment to experimental groups for ethical reasons. For example, if one wanted to know the comparative effects of short-term and long-term unemployment on people, it would be unethical to randomly assign people to groups, fire one group from their jobs, and prevent them from acquiring employment for a given period of time. Not only would such a study completely or partially stop the income of the persons in the experimental group, put their health and safety at risk due to a potential inability to purchase food and shelter, and adversely impact their families, it would also cause varying levels of psychological stress that could negatively affect them for the foreseeable future, even once they were employed again. For this reason, survey research is often used to collect data from individuals already in whatever situation is of interest to the researcher. Survey research does not require the artificial external manipulation of variables (i.e., the experimenter has no control over who loses their job or how long they stay unemployed) but instead collects data from individuals who are already in the population of interest due to other factors (i.e., who have already lost their jobs outside the scope of the research).

In general, a survey is a data collection instrument used to acquire information on the opinions, attitudes, or reactions of people. Examples of survey research are all around us, and at some point in our lives most of us will participate in a study that uses surveys. The market research done at the mall, where people are asked to participate in a blind taste test of two kinds of cola and answer a questionnaire specifying which they preferred and why, is a simple type of survey research. The survey run by the United States Census Bureau every 10 years to collect data about the homes and lifestyles of people across the country represents a more sophisticated type of survey research. Even the questionnaire "Is Your Spouse a Louse?" in a popular women's magazine is a type of survey (although here the instrument is not part of a formal survey research methodology, and the analysis and extrapolation are left to the person responding to it).

Survey research is a type of research in which data about the opinions, attitudes, or reactions of the members of a sample are gathered using a survey instrument. As opposed to experimental research, survey research does not allow for the manipulation of an independent variable. In survey research, members of a selected sample are asked questions concerning their opinions, attitudes, or reactions, and their responses are gathered using a survey instrument or questionnaire for purposes of scientific analysis; typically, the results of this analysis are used to extrapolate the findings from the sample to the underlying population.

As shown in Figure 1, when used in scientific research, survey research follows the same general paradigm as any hypothesis testing or theory building process. Using the example above concerning the effects of unemployment on individuals and families, the researchers need to first determine what the goals of their study are. There are, for example, a wide range of consequences that someone may experience as a result of a period of unemployment and the concomitant lack of income, including inability to pay bills, loss of retirement savings and investments, change in diet (due to inability to buy the same types of food), loss of the esteem of others, feelings of inadequacy due to the inability to provide for one's family, and embarrassment from having to borrow money or sell possessions. Some of these results compound each other. The loss of self-esteem resulting from loss of employment, for example, may mean that the person has less self-confidence and does not present him/herself well in an interview, resulting in greater difficulty finding a job. The researchers need to determine which of these or other possible consequences of job loss they wish to investigate.

Figure 1. The general survey research process.

For example, the researchers may decide that they want to investigate both the physical and psychological consequences of unemployment. These terms, however, are rather nebulous and open-ended. The next step in the survey research methodology is to plan how the desired information will be collected. Specifically, in order to develop a good data collection instrument, the researchers need to operationally define what they mean by the "physical and psychological effects" of unemployment. They may start with their own knowledge and observations and add to these insights by interviewing people who have been unemployed to see what other effects might result from long-term unemployment. Based on this information, they would develop questions that would elicit the desired information from the survey participants. This means that the various factors in which they are interested need to be operationally defined and turned into unambiguous questions or items for inclusion on the survey. For example, survey items might include questions such as:

  • "How often do you feel 'blue'?"
  • "Do you have difficulty sleeping at night?"
  • "Do you have difficulty getting up in the morning?"
  • "Have you experienced any noticeable changes in your appetite since you lost your job?"

Using the principles of good psychometric question design, the researchers would then develop a survey instrument to be given to the sample that they have selected.

In addition to designing and developing the survey and selecting a sample, one must also determine how the survey will be delivered. Surveys can either be administered in written form through hard copy questionnaires that are mailed or given to prospective participants or administered by a trained interviewer in person or over the phone. Current survey methodology may also take advantage of newer technologies, administering the questionnaire via cell phone, e-mail, or the Internet, or using recorded or synthesized speech to ask the questions, with respondents entering their answers on a telephone keypad or speaking them aloud for interpretation by voice recognition technology. Once survey data are collected, they are statistically analyzed to evaluate the responses and how they bear on the researchers' theory. These results are then used to refine or expand the theory as necessary and as input for further research on the topic.

There are a number of advantages to using survey research to collect data. First, survey research methodology offers researchers a good way to collect data from a large number of participants. A survey can typically be administered to hundreds of participants comparatively easily and inexpensively, as opposed to research in which variables are experimentally manipulated. Further, survey research can speed the collection of data; rather than waiting for time to pass between the administration of the independent variable and the measurement of the dependent variable, as required by many experimental designs, survey research tends to ask about things that have happened in the past or that are currently occurring in the respondents' lives. In survey research there is no long waiting period to get the results. In addition, in some situations survey research is the only method available for data collection: for various topics (e.g., unemployment, death of a spouse), ethical or practical considerations make it impossible to manipulate variables. Survey research, however, can be used to gather information from individuals already experiencing the independent variable, so that data otherwise unavailable can be gathered and analyzed. Finally, survey research is relatively cheap and easy, particularly when delivered through newer technologies. Costs associated with travel or the experimental manipulation of variables are avoided.

However, survey research also has a number of drawbacks. First, the researcher has no control over the experimental condition in survey research. Neither the value of the independent variable nor extraneous or intervening variables can be controlled; the most the researcher can do is eliminate respondents from the final subject pool. In addition, survey research yields qualitative rather than quantitative data. Such data have no true zero and are not based on a meaningful interval scale (e.g., what one respondent means by "dissatisfied" and what another respondent means by the same word can be two completely different things). Although nonparametric statistical methods can be used to analyze such data, these are more limited in scope than their parametric counterparts. Further, it is easy for rating errors and other types of bias to taint the data. Poorly worded or ambiguous questions, response biases (e.g., never rating something as "excellent" because the person believes that everything can be improved), or differences in the way that multiple interviewers ask questions can all lead to inconsistent data. Although this latter problem can be overcome to some extent by training interviewers and using structured (rather than open-ended) survey instruments, human error will always affect the interview process. In addition, in most cases those responding to the survey instrument will care less about the results than those asking the questions. Although longer surveys can ask series of questions to make sure that the data of interest are gathered, respondents tend to lose interest in such situations and not respond as carefully and honestly as the designers would like. Finally, there are very few situations in which survey participation can be forced. As a result, most surveys have a low response rate. This often means that the resultant sample, no matter how carefully it was designed by the researchers, is self-selected and prone to bias.
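The point about the lack of a meaningful interval scale can be illustrated with a minimal Python sketch using hypothetical 5-point ratings: because the spacing between ordinal labels is not known to be equal, frequency counts and the median are safer summaries than the mean.

```python
from collections import Counter
from statistics import median

# Hypothetical 5-point ratings (1 = very dissatisfied ... 5 = very satisfied)
ratings = [4, 2, 5, 3, 3, 4, 2, 3, 5, 1]

# Ordinal labels have no guaranteed interval scale, so frequency
# counts and the median are safer summaries than the mean.
counts = Counter(ratings)   # how many respondents chose each label
middle = median(ratings)    # the middle rating, a rank-based summary
```

Nonparametric tests (e.g., rank-based comparisons between groups) extend this same rank-based logic to hypothesis testing.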

Applications

Sample selection is also part of the planning process for the survey research methodology. Certainly, one could just hang around the local unemployment office and ask everyone who comes in to answer the survey. This approach, however, is itself a way of selecting a sample, and it is not necessarily the best way to get a representative sample of unemployed persons. Most people look for jobs online or in the newspaper rather than going to an unemployment office. As a result, by using only people who go to an unemployment office, the researchers would be restricting the kinds of people chosen for the study to those who are in the office. This sample may not adequately represent the characteristics of the underlying population and may unintentionally bias the results. For example, if people who have been out of work for an extended period of time have decided that the unemployment office is not helpful in giving them viable job leads, they will stop coming to the unemployment office and, as a result, will not be included in the survey. Although this may be an acceptable situation if the researchers are interested only in the reactions of the newly unemployed, it is not acceptable if the researchers are interested in the reactions of the long-term unemployed. One way to help alleviate this problem would be to collect demographic data on the survey, such as the length of time the person has been unemployed. With this information, the researchers could eliminate any respondents who did not meet the requirement of having been unemployed for a pre-specified length of time. Another way to select a sample would be to randomly select from a population of people who are likely to be unemployed, such as those who are at a shopping mall in the middle of the day or those on a list of people who signed up for unemployment benefits.
Random selection has the advantage that it will more than likely (based on the laws of probability) be representative of the underlying population. Such random selection helps alleviate the potential problem of asking people to participate in the survey who are not actually unemployed, although it does not completely eliminate it. People at the mall, for example, may work part-time, work from home, or simply have the day off; people on the unemployment office list may have already found a job, while others may not be on the list because they have been unemployed too long to qualify for benefits.
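The screening-plus-random-selection procedure described above can be sketched in a few lines of Python. The records, field names, and cutoff below are hypothetical; the point is simply that respondents who do not meet the unemployment criterion are filtered out before a simple random sample is drawn.

```python
import random

# Hypothetical respondent records; "weeks_unemployed" is the screening field
population = [
    {"id": 1, "weeks_unemployed": 2},
    {"id": 2, "weeks_unemployed": 15},
    {"id": 3, "weeks_unemployed": 0},   # not actually unemployed
    {"id": 4, "weeks_unemployed": 30},
    {"id": 5, "weeks_unemployed": 8},
]

def screen_and_sample(records, min_weeks, k, seed=42):
    """Keep only respondents unemployed at least `min_weeks`,
    then draw a simple random sample of up to k of them."""
    eligible = [r for r in records if r["weeks_unemployed"] >= min_weeks]
    rng = random.Random(seed)
    return rng.sample(eligible, min(k, len(eligible)))

sample = screen_and_sample(population, min_weeks=1, k=3)
```

In practice the screening question would be one of the demographic items on the survey itself, and ineligible responses would be dropped during analysis.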

Another way to select samples is through systematic sampling, where the researcher selects every nth person who walks through the door of the unemployment office or every nth name on the list of those who signed up for unemployment benefits. Sometimes, however, systematic sampling results in a sample that is self-selected on certain characteristics (e.g., there is a time limit for unemployment benefits, which means that people who have been unemployed for a long period of time will not be included in the sample). To help ensure that the correct proportions of different demographics are included in the sample, one could define a stratified random sample. In this approach, the characteristics of participants are determined a priori (e.g., people unemployed 1-6 weeks, 7-12 weeks, 13 or more weeks). Within each of these subgroups (i.e., strata), a sample is randomly chosen in proportion to that stratum's share of the underlying population. This approach helps ensure that the subgroups of interest are included in the sample, but it has the potential drawback of introducing bias in some instances.
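Both sampling schemes are mechanical enough to sketch directly. The following Python, using hypothetical strata labels and population shares, selects every nth name from an ordered list (systematic sampling) and draws from each stratum in proportion to its a priori share of the population (stratified random sampling).

```python
import random
from collections import defaultdict

def systematic_sample(names, n):
    """Select every nth entry from an ordered list (systematic sampling)."""
    return names[n - 1::n]

def stratified_sample(records, strata_shares, total, seed=0):
    """Randomly draw from each stratum in proportion to its
    (a priori known) share of the underlying population."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for r in records:
        by_stratum[r["stratum"]].append(r)
    sample = []
    for stratum, share in strata_shares.items():
        k = round(total * share)                      # seats for this stratum
        pool = by_stratum[stratum]
        sample.extend(rng.sample(pool, min(k, len(pool))))
    return sample

# Hypothetical roster stratified by length of unemployment
roster = ([{"stratum": "1-6 weeks"}] * 50
          + [{"stratum": "7-12 weeks"}] * 30
          + [{"stratum": "13+ weeks"}] * 20)
shares = {"1-6 weeks": 0.5, "7-12 weeks": 0.3, "13+ weeks": 0.2}
sample = stratified_sample(roster, shares, total=10)
```

Note that the systematic scheme inherits any ordering quirks of the source list, which is exactly the self-selection risk described above.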

When selecting a sample, it is important not to introduce bias into the sample so that the results truly represent the characteristics of the underlying population. In statistical terms, bias is the tendency for a given experimental design or implementation to unintentionally skew the results of the experiment. Selection bias occurs when the sample is selected in a way that is not representative of the underlying population. For example, as discussed above, recruiting participants only from people who are present in an unemployment office may unfairly eliminate people who have been unemployed for a longer period of time. However, even with the best of intentions and the most rigorous sampling methods on the part of the researchers, bias can still be introduced into a sample. In very few situations is it possible to actually compel people to participate in a survey. Bias can therefore be introduced into a sample through self-selection, which occurs when members of the sample refuse to participate in the survey. For example, when the survey is delivered by mail, participants are free to complete the survey or not (in the great majority of cases they do not). Similarly, relatively few people agree to participate in a survey over the phone. In many cases, those who self-select out of participating in the survey may have different characteristics than those who self-select to participate (e.g., desire for privacy, embarrassment at being unemployed). As a result, the self-selected sample chosen is often biased.
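A small simulation can make the self-selection effect concrete. The model below is entirely hypothetical: each person in a simulated population has a "distress" score, and the probability of agreeing to respond is assumed to fall as distress rises. The mean distress among those who respond then understates the population mean, which is exactly the bias described above.

```python
import random

rng = random.Random(1)

# Hypothetical population: each person gets a "distress" score
# drawn from a normal distribution (mean 5, sd 2).
population = [rng.gauss(5, 2) for _ in range(10_000)]

def responds(distress):
    # Assumed nonresponse model: response probability falls as
    # distress rises (e.g., embarrassment, desire for privacy).
    return rng.random() < max(0.05, 1 - distress / 10)

respondents = [d for d in population if responds(d)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)
# Under this model, resp_mean falls below pop_mean:
# the self-selected sample understates average distress.
```

Any real nonresponse mechanism is unknown, of course; the sketch only shows that when willingness to respond correlates with the quantity being measured, the self-selected estimate is systematically biased.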

Conclusion

Applied research with human beings frequently precludes the manipulation of variables or random assignment for ethical or practical reasons. However, survey research allows researchers to collect data about human behavior and opinions in most such situations. In survey research participants are asked questions concerning their opinions, attitudes, or reactions through a structured data collection instrument for purposes of scientific analysis. These results are used to extrapolate the findings from the sample to the underlying population. Although there are a number of advantages to using survey research for data collection from human beings, there are also many disadvantages. Survey research should be used only in those situations where data cannot be collected in other ways.

Terms & Concepts

Bias: The tendency for a given experimental design or implementation to unintentionally skew the results of the experiment due to a nonrandom selection of participants.

Data: (sing. datum) In statistics, data are quantifiable observations or measurements that are used as the basis of scientific research.

Demographic Data: Statistical information about a given subset of the human population such as persons living in a particular area, shopping at an area mall, or subscribing to a local newspaper. Demographic data might include such information as age, gender, or income distribution.

Ethics: In scientific research, a code of moral conduct regarding the treatment of research subjects that is subscribed to by the members of a professional community. Many professional groups have a specific written code of ethics that sets standards and principles for professional conduct and the treatment of research subjects.

Hypothesis: An empirically testable declaration that certain variables and their corresponding measures are related in a specific way proposed by a theory.

Independent Variable: The variable in an experiment or research study that is intentionally manipulated in order to determine its effect on the dependent variable (e.g., the independent variable of type of cereal might affect the dependent variable of the consumer's reaction to it).

Operational Definition: A definition that is stated in terms that can be observed and measured.

Population: The entire group of subjects belonging to a certain category (e.g., all women between the ages of 18 and 27; all dry cleaning businesses; all college students).

Probability: A branch of mathematics that deals with estimating the likelihood of an event occurring. Probability is expressed as a value between 0 and 1.0, which is the mathematical expression of the number of actual occurrences to the number of possible occurrences of the event. A probability of 0 signifies that there is no chance that the event will occur and 1.0 signifies that the event is certain to occur.

Sample: A subset of a population. A random sample is a sample that is chosen at random from the larger population with the assumption that such samples tend to reflect the characteristics of the larger population.

Skewed: A distribution that is not symmetrical around the mean (i.e., there are more data points on one side of the mean than there are on the other).

Survey: (a) A data collection instrument used to acquire information on the opinions, attitudes, or reactions of people; (b) a research study in which members of a selected sample are asked questions concerning their opinions, attitudes, or reactions, with the responses gathered using a survey instrument or questionnaire for purposes of scientific analysis; typically the results of this analysis are used to extrapolate the findings from the sample to the underlying population; (c) to conduct a survey on a sample.

Survey Research: A type of research in which data about the opinions, attitudes, or reactions of the members of a sample are gathered using a survey instrument. The phases of survey research are goal setting, planning, implementation, evaluation, and feedback. As opposed to experimental research, survey research does not allow for the manipulation of an independent variable.

Variable: An object in a research study that can have more than one value. Independent variables are stimuli that are manipulated in order to determine their effect on the dependent variables (response). Extraneous variables are variables that affect the response but that are not related to the question under investigation in the study.

Bibliography

Arsham, H. Questionnaire design and surveys sampling. University of Baltimore website. Retrieved September 11, 2007, from http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm

Axinn, W., Link, C., & Groves, R. (2011). Responsive survey design, demographic data collection, and models of demographic behavior. Demography, 48(3), 1127-1149. Retrieved November 4, 2013 from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=62909910

Lavrakas, P. J., Shuttles, C. D., Steeh, C., & Fienberg, H. (2007). The state of surveying cell phone numbers in the United States. Public Opinion Quarterly, 71 (5), 840-854. Retrieved March 31, 2008 from EBSCO online database Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=28452979&site=ehost-live

Porter, J., & Ecklund, E. (2012). Missing data in sociological research: An overview of recent trends and an illustration for controversial questions, active nonrespondents and targeted samples. American Sociologist, 43(4), 448-468. Retrieved November 4, 2013 from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=84368865

Porter, S. R. & Whitcomb, M. E. (2007, Win). Mixed-mode contacts in web surveys. Public Opinion Quarterly, 71 (4), 635-648. Retrieved March 31, 2008 from EBSCO online database Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=27772652&site=ehost-live

Smith, S. N., Fisher, S. D., & Heath, A. (2011). Opportunities and challenges in the expansion of cross-national survey research. International Journal of Social Research Methodology, 14(6), 485-502. Retrieved November 4, 2013 from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=66808571

Suggested Reading

Brick, J. M., Edwards, W. S., & Lee, S. (2007). Sampling telephone numbers and adults, interview length, and weighting in the California health interview survey cell phone pilot study. Public Opinion Quarterly, 71 (5), 793-813. Retrieved March 31, 2008 from EBSCO online database Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=28452977&site=ehost-live

Brown, G., Weber, D., Zanon, D., & de Bie, K. (2012). Evaluation of an online (opt-in) panel for public participation geographic information systems surveys. International Journal Of Public Opinion Research, 24(4), 534-545. Retrieved November 4, 2013 from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=83932619

Naidoo, K. (2008, Jan). Researching reproduction: Reflections on qualitative methodology in a transforming society. Forum: Qualitative Social Research, 9 (1), 1-16. Retrieved March 31, 2008 from EBSCO online database SocINDEX with Full Text: http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=29973417&site=ehost-live

Olson, K., Smyth, J. D., & Wood, H. M. (2012). Does giving people their preferred survey mode actually increase survey participation rates? An experimental examination. Public Opinion Quarterly, 76(4), 611-635. Retrieved November 4, 2013 from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=83746284

Saliba, D., et al. (2001, Dec). The vulnerable elders survey: A tool for identifying vulnerable older people in the community. Journal of the American Geriatrics Society, 49 (12), 1691-1699. Retrieved March 19, 2008 from EBSCO online database Academic Search Premier: http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=5929191&site=ehost-live

Essay by Ruth A. Wienclaw, PhD

Ruth A. Wienclaw holds a doctorate in industrial/organizational psychology with a specialization in organization development from the University of Memphis. She is the owner of a small business that works with organizations in both the public and private sectors, consulting on matters of strategic planning, training, and human/systems integration.