Interviews

Abstract

In survey research, data collection instruments can be administered either as questionnaires in which there is no intervention between the data collection instrument and the subject or as an interview conducted by a human being. The intervention of a human interviewer between the survey instrument and the subject's responses has both advantages and disadvantages. In particular, interviewer bias and interviewer effects can impact the quality of the data obtained from the subject. However, interviewing techniques—in particular, cognitive interviewing that employs probing or "think aloud" methodologies—can improve the quality of the data gathered in a survey. These methods also help survey designers develop better survey instruments and help researchers better understand and predict human behavior. When properly used, interviewing techniques can give researchers both a depth and breadth of information that cannot be gathered through other data collection techniques.

Overview

From a research point of view, interviews are part of the survey research methodology. In this type of research, data about the opinions, attitudes, or reactions of the members of a sample are gathered using a survey instrument. Surveys can be administered in one of two ways. The data collection instrument can be presented in the form of a questionnaire that the subject responds to without the intervention of another human being (e.g., researcher, interviewer). Surveys presented as questionnaires can be administered as paper-and-pencil instruments (e.g., through the mail or distributed by hand) or electronically (e.g., on a website or by email). Surveys administered as interviews employ the same data collection instrument but are administered by an interviewer asking questions of the subject either in person or over the telephone.

In interviews, the interviewer directs the conversation with the subject for the purpose of gathering specific information. Interviews can range from highly structured formats that use questions that are specifically worded and administered in a prescribed order from which the interviewer may not deviate to very unstructured formats in which interviewers only follow a general form and are allowed great latitude in what specific data are collected or what follow-up questions are asked. As compared to questionnaire administration, interviews can result in a higher response rate because people have more difficulty turning down a person asking for a few minutes of their time than they do in throwing out a questionnaire. In addition, an interviewer can probe for further information, whereas a questionnaire cannot. On the other hand, interviews are more expensive than questionnaires, particularly when one is attempting to collect data from a large sample.

Although survey research is used in social science research, it typically does not produce the same type of quantitative data as is generated by experimental research. Sometimes surveys are designed so that the results can be expressed in numerical form and, therefore, can potentially be analyzed using inferential statistics. However, even for those surveys that are designed so that the subjects' responses can be quantified (e.g., "on a scale from 1 to 10, rate what you think about X"), the results often violate the assumptions (e.g., true zero, interval, or ratio scale) of many of the inferential statistical tools that would be used to analyze them. In general, survey research belongs to the realm of qualitative research in which observations cannot be or are not quantified (i.e., expressed in numerical form).

This does not mean, however, that surveys cannot be used as part of the scientific method. The questions investigated in behavioral and social science research are often extremely complex, and survey research methodology can be employed as part of the inductive reasoning process to better understand the importance of and relationship between variables observed in the real world. The results of this process can often later be used to experimentally test scientific theories as part of the scientific method. Further, there are often ethical considerations that prevent social and behavioral scientists from intentionally manipulating variables and collecting data that can be analyzed using inferential statistics (e.g., to better understand the relationship between personal support systems and recovery after the death of a loved one, it would be both unethical and illegal to experimentally manipulate whether or not the loved ones of the subjects lived or died). Survey research can be used to ethically gather information that could not otherwise be obtained. Also, as shown in Figure 1, it can be used with the paradigms of the scientific method and theory-building process to help the researcher better understand observed behavior.

Figure 1. (image: ors-soc-1112-126621.jpg)

Issues

Interviewer Bias. Like any research methodology, interviews have both advantages and disadvantages and are subject to various pitfalls. As opposed to a survey instrument that is administered as a questionnaire which, therefore, presents questions to all subjects in a standardized format, the interview paradigm introduces an additional extraneous variable that can negatively or positively influence the results: the interviewer. This influence can come from a variety of sources. Interviewer bias occurs when the individual administering the interview has certain expectations, beliefs, prejudices, or other attitudes that may affect the interview process and the subsequent interpretation of data collected through the interview process. If, for example, an interviewer thinks that women are unable to give thoughtful, long responses to interview questions, he or she may ask the questions of women in such a way that they are unlikely to give long responses but ask questions of men in a different way that encourages them to do the opposite. The possibility of interviewer bias is of particular concern when using an unstructured interview. In such interviews, questions are open-ended rather than forced choice in order to allow the interviewer to probe for more information or to allow the subject to think aloud. Although the purpose of the unstructured interview is to gather additional information and higher-quality data, a biased or untrained interviewer can effectively produce the opposite effect.

Interviewer Effects. Specific biases on the part of the interviewer are not the only things that can impact the quality of the data engendered in an interview. Interviewer effects are the influence of the interviewer's behaviors and attributes on the subject's response in an interview situation. For example, the appearance, demeanor, training, age, gender, and ethnicity of the interviewer may all affect the way that a subject perceives the interview or responds to the questions during an interview. In some cases, the subject may try to please the interviewer by giving responses that he or she thinks the interviewer may want to hear or, in other cases, may give non-responsive answers in order to negatively impact the value of the data collected by an interviewer that he or she does not like. Research has found, for example, that female subjects give more feminist responses to female interviewers than they do to male interviewers. Likewise, African Americans tend to give more detailed responses concerning race-related issues to African American interviewers than to those of other races. Furthermore, in some cases, gaining the perspective of a child in a study is crucial, and experts have cautioned that additional ethical efforts often need to be made to make a young interviewee comfortable enough to participate in such an interview. Some researchers suggest that children serving as interviewees feel an inherently unequal balance of power and that researchers must work to establish trust and appear less intimidating in order to obtain results more efficiently (Kutrovátz, 2017). Overall, it has been suggested that, in addition to interviewer effects themselves, interviewers should consider the environment in which the interview is conducted and its possible effects on the interviewee, adjusting their approach accordingly (Ecker, 2017).

Interviewer-Interviewee Interactions. The way that the interviewer interacts with the subject can have an impact on a structured interview, as well. A surly or condescending interviewer can easily create a hostile environment in which the subject is unlikely to give additional information or, sometimes, even to answer basic survey questions honestly. However, the same can be true for an interviewer who is too friendly. Although friendliness can lead to a situation in which the interviewer can gather more valuable information from the subject, it can also lead to a situation in which the subject attempts to give information that he or she thinks will please the interviewer rather than giving responses that truly represent his or her real opinions. In many instances, this problem can be overcome through interviewer training in which interviewers learn how to ask questions in a neutral way, effectively probe for additional information, and not let their own beliefs or opinions bias the interview results. Following the COVID-19 pandemic, as more jobs remained remote, more interviews were conducted remotely over platforms such as Zoom, Google Meet, and Microsoft Teams. Virtual interviews introduced new issues in interviewer-interviewee interactions, and companies and prospective employees were forced to alter traditional views of the interview process to accommodate new virtual standards (Mann, 2021).

Determining Reliability. Because of these potential problems with collecting data via interviews, it is important to take the characteristics of both the interviewer and the subjects into account when designing a research study that utilizes interviews as a data collection tool. One of the ways that this can be done is to determine the inter-interviewer reliability when more than one interviewer is used to collect data. In general, reliability is the degree to which a data collection or assessment instrument consistently measures a characteristic or attribute. The characteristics of the different interviewers and the way that they ask questions or interact with the subjects can create differences in the results that they obtain. One way to determine the inter-interviewer reliability when more than one interviewer is used is to have all interviewers administer the interview to the same small sample of subjects and then to statistically analyze the subjects' responses to determine whether or not there was a difference in their answers depending on who had asked the questions. If it is found that subjects react differently to some of the interviewers, the researcher could give the interviewers additional training or even eliminate some of the interviewers from the study. It is important that the survey instrument is reliable and generates approximately the same results every time it is administered, no matter who the interviewer is. Otherwise, the survey is not reliable, which means, in turn, that it also cannot be valid (i.e., it is not collecting the data that it was designed to collect). In such a situation, the data collected are not of use to the researcher.
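The agreement check described above can be sketched in code. The article does not prescribe a particular statistic, and the data below are invented for illustration; Cohen's kappa is one common chance-corrected measure of agreement between two raters who code the same subjects' responses:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two raters coding the same subjects."""
    n = len(codes_a)
    # Observed proportion of subjects on which the two raters agree
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the same ten subjects by two interviewers
interviewer_1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
interviewer_2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
kappa = cohens_kappa(interviewer_1, interviewer_2)  # ~0.58: moderate agreement
```

A kappa near 1 indicates that the interviewers are eliciting and recording responses consistently; a low value might prompt the researcher to retrain or eliminate an interviewer, as described above.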

Applications

When properly used, interviewing techniques can give the researcher both a depth and breadth of information that cannot be gathered using other data collection techniques. However, as discussed above, the use of interviews is not without potential pitfalls. In addition to the effect of the interviewer on the subject and rating errors by the subject, the validity of an interview is dependent on the thoroughness and clarity with which the survey instrument is designed. Further, since survey instruments are often exploratory in nature, a survey instrument may not include questions on all the variables that underlie behavior. Although a structured survey instrument in which the interviewer is not allowed to deviate can help increase the reliability and validity of an instrument, it can also frequently yield a long and complex list of questions to which subjects are not willing to respond. Limiting which questions are asked can also suppress important data that can aid in the building of a more realistic model of behavior or otherwise aid the researcher in understanding the complex interactions of variables.

Cognitive Interviewing. To help overcome such shortcomings of traditional interview methods, the social sciences have begun to incorporate cognitive interviewing techniques into their toolkit to aid researchers and practitioners in gathering information from subjects. Cognitive interviewing is a structured interview technique in which the interviewer guides the interview to collect additional information regarding the subject's response to the survey questions in order to help determine if the question is successful in garnering the intended information. The subject may be asked to recall everything about a situation (even if it seems irrelevant), recall events in a different order, or recall from a different perspective in order to enhance recollection. Cognitive interviewing relies on the principles of cognition and memory retrieval.

In their analysis of the state of the art in cognitive interviewing, Beatty and Willis (2007) note that there is wide variation in the way that this technique is implemented in the field. There are two primary paradigms for cognitive interviewing. In the "think aloud" approach, the subject verbalizes his or her thoughts while answering the survey questions (e.g., "tell me what you are thinking"), and the interviewer intervenes as little as possible so as not to affect the generation of this information. In probing, the interviewer guides the process more proactively, asking targeted follow-up questions after the subject responds (e.g., "can you rephrase the question for me?"). The purpose of both approaches to cognitive interviewing is to collect additional information that normally would not be acquired so as to determine if the survey questions are meeting desired objectives. In addition, hybrid approaches to cognitive interviewing have also been developed that include characteristics of both the probing and think-aloud methods.

Although both paradigms of cognitive interviewing have the same overarching goal, their assumptions and implementations differ. In the think-aloud paradigm, questions and procedures are relatively standardized, thereby reducing the probability of introducing interviewer bias into the data collection process. In addition, in this paradigm, interviewers do not need to be familiar with the design of the survey instrument or the content of its specific questions in order to guide the subject in offering additional information. Further, researchers have noted that the think-aloud paradigm tends to be less artificial than the probing paradigm. In addition, since "think aloud" data are produced during the response process rather than after the completion of the response as occurs with probing, it has also been suggested that this paradigm provides more pure responses. However, an additional body of evidence questions whether the responses gathered from thinking aloud truly are literal reflections of the subject's thought processes. Cognitive interviewing techniques have been employed often in surveys conducted by the federal government, as well as in interviewing witnesses to crimes—an area that has been especially problematic in the retrieval of accurate information (McLeod, 2023).

Probing. The probing paradigm, too, offers researchers certain advantages. Research has shown that probing can help focus the subjects' behavior. In addition, since probing occurs after the question is answered, it does not interfere with the response process, as can happen in the think-aloud paradigm. It has been theorized in the literature that the use of probing may create less interference in the subject's response process while still capturing the additional data stored in the subject's short-term memory. In addition, the use of probing by the interviewer can be of great help to the designer of the survey instrument by generating additional information that would not otherwise be gathered using the survey instrument. However, probing introduces a greater possibility of interviewer bias and interviewer effects into the interviewing situation. It is not merely the use of probing that generates better information for the researcher; the quality of the data gathered through probing depends on the skill of the interviewer doing the probing. An interviewer unskilled in probing may actually obfuscate the data rather than clarify it.

Conclusion

Survey research is an important tool for social and behavioral science researchers. Surveys can allow researchers to gather data not otherwise available for ethical or practical reasons. These data can be used as part of the scientific method to help researchers better understand and predict behavior. Surveys and questionnaires that are administered electronically or over the Internet remove the human element from data gathering, eliminating some opportunities for bias; interviews, by contrast, can be inherently less objective. As part of survey research methodology, interviews, like questionnaires, have both advantages and disadvantages. The intervention of a human interviewer between the survey instrument and the subject's responses can be both advantageous and disadvantageous. In particular, it is important to train interviewers in an attempt to minimize the impact of interviewer bias and interviewer effects on the results of the study. Similarly, when using multiple interviewers, it is important to determine the inter-interviewer reliability and to establish both the reliability and validity of the survey instrument. Since the late twentieth century, the methods of cognitive interviewing (including probing and thinking aloud) have been incorporated into many interviews. These techniques can help interviewers gather better information, help survey designers craft better survey instruments, and help researchers and practitioners better understand human behavior. When properly used, interviewing techniques can give the researcher both a depth and breadth of information that cannot be gathered using other data collection techniques.

Terms & Concepts

Cognitive Interview: A structured interview technique in which the interviewer guides the conversation with the subject in order to collect additional information regarding the subject's response to the survey questions. The subject may be asked to recall everything about a situation (even if an item seems irrelevant), recall events in a different order, or recall from a different perspective in order to enhance recollection. The two basic techniques of cognitive interviewing are probing and thinking aloud. Cognitive interviewing relies on the principles of cognition and memory retrieval.

Experiment: A situation under the control of a researcher in which an experimental condition (independent variable) is manipulated, and the effect on the experimental subject (dependent variable) is measured. Most experiments are designed using the principles of the scientific method and are statistically analyzed to determine whether the results are statistically significant.

Inductive Reasoning: A type of logical reasoning in which inferences and general principles are drawn from specific observations or cases. Inductive reasoning is a foundation of the scientific method and enables the development of testable hypotheses from particular facts and observations.

Inter-Interviewer Reliability: The consistency with which different interviewers obtain similar responses from subjects using the same interview instrument. Interviewer bias and interviewer effects can lead to low inter-interviewer reliability.

Interview: In survey research, an interview is a data collection technique in which the researcher directs a conversation with the subject for the purpose of gathering specific information. Interviews can range from highly structured instruments (with questions that are specifically worded and administered in a prescribed order from which the interviewer may not deviate) to unstructured (in which interviewers only follow a general form and are allowed great latitude in what specific data are collected or what follow-up questions they are allowed to ask). Interviews can be carried out in person or over the telephone.

Interviewer Bias: The expectations, beliefs, prejudices, or other attitudes of the interviewer that may affect the interview process and the subsequent interpretation of data collected through the interview process.

Interviewer Effects: The influence of the interviewer's behaviors and attributes on the subject's response in an interview situation. For example, appearance, demeanor, training, age, gender, and ethnicity may all affect the way that a subject perceives the interview or responds to questions during an interview. In some cases, the subject may try to please the interviewer by giving responses that he or she thinks the interviewer may want to hear or, in other cases, may give non-responsive answers in order to negatively impact the value of the data collected by an interviewer that he or she does not like.

Qualitative Research: Scientific research in which observations cannot be or are not quantified (i.e., expressed in numerical form).

Reliability: The degree to which a data collection or assessment instrument consistently measures a characteristic or attribute. An assessment instrument cannot be valid unless it is reliable.

Scientific Method: General procedures, guidelines, assumptions, and attitudes required for the organized and systematic collection, analysis, interpretation, and verification of data that can be verified and reproduced. The goal of the scientific method is to articulate or modify the laws and principles of a science. Steps in the scientific method include problem definition based on observation and review of the literature, formulation of a testable hypothesis, selection of a research design, data collection and analysis, extrapolation of conclusions, and development of ideas for further research in the area.

Subject: A participant in a research study or experiment whose responses are observed, recorded, and analyzed.

Survey: (a) A data collection instrument used to acquire information on the opinions, attitudes, or reactions of people; (b) a research study in which members of a selected sample are asked questions concerning their opinions, attitudes, or reactions, which are gathered using a survey instrument or questionnaire for the purpose of scientific analysis; typically, the results of this analysis are used to extrapolate the findings from the sample to the underlying population; (c) to conduct a survey on a sample.

Survey Research: A type of research in which data about the opinions, attitudes, or reactions of the members of a sample are gathered using a survey instrument. The phases of survey research are goal setting, planning, implementation, evaluation, and feedback. As opposed to experimental research, survey research does not allow for the manipulation of an independent variable.

Validity: The degree to which a survey or other data collection instrument measures what it purports to measure. A data collection instrument cannot be valid unless it is reliable.

Bibliography

Beatty, P. C., & Willis, G. B. (2007, Summer). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71, 287-311. Retrieved April 21, 2008, from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=25977055&site=ehost-live

Blair, J., & Conrad, F. G. (2011). Sample size for cognitive interview pretesting. Public Opinion Quarterly, 75, 636-658. Retrieved November 4, 2013, from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=67133709&site=ehost-live

Brunton-Smith, I., Sturgis, P., & Williams, J. (2012). Is success in obtaining contact and cooperation correlated with the magnitude of interviewer variance? Public Opinion Quarterly, 76, 265-286. Retrieved November 4, 2013, from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=84469247&site=ehost-live

Ecker, J. (2017). A reflexive inquiry on the effect of place on research interviews conducted with homeless and vulnerably housed individuals. Forum: Qualitative Social Research, 18(1), 149–168. Retrieved October 22, 2018, from EBSCO Online Database Sociology Source Ultimate. http://search.ebscohost.com/login.aspx?direct=true&db=sxi&AN=120541406&site=ehost-live&scope=site

Goerman, P. L., & Caspar, R. A. (2010). A preferred approach for the cognitive testing of translated materials: Testing the source version as a basis for comparison. International Journal of Social Research Methodology, 13, 303-316. Retrieved November 4, 2013, from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=53564593&site=ehost-live

Kutrovátz, K. (2017). Conducting qualitative interviews with children—Methodological and ethical challenges. Corvinus Journal of Sociology & Social Policy, 8(2), 65–88. Retrieved October 22, 2018, from EBSCO Online Database Sociology Source Ultimate. http://search.ebscohost.com/login.aspx?direct=true&db=sxi&AN=126989652&site=ehost-live&scope=site

Mann, L. (2021, August 17). Council Post: Five Tips For Virtual Interviews: How Zoom Changes Hiring. Forbes. Retrieved May 29, 2023, from https://www.forbes.com/sites/forbescommunicationscouncil/2021/08/17/five-tips-for-virtual-interviews-how-zoom-changes-hiring/?sh=3785e9fb65eb

McLeod, S. (2023, February 24). Cognitive interview technique. Simply Psychology. Retrieved May 26, 2023, from https://www.simplypsychology.org/cognitive-interview.html

Powell, M. B., Hughes-Scholes, C. H., & Sharman, S. J. (2012). Skill in interviewing reduces confirmation bias. Journal of Investigative Psychology & Offender Profiling, 9, 126-134. Retrieved November 4, 2013, from EBSCO Online Database SocINDEX with Full Text. http://search.ebscohost.com/login.aspx?direct=true&db=sih&AN=76372168&site=ehost-live

Schaefer, R. T. (2002). Sociology: A brief introduction (4th ed.). Boston: McGraw-Hill.

Suggested Reading

Anyan, F. (2013). The influence of power shifts in data collection and analysis stages: A focus on qualitative research interview. Qualitative Report, 18, 1-9. Retrieved November 4, 2013, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=87514136&site=ehost-live&scope=site

Gobo, G. (2006, October). Set them free: Improving data quality by broadening the interviewer's tasks. International Journal of Social Research Methodology, 9, 279-301. Retrieved April 21, 2008, from EBSCO Online Database Academic Search Complete. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=22373085&site=ehost-live&scope=site

Jacob, S. A., & Paige Furgerson, S. S. (2012). Writing interview protocols and conducting interviews: Tips for students new to the field of qualitative research. Qualitative Report, 17, 1-10. Retrieved November 4, 2013, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=88905599&site=ehost-live&scope=site

Leighton, J. P. (2017). Using think-aloud interviews and cognitive labs in educational research. New York, NY: Oxford University Press.

Olson, K., & Peytchev, A. (2007, Summer). Effect of interviewer experience on interview pace and interviewer attitudes. Public Opinion Quarterly, 71, 273-286. Retrieved April 21, 2008, from EBSCO Online Database Academic Search Premier. http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=25977054&site=ehost-live&scope=site

Waddington, P. A. J., & Bull, R. (2007, Summer). Cognitive interviewing as a research technique. Social Research Update, 50, 1-4. Retrieved April 21, 2008, from EBSCO Online Database SocINDEX. http://search.ebscohost.com/login.aspx?direct=true&db=snh&AN=26746361&site=ehost-live&scope=site

Essay by Ruth A. Wienclaw, Ph.D.

Dr. Ruth A. Wienclaw holds a doctorate in industrial/organizational psychology with a specialization in organization development from the University of Memphis. She is the owner of a small business that works with organizations in both the public and private sectors, consulting on matters of strategic planning, training, and human/systems integration.