Ethics and participant rights in experimentation

  • DATE: 1960s forward
  • TYPE OF PSYCHOLOGY: Psychological methodologies

SIGNIFICANCE: One of the tasks of government is to protect people from exploitation and abuse, including potential abuse by unethical researchers. American society has instituted several levels of control over research, thus ensuring that experimental ethics reflect the ethics of society at large. Still, few ethical decisions are easy, and many remain controversial.

Introduction

A primary task of the government is to protect people from exploitation. Since scientists are sometimes in a position to take advantage of others and have occasionally done so, the government has a role in regulating research to prevent the exploitation of research participants. On the other hand, excessive regulation can stifle innovation; if scientists are not allowed to try new, potentially risky experimental techniques, science will not progress, and neither will human understanding. This puts the government in a difficult position: since research topics, scientific methodology, and public attitudes are continuously changing, it would be impossible to write a single law or set of laws defining which research topics and methods are acceptable and which are not. As soon as such a law was written, it would be out of date or incomplete.

In 1974, President Richard Nixon signed the National Research Act into law following public outcry over the Tuskegee Syphilis Study, an unethical study in which hundreds of Black men with syphilis were left untreated for forty years. The National Research Act established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which aimed to eliminate such unethical practices conducted in the name of science. Although the act has since been updated, scientists in the late 2010s and early 2020s argued that it needed an extensive overhaul.

Institutional Review Boards

The United States Congress has addressed this issue of research ethics by letting local communities determine what research with human participants is and is not appropriate according to contemporary local standards. Today, each institution conducting research must have a committee called an institutional review board (IRB) consisting of a minimum of five members, all of whom belong to the local community. To ensure that the committee is kept up to date on current human research methodologies, the IRB membership must include at least one scientist. At least one member must represent the general public and have no official or unofficial relationship with the institution where the research is taking place. A single person may fill multiple roles. IRBs are also required to ensure that the board consists of both men and women and includes representatives from a variety of professions.

Each IRB is required to review written proposals for all local research on human participants before that research can begin. At most large institutions, the IRB has enough staffing to break into subcommittees to review proposals from different areas. It is the job of the IRB to ensure that unethical research is screened out before it starts. Government agencies that fund research projects will not consider a proposal until it has been approved by the local IRB, and if research is conducted at an institution without IRB approval, the government can withhold all funds to that institution, even funds unrelated to the research.

To evaluate all aspects of a proposed research project, the IRB must have sufficient information about the recruitment of participants, the methods of the study, the procedures that will be followed, and the qualifications of the researchers. The IRB also requires that proposals include a copy of the informed consent contract that each potential participant will receive. This contract allows potential participants to see, in writing, a list of all possible physical or psychological risks that might occur as a result of participation in the project. Potential participants cannot be coerced or threatened into signing the form, and the form must tell participants that even if they agree to begin the research study, they may quit at any time for any reason. Informed consent contracts must be written in nontechnical prose that can be understood by any potential participant; it is generally recommended that contracts use vocabulary consistent with an eighth-grade education.

Except for the file holding the signed contracts between the researcher and the participants, the names of participants generally do not appear anywhere in the database or in the final written documents describing the study results. Data are coded without using names, and in the informed consent contract, participants are assured of the complete anonymity of their responses or test results unless there are special circumstances that require otherwise. If researchers intend to use information in any way that may threaten participants’ privacy, this issue needs to be presented clearly in the informed consent contract before the study begins.

Deception

Occasionally, researchers in psychology use a form of deception by telling the participants that the study is about one thing when it really is about something else. Although it is usually considered unethical to lie to participants, deception is sometimes necessary because participants may behave differently when they know what aspect of their behavior is being watched. This is called a demand characteristic of the experimental setting. More people will probably act helpful, for example, when they know that a study is about helpfulness. A researcher studying helpfulness thus might tell participants that they are going to be involved in a study of something else, such as reading. Participants might then be asked to wait in a room until they are each called into the test room. When the first name is called, a person may get up and trip on their way out of the room. In actuality, the person who tripped is the experimenter’s assistant, although none of the other participants knows that, and the real point of the research is to see how many of the participants get up to help the person who fell down. In situations like these, where demand characteristics would be likely, IRBs will allow deception as long as it is not severe and the researchers debrief participants at the end by explaining what was really occurring. After deception is used, experimenters must be careful to make sure that participants do not leave the study feeling angry at having been “tricked”; ideally, they should leave feeling satisfaction for having contributed to science.

Even when participants have not been deceived, researchers must give an oral or written debriefing at the end of the study. Researchers are also obliged to ensure that participants can get help if they experience any negative effects from their participation in the research. Ultimately, if a participant feels that they were somehow harmed or abused by the researcher or the research project, a civil suit can be filed to claim compensation. Since participants are explicitly told that they can drop out of a study at any time for any reason, however, such long-term negative feelings should be extremely rare.

Special Issues in Clinical Trials

Clinical psychology is perhaps the most difficult area in which to make ethical research decisions. One potential problem in clinical research that is usually not relevant for other research settings is that of getting truly informed consent from the participants. Participants in clinical research are explicitly selected because they meet the criteria for some mental or emotional condition. By ensuring that participants meet the relevant criteria, researchers ensure that their study results will be relevant to the population that suffers from the disorder; on the other hand, depending on the disorder being studied, participants may not be capable of giving informed consent. A person with schizophrenia, dementia, Alzheimer’s disease, or another condition that precludes full comprehension of the situation cannot be truly “informed.” In the case of individuals declared incompetent by the courts, a designated guardian can give informed consent for participation in a research study. There are also cases of participants being legally competent yet incapable of truly understanding the consequences of what they read. Authority figures, including doctors and psychologists, can have a dramatic power over people; that power is likely to be even stronger for someone who is not in full control of their life, who has specifically sought help from others, and who is trusting that others have their best interests in mind.

Another concern about clinical research is the susceptibility of participants to potential psychological damage. The typical response of research participants is positive: they feel they are getting special attention and respond with healthy increases in self-esteem and well-being. A few, however, may end up feeling worse. For example, if they feel no immediate gain from the treatment, they may label themselves as “incurable” and give up, leading to a self-fulfilling prophecy.

A third concern in clinical research regards the use of control or placebo treatments. Good research designs always include both a treatment group and a control group. When there is no control group, changes in the treatment group may be attributed to the treatment when, in fact, they may have been caused by the passage of time or by the fact that participants were getting special attention while in the study. Although control groups are necessary to ensure that research results are interpreted correctly, the dilemma that arises in clinical research is that it may be unethical to assign people to a control group if they need some kind of intervention.

One way of dealing with this dilemma is to give all participants some form of treatment and to compare the different treatment outcomes to one another rather than to a no-treatment group. This works well when there is already a known treatment with positive effects. Not only are there no participants who are denied treatment, but the new treatment can also be tested to see if it is better than the old one rather than simply better than nothing. Sometimes, if there is no standard treatment for comparison, participants assigned to the control group are put on a “waiting list” for the treatment; their progress without treatment is then compared with that of participants who are getting treatment right away. To some extent, this mimics what happens in nonresearch settings, as people sometimes must wait for therapy, drug abuse counseling, and so on. On the other hand, in nonresearch settings, those who get assigned to waiting lists are likely to be those in less critical need, whereas, in research, assignment to treatment and nontreatment groups must be random. Assigning the most critical cases to the treatment group would bias the study’s outcome, yet assigning participants randomly may be perceived as putting research needs ahead of clients’ needs.

The Milgram Studies

Concern about potential abuse of research participants arose in the 1960s in response to publicity following a series of studies by Stanley Milgram at Yale University. Milgram was interested in finding out how physicians who had devoted their lives to helping people were so easily able to hurt and even kill others in experiments in Nazi concentration camps.

In Milgram’s now-famous experiment, each participant was paired with one of Milgram’s assistants but was told that this partner was another volunteer. Both then drew slips of paper assigning them to the role of either “teacher” or “learner.” In actuality, both slips always said “teacher,” but the assistant pretended that theirs said “learner”; this way, the real participant was always assigned the role of teacher. Milgram then showed participants an apparatus that supposedly delivered shocks. Teachers were placed on one side of a partition and were instructed to deliver a shock to the learner on the other side whenever the learner made a mistake on a word-pairing task. The apparatus did not actually deliver shocks, but the learners pretended that it did, and as the experiment continued and the teachers were instructed to give larger and larger shocks, the learners gave more and more extreme responses. At a certain point, the learners started pounding on the partition, demanding to be released. Eventually, they feigned a heart attack.

When Milgram designed this study, he asked psychiatrists and psychologists what percentage of people they thought would continue as teachers in this experiment. The typical response was about 0.1 percent. However, Milgram found that two-thirds of the participants continued to deliver shocks to the learner even after the learner had apparently collapsed. The participants were clearly upset and repeatedly expressed concern that someone should check on the learner. Milgram simply replied that although the shocks were painful, they would not cause permanent damage, and the teacher should continue. Despite their concern and distress, most participants obeyed.

Milgram’s results revealed much about the power of authority. Participants obeyed Milgram as the authority figure, even against their own moral judgment. These results help explain the abominable behavior of Nazi physicians, as well as other acts of violence committed by normal people who were simply doing what they were told. Ironically, although Milgram’s study proved enormously valuable, he was accused of abusing his own participants by “forcing” them to continue the experiment even when they were clearly upset. Critics also claimed that Milgram’s study might have permanently damaged his participants’ self-esteem. Although interviews with the participants showed that this was not true—they generally reported learning much about themselves and human nature—media discussions and reenactments of the study led the public to believe that many of Milgram’s participants had been permanently harmed. Thus began the discussion of experimental ethics that ultimately led to the modern system of regulation.

Bibliography

Boyce, Nell. “Knowing Their Own Minds.” New Scientist, vol. 20, June 1998, pp. 20–21.

Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 6th ed. Sage, 2022.

"Ethical Principles of Psychologists and Code of Conduct." American Psychological Association, www.apa.org/ethics/code. Accessed 1 Oct. 2024.

Garner, Mark, Claire Wagner, and Barbara Kawulich, eds. Teaching Research Methods in the Social Sciences. Ashgate, 2009.

"Guiding Principles for Ethical Research." National Institutes of Health, www.nih.gov/health-information/nih-clinical-research-trials-you/guiding-principles-ethical-research. Accessed 1 Oct. 2024.

Penslar, Robin Levin, ed. Research Ethics: Cases and Materials. Indiana University Press, 1995.

Perry, Gina. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. Rev. ed. Scribe, 2013.

Ritter, Frank E., et al. Running Behavioral Studies with Human Participants: A Practical Guide. Sage, 2013.

Rothman, Kenneth J., and Karin B. Michels. “The Continuing Unethical Use of Placebo Controls.” New England Journal of Medicine, vol. 331, no. 6, 1994, pp. 394–98.

Rothstein, Mark A., and Leslie E. Wolf. "National Research Act at 50: An Ethics Landmark in Need of an Update." The Hastings Center, 12 July 2024, www.thehastingscenter.org/national-research-act-at-50-it-launched-ethics-oversight-but-it-needs-an-update. Accessed 1 Oct. 2024.

Sales, Bruce D., and Susan Folkman, eds. Ethics in Research with Human Participants. APA, 2000.

Sieber, Joan E., and Martin B. Tolich. Planning Ethically Responsible Research. 2nd ed. Sage, 2013.

Slife, Brent, ed. Taking Sides: Clashing Views on Psychological Issues. 20th ed. McGraw, 2018.