Let’s be honest: probably half the problems we see in general medicine involve some interaction between mind and body. When we use an evidence-based approach that draws on data from randomized controlled trials with placebo controls, it behooves us to understand the role the placebo response plays in patient care. Recent research has established that trial design has a more profound impact on the placebo response rate than individual factors such as patient age, sex, or baseline disease status. To that point, we’d like to highlight a recent meta-analysis of trials evaluating management options for irritable bowel syndrome (IBS), with a focus on the methodology rather than the results per se. While the analysis centered on IBS management, the placebo response steals the spotlight and deserves discussion, even though this particular study will not be found in DynaMed.
The authors evaluated 73 randomized trials of a variety of interventions for IBS. Using the FDA and current Rome criteria as the gold standard for effectiveness, they compared pain and bowel movement frequency at baseline with responses to both the study drug and the placebo in each trial. Both the intervention and placebo effects were quantified; when the intervention performed better, the difference between the two was defined as the therapeutic gain.
As expected from prior studies, the size of the placebo response had more to do with study methods than with the types of patients enrolled. For example, applying the relatively stricter FDA criteria dropped the overall placebo response rate from 27.3% to 17.9%. Participants in parallel-group studies had greater odds of experiencing a placebo response than those in crossover studies (odds ratio [OR] 2.22, 95% CI 1.23-4.01), and patients in trials with a short run-in period (< 2 weeks) had significantly greater odds of experiencing a placebo response than those in trials with a run-in period > 2 weeks (OR 2.19, 95% CI 1.47-3.26).
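For readers who want to see where numbers like OR 2.22 come from, here is a minimal sketch of how an odds ratio is computed from two response proportions. The placebo response rates used below are hypothetical illustrations, not figures from the meta-analysis.

```python
# Sketch of odds-ratio arithmetic. The proportions here are made up
# for illustration; they are not data from the IBS meta-analysis.

def odds(p):
    """Convert a proportion (e.g., a placebo response rate) to odds."""
    return p / (1 - p)

def odds_ratio(p_group, p_reference):
    """Ratio of the odds of response in one group versus a reference group."""
    return odds(p_group) / odds(p_reference)

# Hypothetical: 30% placebo response in parallel-group trials
# versus 16% in crossover trials.
print(round(odds_ratio(0.30, 0.16), 2))
```

Note that an odds ratio of about 2 does not mean the response rate doubled; odds exaggerate relative differences as proportions grow, which is why odds ratios and risk ratios should not be read interchangeably.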
Why should we care? This study prompts us to question some basic assumptions we make about the role of placebos in research. We often focus on the absolute difference between the intervention and the placebo (the therapeutic gain). After all, this is how we calculate the number needed to treat or harm, which allows us to advise patients on which interventions will improve their outcomes compared to doing nothing. Study designers try to maximize the potential therapeutic gain and minimize the placebo effect as much as possible. One common method is to use a run-in period before the real trial begins and consider excluding people who experience a placebo response during that period, so that precious resources can be devoted to a detailed assessment of therapeutic gain. The authors of this study don’t recommend excluding placebo responders, but they do recommend a run-in of at least 2 weeks to minimize the placebo response. This makes sense: we don’t want to recommend therapies that offer patients no benefit beyond time, attention, or whatever other placebo effect we may see. Minimizing the placebo response is a good way to avoid type 1 errors, which involve falsely concluding that an intervention works when it doesn’t. But the other side of the coin, the side we have to remember when we minimize the placebo response in our trial design, is that artificially minimizing the impact of placebos makes it more likely we will make a type 2 error and miss an intervention that is actually beneficial, even if part of its benefit comes from the mysterious placebo response.
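Since therapeutic gain and the number needed to treat (NNT) anchor the argument above, here is a minimal sketch of that arithmetic. The response rates below are hypothetical illustrations, not results from the meta-analysis.

```python
import math

# Sketch of therapeutic gain and NNT. The 40% and 25% response rates
# are hypothetical, chosen only to make the arithmetic concrete.

def therapeutic_gain(intervention_rate, placebo_rate):
    """Absolute difference between intervention and placebo response rates."""
    return intervention_rate - placebo_rate

def number_needed_to_treat(intervention_rate, placebo_rate):
    """NNT = 1 / absolute risk difference, rounded up to a whole patient."""
    gain = therapeutic_gain(intervention_rate, placebo_rate)
    if gain <= 0:
        raise ValueError("no therapeutic gain; NNT is undefined")
    return math.ceil(1 / gain)

# Hypothetical trial: 40% respond to the drug, 25% to placebo.
gain = therapeutic_gain(0.40, 0.25)        # absolute gain of about 0.15
nnt = number_needed_to_treat(0.40, 0.25)   # ~7 patients treated per extra responder
print(gain, nnt)
```

The sketch makes the editorial's point concrete: shrink the placebo rate artificially and the computed gain (and hence the NNT) flatters the drug, while an inflated placebo rate can bury a real effect.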
DynaMed EBM Focus Editorial Team
This EBM Focus was written by Dan Randall, MD, Deputy Editor for Internal Medicine at DynaMed. Edited by Alan Ehrlich, MD, Executive Editor at DynaMed and Associate Professor in Family Medicine at the University of Massachusetts Medical School, Carina Brown, MD, Assistant Professor at Cone Health Family Medicine Residency, and Katharine DeGeorge, MD, MS, Associate Professor of Family Medicine at the University of Virginia and Clinical Editor at DynaMed.