Concerning placebos and the “placebo effect,” there is a distinction that I have struggled to articulate, a distinction I have also noticed highly intelligent humans failing to make. I recently found an excellent explanation of the distinction in a paper questioning the meaning of recent “open-label placebo” trials, and thought it was worth a short piece explaining why it’s important.
Here is the distinction as the authors put it, with citations removed:
Before reviewing findings from OLP studies, it is crucial to clearly demarcate between two distinctive uses for the term placebo. First, is the usage of placebos in RCTs. Here the term is often understood to refer to a certain kind of ‘thing’ (eg, saline injections or sugar pills). Strictly speaking, this interpretation is incorrect: instead, placebos in RCTs ought to be conceived as methodological tools since their function is to duplicate the ‘noise’ associated with clinical trials including spontaneous remission, regression to the mean, Hawthorne effects and placebo effects. Properly understood, then, these types of placebos are deployed as controls that are specifically designed to evaluate the difference—if any—between a control group and a particular treatment under scrutiny. Ideally, in RCTs, controls should mimic the appearance and modality of the particular treatment or medical intervention under investigation. In contrast, placebos in clinical contexts are interventions that may be intentionally or unintentionally administered by practitioners either with the goal of placating patients and/or of eliciting placebo effects.

Blease, C. R., Bernstein, M. H., & Locher, C. (2020). Open-label placebo clinical trials: is it the rationale, the interaction or the pill? BMJ Evidence-Based Medicine, 25(5), 159-165.
On the one hand, there is the use of placebos in randomized controlled trials, in which the point is to “duplicate the noise” that’s likely to exist in the treatment group. On the other hand, there are hypothesized “placebo effects” that may take the form of real healing, which is not at all the same.
For a specific example, just because antidepressant trials show enormous improvements in their placebo arms does not mean that depression responds to placebo in real life. The proper conclusion to draw from the size of those improvements is that the measurement of depression is extremely noisy, to put it in the most polite way.
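To see how pure noise can masquerade as healing, here is a minimal sketch with hypothetical numbers (not drawn from any real trial). Patients whose true severity never changes are measured with noise, and only high scorers are enrolled, since trials typically require a minimum baseline score. At follow-up, the group average drops purely from regression to the mean:

```python
import random

random.seed(0)

TRUE_SEVERITY = 20.0   # stable underlying severity; nobody actually changes
NOISE_SD = 6.0         # measurement noise on a hypothetical depression scale
ENROLL_CUTOFF = 24.0   # trials often require a minimum baseline score

baseline, followup = [], []
for _ in range(100_000):
    b = random.gauss(TRUE_SEVERITY, NOISE_SD)
    if b >= ENROLL_CUTOFF:  # only high scorers get enrolled
        baseline.append(b)
        # a second, independent noisy measurement of the SAME severity
        followup.append(random.gauss(TRUE_SEVERITY, NOISE_SD))

mean_b = sum(baseline) / len(baseline)
mean_f = sum(followup) / len(followup)
print(f"mean score at enrollment: {mean_b:.1f}")
print(f"mean score at follow-up:  {mean_f:.1f}")
print(f"apparent 'improvement':   {mean_b - mean_f:.1f} points")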
While true “placebo effects” of the healing variety may exist, it’s worth engaging with these authors’ concerns about how that could be demonstrated, particularly in open-label placebo trials, where the hope is to pave the way toward ethical placebo treatment. The choice of control is particularly tricky: as with antidepressant trials, simply using “treatment as usual” or a “wait list” as the control likely inflates apparent effects. True blinding requires a great deal of subtlety and effort in research design.
In summary: noise isn’t healing.
Now we can all pretend that we knew it all along and never mistook the one for the other!
2 thoughts on “An Important Misconception About Placebos”
Yes, this use of a placebo is very similar to reducing the possibility of a confounding variable. (If the control and treatment groups differ in some way that (a) affects the dependent variable and (b) isn’t the independent variable you’re trying to investigate, then you have a confounder.)
It isn’t necessarily the same as genuine healing, because the confounder might affect your measurement without affecting the underlying thing you’re trying to investigate … it might be pure noise.
Having said that, I think we should be open to the possibility that there is real healing in the placebo effects for depression. The patients really are getting better … it’s just that the healing isn’t caused by the drug the experimenter is interested in, but instead is down to something else that happens in the experimental protocol.
I am very suspicious of the experimental results when CBT is used to treat PNES (psychogenic non-epileptic seizures) or chronic fatigue syndrome. At least one of the papers in this area had a technical corrigendum (basically, a retraction) issued by the journal. The pattern here is that patients improve on self-report but not on more objective measures, such as seizure frequency or whether they’re becoming more active as measured by monitoring devices. If the patients say they’re feeling better but are still having seizures at exactly the same frequency as before treatment, I’m inclined to think it’s probably noise in the self-report measure.
See, for example:
The corrigendum to “Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial” says:
“The revised highlights include a new statement, which reads: “There were no differences between interventions in primary outcomes at long-term follow up”. This is a substitute for: “Guided graded exercise self-help (GES) may lead to sustained improvement in fatigue.””
That is a really concerning corrigendum. The actual clinical trial data showed that the treatment didn’t work (no differences in primary outcomes), but the initial paper buried that finding and instead said the treatment “may lead to sustained improvement in fatigue,” despite the authors having experimental evidence to the contrary.
This type of questionable research practice seems to be rife in that research area.