Against Automaticity

An explanation of why tricks like priming, nudge, the placebo effect, social contagion, the “emotional inception” model of advertising, most “cognitive biases,” and any field with “behavioral” in its name are not real 

by a literal banana

Nothing Works That Way

Back in 2014, Kevin Simler published a provocative essay on the nature of advertising called “Ads Don’t Work That Way.” I imagine most people reading this have read it, and if not they really should read it in its entirety, but I’ll summarize as a reminder. Some advertising works in really boring ways, basically reminding potential customers that the product exists. Simler gives the example of drain cleaner; I think this is also how most fast food advertising operates. But there is also a more psychological theory of advertising, which Simler calls “emotional inception,” in which advertisers create a Pavlovian association between their products and positive emotions or other desirable attributes. In the inception theory, our little brains are extremely malleable and vulnerable to such associations, and we can’t help but associate soda with happiness, sweetened fruit snacks with being a good parent, etc.

Simler does not think this is how advertising works. His theory, which he calls “cultural imprinting,” ascribes more rationality and less pathetic malleability to our interactions with advertisements. In this model, brands create cultural messages with their advertising dollars, so that buyers know what message they can expect to be sending when they purchase products that are consumed socially. One of his major examples is beer branding, and he predicts the Bud Light advertising debacle of 2023 with incredible accuracy with one bit flipped:

If I’m going to bring Corona to a party or backyard barbecue, I need to feel confident that the message I intend to send (based on my own understanding of Corona’s cultural image) is the message that will be received. Maybe I’m comfortable associating myself with a beach-vibes beer. But if I’m worried that everyone else has been watching different ads (“Corona: a beer for Christians”), then I’ll be a lot more skittish about my purchase.

For me, this was an introduction to a new way of thinking: perhaps those phenomena that popular media and official science explain with mysterious psychological effects on weak brains – automaticity – might actually be better explained by rational processes. The rationalist community centered on LessWrong, which was an important influence on my thinking, often focused on cognitive biases, taking the work of Daniel Kahneman and even priming studies seriously as evidence for the structures of human reasoning. To their credit, these associations do not seem to have been edited out of their corpus since the replication crisis in social sciences began to demolish the automaticity literature. An important motivation of the rationalist movement, as I saw it, was that we were all very irrational beings, and had to struggle to become more rational. My argument in this essay is that we are actually very rational, but managed to convince ourselves, for a variety of (perfectly rational) reasons using a variety of tactics, that we were helpless idiots.

Ego Depletion, or Thinking Fast and Slow: The Foundation of Priming

I take the term “automaticity” from the priming researcher John Bargh, famous for “proving” that simply solving word scrambles with elderly-related words like “wrinkle” or “Florida” caused undergraduates to walk as slowly as old people. (Fewer people know that he also “proved” that the same primes cause people to be more forgetful!) 

Let’s briefly look at what Daniel Kahneman had to say about priming research in his book Thinking Fast and Slow:

When I describe priming studies to audiences, the reaction is often disbelief. This is not a surprise: System 2 believes that it is in charge and that it knows the reasons for its choices. Questions are probably cropping up in your mind as well: How is it possible for such trivial manipulations of the context to have such large effects? Do these experiments demonstrate that we are completely at the mercy of whatever primes the environment provides at any moment? Of course not. The effects of the primes are robust but not necessarily large. Among a hundred voters, only a few whose initial preferences were uncertain will vote differently about a school issue if their precinct is located in a school rather than in a church—but a few percent could tip an election. 

The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you. If you had been exposed to a screen saver of floating dollar bills, you too would likely have picked up fewer pencils to help a clumsy stranger. You do not believe that these results apply to you because they correspond to nothing in your subjective experience. But your subjective experience consists largely of the story that your System 2 tells itself about what is going on. Priming phenomena arise in System 1, and you have no conscious access to them.

He was also a believer, at the time, of another important phenomenon, “ego depletion,” which, as I will explain, is the foundation for priming-style automaticity and in fact most of the phenomena that I label as examples of automaticity:

A series of surprising experiments by the psychologist Roy Baumeister and his colleagues has shown conclusively that all variants of voluntary effort—cognitive, emotional, or physical—draw at least partly on a shared pool of mental energy. Their experiments involve successive rather than simultaneous tasks. 

Baumeister’s group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around. The phenomenon has been named ego depletion. In a typical demonstration, participants who are instructed to stifle their emotional reaction to an emotionally charged film will later perform poorly on a test of physical stamina—how long they can maintain a strong grip on a dynamometer in spite of increasing discomfort. The emotional effort in the first phase of the experiment reduces the ability to withstand the pain of sustained muscle contraction, and ego-depleted people therefore succumb more quickly to the urge to quit. In another experiment, people are first depleted by a task in which they eat virtuous foods such as radishes and celery while resisting the temptation to indulge in chocolate and rich cookies. Later, these people will give up earlier than normal when faced with a difficult cognitive task.

Ego depletion is an embarrassing moment for science: hundreds of experiments “replicated” an essentially fake phenomenon, and it took the organized efforts of preregistered ManyLabs-style replication police to put it to rest. (I should mention that one of several large replication attempts led by a pro-ego-depletion researcher managed to find a small effect, but this needs to be weighed against the other large replication efforts, the most recent led by another pro-ego-depletion researcher, Kathleen Vohs, that failed to find any such effect, and against the general silliness of the project.)

But why did they fight so hard for ego depletion? It’s because ego depletion is the foundation upon which priming and other automaticity effects rest, and if it falls, they have nowhere to stand, as outlined by Bargh:

Tice and Baumeister concluded after their series of eight [ego depletion] experiments that because even minor acts of self-control, such as making a simple choice, use up this limited self-regulatory resource, such conscious acts of self-regulation can occur only rarely in the course of one’s day. Even as they were defending the importance of the conscious self for guiding behavior, Baumeister et al. (1998, p. 1252; also Baumeister & Sommer, 1997) concluded it plays a causal role only 5% or so of the time. 

Given one’s understandable desire to believe in free will and self-determination, it may be hard to bear that most of daily life is driven by automatic, nonconscious mental processes – but it appears impossible, from these findings, that conscious control could be up to the job. As Sherlock Holmes was fond of telling Dr. Watson, when one eliminates the impossible, whatever remains – however improbable – must be the truth.

For priming to exist as an important phenomenon, we must spend most of our time basically unconscious, or in “System 1” as Kahneman puts it, mere puppets of our environment, until some rare challenge forces us to wake up briefly to deal with it, so that we may go back to sleep.

Bargh and Kahneman are adamant that we can’t trust our experience, because our experience by necessity excludes the 95% of the time we spend as unconscious automatons, and that we must trust science instead. This is similar to the promotion of the “emotional inception” theory of advertising, which remains popular in marketing science, such as it is.

Here’s a bit of such science from Shiv, Carmon, and the legendary Dan Ariely, for flavor, which unites priming, “emotional inception” marketing, and the placebo effect, which will be the subject of the next section:

In this experiment, participants first consumed SoBe Adrenaline Rush (a drink that claims to help increase mental acuity on its package) and then solved a series of puzzles.

Remember SoBe? The researchers told the subjects that they would be drinking this mind-improving beverage, and then had the gall to give them a form explaining that they were charging the subjects’ university accounts for the privilege of drinking it. Half of the subjects were charged the regular $1.89 price; half were told that they were only charged 89 cents, with the explanation that it had been purchased at an institutional discount. What these experimenters “found” was that if they had the subjects rate their “expectancies” for how much the SoBe would improve their mental functioning, thereby incepting such expectancies, the subjects who got it at a discount solved only 5.8 word jumbles, compared to 9.9 in the full-price group. That is, simply drinking a discount energy drink caused these poor subjects placebo brain damage to a fairly significant degree.

The example they give of a word jumble is “TUPPIL, the solution for which is PULPIT.” They allowed subjects 30 minutes to solve as many puzzles as possible; as the best group mean performance was about 10, they were taking about three minutes per word at the fastest. When I’ve looked up six-letter word jumble puzzles in the literature, they seem to allow about 20 seconds for subjects to solve them, but perhaps these were unusually hard puzzles. 
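The implied solve rates are easy to check with a back-of-the-envelope script. The group means (5.8 and 9.9 jumbles) and the 30-minute window are the figures reported above; everything else is simple arithmetic:

```python
# Back-of-the-envelope check of the SoBe study's implied solve rates.
# Group means and the 30-minute window are as described above.
TOTAL_MINUTES = 30

for label, mean_solved in [("full price", 9.9), ("discount", 5.8)]:
    minutes_per_jumble = TOTAL_MINUTES / mean_solved
    print(f"{label}: ~{minutes_per_jumble:.1f} minutes per jumble")
```

Even the faster group averages about three minutes per six-letter jumble, against the roughly 20 seconds typically allowed in the literature – which is why either the puzzles or the subjects’ motivation looks unusual here.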

That is how priming is supposed to work: we are automatons going around in our sleep, and our performance on a simple puzzle can take a major hit simply because we were informed that our drink was bought at a discount. We are supposedly so vulnerable to our environment, to suggestion, to parlor tricks, that we can suffer a major loss of intellectual ability, walking speed, or memory just from exposure to some infinitely subtle stimulus.

From an evolutionary perspective, it seems like a bad design. You could be out there hunting with your elderly father, and suddenly you look at him and start walking slower and can’t remember what you were doing. You’re at your job and see an acronym BIB and start crawling around on the floor. You’re at the grocery store and you buy ice cream not because it is delicious and enjoyable but because the packaging primed warm emotional feelings and you subconsciously need to cool down. You find out that your prescription Adderall was bought with an insurance discount and suddenly lose 40% of your mental capacity. 

It’s not just that the relevant science is fabricated, or p-hacked, or uses meaningless or flexible measures, although the prevalence of such things does cast doubt on their evidentiary value. It’s that we should have been more skeptical from the start. The automaticity hypothesis is just as woo as spoon bending and precognition, and we should demand evidence for automaticity as extraordinary as what we learned to demand for the others. See, e.g., Daryl Bem’s 2011 paper purporting to find evidence of clairvoyance, which may have played a role in instigating the replication crisis, and the history of the scientific investigation of spoon bending.

The spoon benders actually had to make a show of bending a spoon. Regarding automaticity, all we seem to ask is that someone wrote a paper claiming they bent a spoon. 

Pre-Post-Erous!

Most people take the placebo effect, as “demonstrated” in the SoBe study, for granted. We “know” that placebo pills heal people; maybe we even “know” that a placebo can still heal if it is openly labeled as such, or that more expensive placebos are more effective. If the placebo effect were not real, why would large medical trials of new drugs have to randomize subjects to a placebo condition? And why would they find big improvements in the placebo condition?

I will argue that we should put healing placebos into the “automaticity = woo” mental bucket. Placebos don’t work that way either. 

The first piece of the puzzle is how placebos actually function. Placebos have a perfectly valid job in randomized controlled trials: they are an attempt at mimicking the “noise,” or natural variation, from every aspect of the treatment process other than that believed to be efficacious, including time. Blease et al. explain the distinction: 

Before reviewing findings from OLP studies, it is crucial to clearly demarcate between two distinctive uses for the term placebo. First, is the usage of placebos in RCTs. Here the term is often understood to refer to a certain kind of ‘thing’ (eg, saline injections or sugar pills). Strictly speaking, this interpretation is incorrect: instead, placebos in RCTs ought to be conceived as methodological tools since their function is to duplicate the ‘noise’ associated with clinical trials including spontaneous remission, regression to the mean, Hawthorne effects and placebo effects. Properly understood, then, these types of placebos are deployed as controls that are specifically designed to evaluate the difference—if any—between a control group and a particular treatment under scrutiny. Ideally, in RCTs, controls should mimic the appearance and modality of the particular treatment or medical intervention under investigation. In contrast, placebos in clinical contexts are interventions that may be intentionally or unintentionally administered by practitioners either with the goal of placating patients and/or of eliciting placebo effects.

Many conditions are episodic or vary in severity, and starting from a bad time, the problem will often get better after a while on its own. When people talk about “regression to the mean” as an explanation for the placebo effect, this is what they mean. It’s mostly the “pre-post” comparison, as hinted in my title for this section. This effect can be enhanced by what dermatology researchers call “eligibility creep”: in an effort to include more subjects, researchers may exaggerate potential subjects’ condition at the outset of the study, so that an accurate, unexaggerated measurement at the end will show a spurious improvement.

The second piece of the puzzle is that placebo effects compared to no treatment are small. Not just “small” in the meaningless sense of effect size, but too small to be noticeable or to make any clinical difference. For pain, a large meta-analysis found the mean placebo effect to be about 3.2 points on a 100-point scale, too small to matter. Similarly, the most recent meta-analysis on the placebo effect in depression found an effect size of .22 – smaller even than the effect of antidepressants over placebo, about .3, which itself translates into about 1.7 points on a 52-point scale and is too small to be clinically relevant. This is the case even though subjects in “no treatment” groups may have an incentive to exaggerate their symptoms in order to receive treatment, whereas those on placebo believe they are receiving treatment and have no such incentive. Still, the effects are tiny, so tiny as to be meaningless in real life, and definitely tiny enough to turn out to be nothing at all with better methods.
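To see how the standardized numbers translate into scale points, the conversion is just effect size times standard deviation. The SD of about 5.7 used below is my assumption, chosen to match the roughly 1.7-point figure quoted above:

```python
# Converting Cohen's d back into raw scale points: points = d * SD.
# The SD of ~5.7 for the 52-point depression scale is an assumption
# chosen to match the ~1.7-point figure quoted above.
ASSUMED_SD = 5.7

for label, d in [("placebo vs. no treatment", 0.22),
                 ("antidepressant vs. placebo", 0.30)]:
    print(f"{label}: d = {d:.2f}, about {d * ASSUMED_SD:.1f} points")
```

Either way the raw change is a point or two on a 52-point scale: far below anything a patient or clinician could notice.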

The third piece of the puzzle is that we can’t trust studies that claim to find large placebo effects, for the same reason that we can’t trust priming studies. Here’s a figure from a paper by Waber, Shiv (from the SoBe paper), Carmon (same), and Dan Ariely, demonstrating a massive effect of more expensive placebos on pain from electric shock (both are placebos, the dark one is the normal price and the light one is discount):

We see effects in the “regular price” condition as high as 30 points on a 100-point scale, and often in the 20s – several times higher than the 3.2-point effect on pain we saw earlier, and certainly clinically significant. This level of pain relief would really matter. I was able to find a more recent study using similar methodology, though not labeling itself a replication, that found placebo effects of zero, one, and three points on a 100-point scale in similar conditions. These seem much more realistic, and when combined with the meta-analysis results, indicate that the study that got a huge result is a major outlier.

I do not think, contra Kahneman, that we have “no choice” but to accept that such effects are real. Even without knowing about all the irregularities of this particular experiment (such as the lack of IRB approval for shocking human subjects and deceiving them about pain medication), we should recognize how extraordinary these claims are, and demand appropriate evidence for them, not just one team’s word that such experiments even took place.

Enter the Nudgelords

Nudge is just priming.

If there are stylized sculptures of waifish humanoids in a room, subjects will eat four fewer blueberries. If you sign at the top, you’re more honest than if you sign at the bottom. It’s priming, as a service.

Nudge studies aren’t real and don’t replicate. When they’re attempted in the real world, the effects are much smaller than in the academic studies, and the effective so-called nudges tend to share a curious feature: they operate on rationality rather than automaticity. For instance, one of the most effective “nudges” is apparently for the government to send clearly-worded letters explaining what they want people to do. That seems more like common sense and respect than a “nudge” to me, but I’m not a Nudge Professional. 

“Behavioral” anything (economics, finance, etc.) tends to reduce to automaticity explanations. 

I won’t spend much time on Nudge because it’s literally the same thing as priming – the same model of reality, the same experiments, often the same researchers, just using a different word. 

Cognitive Bias Parlor Tricks

I can’t go over every cognitive bias individually in this format, but I will give a basic pattern of how I think “cognitive biases” are produced as academic products:

First, an experiment or test is devised that people perform “poorly” on in some specific manner or dimension. It could be a list of choices of lotteries with different payoff structures (see e.g. this investigation of problems with a celebrated Kahneman mathematical model and associated experiments), or the famous demonstration of the “endowment effect” many of us experienced firsthand, in which we are given a unique little gift, like a mug, and then shamed for not being “rational” enough about its value when invited to trade it for somebody else’s gift. 

Second, this little experiment, and variants of it, are generalized to the entirety of human behavior. Perhaps a study on “conversation” involves strangers chatting in a laboratory about the answers to trivia questions, and the researchers find that this has no effect on accuracy. This may be generalized to a proposition like “conversations serve no purpose” in general, perhaps a “conversation bias” can be introduced, and people can feel smug for not liking meetings at work. 

Thus, we have scientific confirmation that a “bias” exists. This is often confusing because, for example, if the “endowment effect” obtained under normal circumstances, markets would grind to a halt and not be able to function, because everyone would value what they already own more highly than any prospective buyer would.

One solution to this kind of problem, in both cognitive bias and priming research, is to say that you found a “boundary condition” rather than that you failed to find the effect. “Boundary conditions” are a similar kind of cope to “mediator” or “subgroup” analyses, especially in small studies not powered to detect them.

Overall, I think rationality is a better starting assumption for human behavior, and we should demand a great deal of evidence for an important, widespread departure from rationality. 

Which of your favorite biases survived the replication crisis, and do they generalize?

Can You Catch Stupid?

“Social contagion” is the idea that behaviors spread through human groups like infections. Trivially, the spread of technologies like hybrid corn or mobile phones can be modeled like epidemics. The only “automaticity” aspect is the tendency to take the metaphor seriously, as if people were really affected by “social contagions” in the same unconscious, unfree way as by germs. 
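The trivial epidemic modeling amounts to little more than an S-curve: adoption grows in proportion to contact between adopters and non-adopters. A minimal sketch, with parameters invented purely for illustration:

```python
# Minimal logistic-diffusion sketch: adoption grows in proportion to
# contact between adopters and non-adopters, producing the familiar
# S-curve. The rate and starting fraction are invented for illustration.
rate = 0.5
adopted = 0.01  # initial adopter fraction

trajectory = [adopted]
for _ in range(30):
    adopted += rate * adopted * (1 - adopted)
    trajectory.append(adopted)

print(f"adopter fraction after 30 steps: {trajectory[-1]:.3f}")
```

This is the sense in which hybrid corn “spreads like an epidemic” – the curve fits, but it says nothing about whether the mechanism is germ-like contagion or ordinary social learning.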

Some phenomena that have been proposed to be socially contagious are suicide, obesity, quitting smoking, and (as a bit) acne, headaches, and height. The latter three are a bit in the sense that they use the same methodologies as the social contagion literature to demonstrate that homophily, the tendency of similar people to cluster in social groups, accounts for most, if not all, of the supposed “social contagion.” A more advanced method is to use time lags, as if time-lagged obesity weren’t as much a product of homophily as snapshot-in-time obesity.
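The homophily point can be made with a toy simulation (all numbers invented): give each friend pair a shared latent “lifestyle” factor, let an outcome like obesity depend on it with no transmission whatsoever, and friends’ outcomes still correlate:

```python
# Toy homophily simulation: friends share a latent "lifestyle" factor,
# the outcome depends on lifestyle plus individual noise, and there is
# zero transmission between friends -- yet their outcomes correlate.
import random

random.seed(0)
pairs = []
for _ in range(5000):
    lifestyle = random.gauss(0, 1)       # shared latent factor
    a = lifestyle + random.gauss(0, 1)   # friend A's outcome
    b = lifestyle + random.gauss(0, 1)   # friend B's outcome
    pairs.append((a, b))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

xs, ys = zip(*pairs)
print(f"friend-pair correlation with zero transmission: {corr(xs, ys):.2f}")
```

A snapshot correlation of around 0.5 falls out of pure clustering, and a time-lagged version of the same setup would correlate too, since the shared factor persists over time.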

However, researchers essentially never try to distinguish germ-type “contagion” from social learning. I think our starting hypothesis should be that behaviors that spread in the population arise from social learning, rather than from a mysterious unconscious process of thoughtless copying. Human copying is anything but thoughtless. Copying is an important form of creativity. Nonhuman animals can copy but a tiny fraction of our behaviors, even if we sometimes like to pretend they are capable of stealing our cool cottagecore aesthetics by dressing them up in sick outfits. 

People share and copy, but social contagion doesn’t work that way.

The Clockwork Universe With Clockwork People

An extended excerpt from the phenomenologist Gian-Carlo Rota, better known in mathematics for his work in combinatorics, on the fading myth of our time:

The theory of myths asserts that every civilization is ultimately characterized by the series of myths it believes in. These “working myths” are myths that that civilization is not aware of at all. They are not verbalized. The moment such a myth is put into words, it’s no longer something that people authentically believe in. It can now become a subject of discussion. So, as time goes by, a given myth that was universally believed in suddenly cracks and becomes doubted. At such a moment, the members of that civilization split into opposing camps: those who say “Yes, it’s so” and those who say “No, it’s nonsense.” And then the myth will fade, and finally it’s viewed as untenable except by a small group of people, who keep it as a superstitious belief.

…What I’m working towards is that not too long ago, there was a particular myth, universally believed, that in our time is being verbalized, which is the first stage towards its fading. A lot of phenomenology is a discussion and critique of that particular myth. 

What is that myth? It’s the myth of the clockwork, the myth of mechanism. It’s the idea that you can explain every phenomenon causally, by finding an underlying mechanism. It has been strongly believed in for a few hundred years. It had its heyday in the nineteenth century. It’s very simple: understand how the wheels work and you’ll understand everything. This went on and on, and that’s the myth that’s cracking in our time. I’m not in any way implying that mechanisms are bad, or that there are no mechanistic explanations. Certain phenomena can certainly be explained mechanistically. But there are other phenomena that do not have such an explanation. When you say this nowadays, tempers run high. Someone will point a finger at you and say, “You are an irrationalist! Either you believe in mechanism, or else you are irrationalist! Either a marble is red or it’s blue! Nothing in between!” But the phenomenological answer is that it is in no way irrational to deny a mechanistic explanation. We are not denying other modes of explanation which are logical and coherent, but just not mechanistic.

Those who will accuse us of irrationalism have an extremely narrow view of rationalism, a view that scientists have abandoned a long time ago. Ever since quantum mechanics came to be, and quantum mechanics is probably the greatest scientific idea of this century (n.b. the 20th -ed.), you can kiss goodbye to causal explanation. It’s very unfortunate that we still do not really understand quantum mechanics. Feynman used to stress that: quantum mechanics works, but there is something so mysterious about it, so completely different from anything we’ve thought about, that no one has succeeded in explaining it. But that works as our ally now. If someone tries to propose causal explanation as the scientific paradigm, we can say, “Phony baloney, science doesn’t work that way anymore! They gave that up decades ago.” Science is far more sophisticated. The strictly marble-like causal explanation is something that’s been thrown out of the window.

Science doesn’t work that way anymore, but few have gotten the message. Phenomenology proposes a different form of causality from the “marble-like” version, one based on conditions of possibility, a “Fundierung” relationship of things being founded on each other in the sense of allowing each other to come into being and be perceived as such. It is outside my scope to attempt an explanation at length, but this model of the world is not a woo model. It is a richer model, a more realistic model, and a model more in accord with careful observation of the world than the received marble-like model that has allowed us to accept so much silliness. 

The automaticity theories named here are holdovers from the fading myth of the clockwork universe, of clockwork people. The myth began to be named by the end of the 19th century (by William James at least, who also named the related Religion of Healthy-Mindedness that is still with us today). I hope by naming a more specific incarnation of the myth here that I can promote its thematization so that it can continue to fade. A science of ourselves cannot be established by dressing woo in lab coats, clipboards, and the mathematical ideas of an antique physics. 

I invite anyone to be the Lakatos to my Feyerabend, and present Here’s Why Automaticity Is Real Actually, as mine is an extreme case and does not pretend to be a measured, balanced examination of the subject. I would not recommend that anyone superstitious attempt this project, however, for obvious reasons: if priming were true, such an effort could prove lethal.

39 thoughts on “Against Automaticity”

  1. I enjoyed this scroll, thank you for publishing it.

    > 30 points on a 10-point scale
    Could there be a 0 missing here?
    


  2. A complaint:

    I don’t see the relevance of quantum mechanics. Brains (including neurons) are likely too large and noisy to exhibit quantum effects.

    For practical purposes, non-quantum uncertainty is much more relevant to brain modeling given the staggeringly large number of degrees of freedom, our very limited ability to observe even a few of them, and the fact that so many of these degrees of freedom vary between individuals.

    If you happen to be a Penrose enjoyer though, that consideration probably wouldn’t dissuade you.

    A caveat to my complaint:

    If the purpose of referring to QM is to highlight a changing scientific Zeitgeist (ie to do a cultural analysis of scientists), then it could be relevant in that limited way. But scientists who correctly understand QM will also not use it to adapt to the stance you finally propose: QM is, as I said originally, irrelevant to the issue of human behavior and brain dynamics, as far as I am aware.


    1. To add to this: I don’t think it’s strictly speaking impossible for QM effects to have an impact on brain behaviour. Yes, obviously you won’t get a whole human brain in some coherent superposition of states, but at the level of neurons firing, molecular reactions are taking place that do have some quantum effects involved (notably, any step in which a hydrogen atom – a proton – needs to change places probably has some non-trivial contribution of tunnelling), and one can imagine that a delay of even a few fractions of a nanosecond may eventually snowball into a more macroscopic effect.

      But that’s really the essence of it. You don’t even need to invoke QM for this: it’s just good old fashioned chaos. I honestly dislike that bit from Rota, which sounds like nonsense to me, as it somehow suggests there is a way for physical processes to be non-mechanistic. And I don’t think that’s true! I think the world probably is deterministic in the ways that matter, and that the brain does work like clockwork, and still that doesn’t mean automaticity is real any more than it means sympathetic magic is. “Causality exists” doesn’t mean “I can take a random cause and a random effect and connect them because symbolically they kind of make sense together”. Predicting and controlling someone’s behaviour might be doable if you could literally manipulate every atom in their brain, but obviously that’s not what we’re talking about here. Insofar as the information we have and the amount of input we can effectively manipulate matters, controlling other people’s behaviour might absolutely be out of our own power, the same way in which in theory you can imagine a Maxwell demon separating hot and cold molecules in a gas, but good luck actually doing that as a macroscopic human.


  3. > and quantum mechanics is probably the greatest scientific idea of this century (n.b. the 20th -ed.), you can kiss goodbye to causal explanation.

    This is incorrect, at least in the context in which it’s used to refute the idea of deterministic causes. There are multiple deterministic interpretations of quantum mechanics, and they’re all clearer and free of mysticism precisely because they provide causal explanations, even though we are bounded in what we can know about the causes. A good article otherwise.


    1. Yes, it sounds really like typical quantum mysticism nonsense. You don’t really mean to invoke quantum mechanics to justify why such a complex system as the human brain might not respond to these specific and contrived stimuli in a certain way. It just doesn’t! It responds to plenty others. In fact, saying that humans are rational is saying they are in fact predictable, much more than if they started doing unexpected things in response to subliminal nudges.


      1. > In fact, saying that humans are rational is saying they are in fact predictable

        I think this is an underappreciated point that I too have made when I’ve debated free will. To make choices free of antecedent causes is not the mark of a “free” will, but of no will at all!


        1. Well, only if one takes a “strong” definition of free will that equates it with an absurd counterfactual: “if you could rewind the universe back to a snapshot taken before Alice took a decision, and you let her take the decision again, she might take a different one by no apparent causal process” – which is no different from Alice just making a random choice! Suppose instead one uses a weaker definition that fits within a deterministic world, in which Alice is one and the same with her body (and thus the configuration of her brain), and free will only means that she can make choices strongly determined by her own values, needs, and wants, which are unique to her as an individual, rather than being controlled by factors outside her. Then a rational Alice has more free will than if she could be primed to do things that run counter to her baseline desires.


  4. I’m a retired cognitive psychologist and historian of psychology. The idea of automaticity is older than the heuristics and biases literature, and it was proposed as the second half of a distinction between “controlled” and “automatic” processes. Controlled processes are those we can direct, and they are mental resource hogs. A standard example is voluntary attention, such as listening to a conversation. You can switch from one conversation to another, but you can’t listen to two at once without switching back and forth and missing pieces of both. There is also involuntary attention, in which an external stimulus seizes your attention and you can’t choose not to hear it (e.g., a fire alarm). Leibniz distinguished these centuries ago; attention, and the conditions that affect its allocation, goes back to the first psych labs of the 19th century and is vividly described by William James in Principles of Psychology.

Automatic processes occur without conscious choice and are less resource-limited. You are using one now: reading (in literate adults, in their native language). You have discussed social [automatic] priming, which cognitive psychologists were dubious about from the beginning, because of the related controversy about “subliminal” perception in the 1950s, in the neo-Freudian “New Look in Perception” movement.

Cognitive automatic processing, however, has replicated. For example, suppose we show subjects (I’m old) a series of words and measure how long it takes the subject to begin to say each one (the latency). If we show, say, EXAMPLE at position 2, and then again at position 11, the latency at position 11 is less than it was at position 2. If we show EXAMPLE in a different font at position 11, the latency is still reduced, but less so than if the fonts match. The first presentation automatically primed recognition of the second presentation. It’s important to bear in mind that these are descriptive labels for kinds of experience, not explanations.
James could not decide between two classes of explanation: (1) that there is a stage in stimulus processing that consists of the allocation of attention (the common-sense view, which he preferred because it rests on choice), and (2) a causal view that drops the idea of attentional allocation altogether, in favor of the gateway(s) to consciousness being diverse non-attentional processes such as arousal (e.g., a jump-scare in a horror movie). His uncertainty is still with us.
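The repetition-priming design described above is concrete enough to sketch. The following toy model is purely illustrative: the latency numbers are invented, and only the ordering (repeat faster than first presentation, font mismatch in between) reflects the pattern the comment reports.

```python
# Toy sketch of a repetition-priming trial list. All latencies are
# invented for illustration; only their relative ordering matters.

def predicted_latency(seen_before: bool, same_font: bool) -> int:
    """Toy naming latency in ms: repeats are faster, and repeats
    shown in the same font are faster still."""
    base = 600  # assumed baseline latency for a first presentation
    if not seen_before:
        return base
    return base - (80 if same_font else 40)

# A series of words with EXAMPLE at position 2 and again at position 11.
words = ["TABLE", "EXAMPLE", "RIVER", "CANDLE", "MIRROR",
         "PLANET", "GARDEN", "VIOLIN", "HAMMER", "SILVER", "EXAMPLE"]

seen = set()
for pos, word in enumerate(words, start=1):
    latency = predicted_latency(word in seen, same_font=True)
    seen.add(word)
    print(f"{pos:2d} {word:8s} {latency} ms")
```

Under these made-up numbers, every first presentation comes out at 600 ms and the repeat of EXAMPLE at position 11 comes out faster, matching the qualitative result the comment describes.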


    1. Yes, I don’t think all these concepts are bunk either – I can say I experience very clearly the feeling of running out of energy for doing things intentionally (then again, I might have ADHD, which probably makes that feeling more acute). It’s the really counterintuitive, far-fetched priming connections, the ones that make for good shocking headlines, that are very dubious (and that indeed often fail to replicate). “Forcing yourself to focus makes you tired” isn’t that.


  5. I agree the priming and nudge literature, etc., is bogus. But I wonder how emotions fit into your model. They are arguably a form of cognition, a rapid and heuristic form. And in many cases they are automatic, at least at first. I also think that casting humans as rational thinkers first and foremost raises evolutionary problems, namely drawing too bright a line between our thought processes and those of non-human animals.


  6. I was with you up until the last part. It’s too close to a bad old idea that goes like “quantum physics seems kinda mysterious (to me, a person who has never taken Quantum 101), therefore logic is out the window and I can believe whatever woo I like.” Cool-sounding Feynman quotes aside, there is nothing inherently un-mechanical about quantum mechanics.


    1. I mean, you can reasonably make up some possible interpretations of QM which allow for true randomness to seep into the world (in some cases, like objective collapse interpretations, this would actually not be very mysterious or confusing at all). But then again, of course, that probably has nothing to do with why some psychological results p-hacked for maximum publishability are bogus.


  7. Apparently, some people who take SSRIs experience effects relatively rapidly – after a few hours instead of after a week or two. This is commonly alleged to be a placebo effect; i.e., the effect after weeks is considered to be partly non-placebo, but the fast effect entirely placebo.

    That would be a big placebo effect if true, but I’m not sure anyone has done the proper experiment to see if you still get the rapid effect with sugar pills. This would presumably need a fairly large sample size, as not everyone experiences the rapid effect.


  8. Broad comments:

    1) You are too dismissive of nudges. Nudges that rely on subliminal suggestion work poorly, but nudges that rely on people being lazy often work well. Examples that come to mind are default options (e.g., “The Importance of Default Options for Retirement Saving Outcomes”), first options (e.g., ballot-order effects), and “dark patterns”.

    2) I think you draw too strong a line between the rational and the irrational. I can tell rational stories for just about everything. Related to 1), perhaps people rationally see the default option as endorsed. Perhaps the (effectively random) first row of the second column of a ballot is a natural Schelling point for voter coordination.

    But it’s clear to me a lot of these effects are at least partially due to some sort of bounded rationality or people being less than infinitely smart. If you ask people, they will often say just that. Moreover, it’s sort of obvious that people find reading contracts unpleasant and aren’t infinitely smart.

    I agree much of behavioural science is parlour tricks with limited practical relevance. Research focuses on where rational models are wrong, leading many to (incorrectly) assume that’s often the case. But humans being less than perfectly smart isn’t a parlour trick, it’s a pretty core feature of the economy: “No one ever went broke underestimating the intelligence of the American public.”

    3) Like other replies, I thought the quantum section was poorly linked; complexity theory would be a much better grounding.

    4) I agree there is an important distinction between the clinical and research use of “placebo”, but I think you are too hard on the clinical effects of placebos. You can say being half as effective as antidepressants is “too small to be clinically relevant”, but that strikes me as crazy and in fact highlights the weakness of d values. (e.g., “All Medications Are Insignificant In The Eyes Of God And Traditional Effect Size Criteria”)
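Since the point above leans on d values, a minimal sketch of how Cohen’s d is computed may help. The groups and scores below are invented for illustration, not data from any study discussed here.

```python
# Minimal sketch of Cohen's d: the difference between two group means
# divided by the pooled standard deviation. Numbers are made up.
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (Bessel-corrected).
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    # Pool the variances, weighted by degrees of freedom.
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical symptom scores (lower = better): treatment vs. placebo.
treatment = [10, 12, 9, 11, 10, 13]
placebo = [12, 13, 11, 14, 12, 15]
print(round(cohens_d(treatment, placebo), 2))
```

The point about d values being hard to interpret clinically is exactly that the same standardized number can correspond to very different raw differences depending on how variable the outcome is.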


    1. You’re right about nudges making sense from the point of view that people are lazy and/or busy. When nudge theory was introduced in 2008, it struck me as basically just what I was doing in 1986, when I was in charge of introducing personal computers where I worked. Back then, personal computer operating systems were extremely bare-bones, so I quickly discovered that the difference between an employee getting 20% of the potential out of the computer and getting 80% was my team spending a few hours carefully setting each machine up to be easily used.


  9. The discussion of the placebo effect here seems a bit out of place. I suppose it’s connected because the wibbly-wobbly Health Energy interpretation similarly links back to “If you’re exposed to the proper conditions you will magically do X, mechanism-free” thinking that automaticity also ties into, especially if you consider “get well” a behavior.

    But what strikes me as odd is that the tiny, clinically insignificant yet statistically significant magnitudes of placebo effects are exactly what you’d expect from the less mechanistic, less directly-causal explanations the post goes on to offer, aren’t they? If we’re giving up the universality of the mechanistic universe’s simple explanations, surely the universality of simple solutions needs to go too? So tiny, individually pointless effect sizes are exactly what you expect out of any study that targets any single intervention, and any genuine solutions are going to be built out of many interventions that each individually only slightly nudge outcomes. It’s disappointing that there’s no easy answer to most medical problems, but that’s the natural conclusion when you reject such simplicities as ego depletion or abstract health energy.

    Anyway, I guess where I’m going with this is that considering the placebo effect to be real and in some sense useful seems compatible with rejecting the myth you’re attacking here. Sure, whatever positive bias is created by the noise of receiving any kind of treatment at all is too small to also be the last step in treatment. But as far as I can tell, the evidence is that there is a very small effect, unless you start from the motivated-reasoning perspective of assuming that any such evidence is flawed because it points to there being an effect. That makes it seem different from the other phenomena you talk about, where something gets hypothesized into existence from a combination of armchair reasoning along mechanistic lines and a failure of critical thinking.


  10. “From an evolutionary perspective, it seems like a bad design. You could be out there hunting with your elderly father, and suddenly you look at him and start walking slower and can’t remember what you were doing. You’re at your job and see an acronym BIB and start crawling around on the floor. You’re at the grocery store and you buy ice cream not because it is delicious and enjoyable but because the packaging primed warm emotional feelings and you subconsciously need to cool down. You find out that your prescription Adderall was bought with an insurance discount and suddenly lose 40% of your mental capacity.”

    What’s interesting to me is that Automaticity sounds like an exaggerated conception of OCD and ADHD: I suffer from them, and I am definitely prone to sudden, drastic shifts in my emotional state for relatively trivial reasons, and it can be debilitating; it’s probably something that would have been selected against in a Hobbesian State of Nature. That’s not to say that I don’t think priming, ego depletion, etc. aren’t bunk (I long ago came to the conclusion that advertising’s chief efficacy is in providing brand recognition, so I’m glad to see other people agree), but I do wonder if the researchers who believe in them arrived at their beliefs through a combination of the typical mind fallacy and intellectual arrogance (i.e. “I’m smart, so other, less smart people must have less resistance to the neuroses from which we surely all suffer”).


    1. Addendum: I should also mention that when I was younger I assumed that my mind was very malleable, and lived in constant fear that random things would e.g. make me less intelligent, and put a lot of effort into thinking the “right” way and internally reaffirming my intelligence, maturity, etc. In hindsight this was classic OCD, and again, not a million miles off how proponents of automaticity think brains work.


      1. Further addendum: ” That’s not to say that I don’t think priming, ego depletion, etc. aren’t bunk ” should read ” That’s not to say that I think priming, ego depletion, etc. aren’t bunk”


  11. Yes, human beings have free will – and a genuinely humanist morality would teach us how to develop it to its full potential. But altruism teaches us to sacrifice our free will in favor of conformism and submission.


  12. The fact that “science doesn’t work like that anymore”, with regard to providing causal and mechanistic explanations, is actually a sign that modern science has degenerated. It’s not a great advance or a new frontier; it is a regression to mysticism that has impeded real scientific progress. I write about this at length in my article on the scientific method: https://liminalrevolutions.substack.com/p/is-science-trustworthy-part-1-the


  13. I’m seeing a lot of confusion here between causation (NOS) and strict determinism.

    I’m also seeing a lot of confusion between complete automatism, the idea that people are just robots, and partial automatism, the idea that conscious rationality can sometimes be overridden.

    And I’m seeing a lot of confusion between conscious control in the psychological sense and free will in the metaphysical sense. Lack of determinism doesn’t by itself yield any kind of conscious control, and conscious control is quite possible in a deterministic universe.


  14. “People share and copy, but social contagion doesn’t work that way.”

    I don’t understand the distinction you’re making here. A recent example of something called “social contagion” is groups of teenage girls suddenly exhibiting Tourette-syndrome-like symptoms, despite not suffering from Tourette syndrome. This seems to be connected to the popularity of young women on TikTok who pretend to have Tourette syndrome. So these girls are suddenly copying one another’s pathological behavior.

    “Social contagion” seems like a fine metaphor to describe the spreading of “functional neurological disorders”. We can come up with rational reasons why someone might copy a pathological behavior—it gets them attention, etc—but at some point the word “rational” gets stretched to mean ‘anyone choosing anything’.

    See also: the essay in The American Conservative, “What ‘Long Covid’ Means”.


  15. Michael Tomasello calls humans “imitating machines” because of the ease with which we acquire new behavior by observing others. This is all, really, that is meant by “social contagion”, and it is as well established a phenomenon as exists in science. It doesn’t have to be equated with the spread of actual viruses or pathogens.


  16. On social contagion, it’s kind of interesting that the people who derived the bog-standard diffusion-of-innovations model also provided a rich and well-thought-out qualitative understanding of the process, but a lot of epigones just want to have mindless human marbles going ping! Perhaps it’s something to do with eeeuw! business school students getting Everett Rogers’ book as standard reading on marketing; got to maintain those guild boundaries.

