Sacred Boundaries of Thought
Boundaries are the forms that shape communities and civilizations. Without boundaries, communities cannot come into existence. These social boundaries are even mirrored in our cognition: common thoughts give us access to ordinary, related concepts, including common knowledge, but certain thoughts are relatively fenced off and rarely brought to mind.
Communities of all sizes are bound by their notions of the sacred. A violation of community sacredness – from casting aspersions on the rite of voting to mocking a family tradition – is experienced by community members as a kind of pain. This is one mechanism by which communities and their sacredness are maintained. People hurt each other, if “only” psychologically, when they violate each other’s sacredness.
A Christian who has a lot of social contact with non-Christians must build up a tough shell in order to withstand the (often accidental) violations to his sacredness that others are bound to make. It is much less painful for him to mainly interact with others who share his notions of the sacred. When a Christian and an atheist are good friends, however, each undertakes to feel the sacredness of the other, gaining the ability to be (vicariously) pained by its violation. They can speak easily on all topics without hurting each other, because each has learned to feel the other’s sacredness on his behalf. This is a precondition for intimacy.
An Aeon piece by Dimitris Xygalatas has been making the rounds describing how hazing and other rituals involving sacrifice or pain, physical or psychological, often serve as a sort of prosocial glue that keeps groups together and functioning well. Xygalatas goes so far as to measure the heart rates of people involved in a fire-walking ritual, and finds that he can predict how closely related two people are in their social network — e.g. spouse vs. close friend vs. stranger — just by patterns in their heart rates throughout the night. This insight helps explain vast swaths of social behavior, from fraternity hazing to why people go to concerts. Do read the whole piece for better context and more convincing evidence.
What struck me while reading the piece is how many of these prosocial rituals are almost accidental in our modern lives. We seem to be forming strong social bonds with strangers in a random, haphazard fashion rather than with the people we’re likely to have extended contact with — e.g. family or local community members. I can really only speak to my own experience here, so I started listing the various painful “rituals” I’ve participated in to get a feel for how true my intuition was. I’ll list mine below, but I encourage you to list yours in the comments.
- Enduring long (3+ hour) trips and spending too much money to play in Magic: the Gathering tournaments, often missing other important obligations as a result.
- Enduring less long trips and spending even more money to play in paintball tournaments, often missing other important obligations as a result. Also, getting shot is physically painful and the most painful occasions are more likely to occur in these tournaments.
- Graduate school – especially core courses and qualifying exams.
- Concerts – it’s hot and smoky, it’s hard to see, the seating is uncomfortable, and I often had to travel a long distance.
- Various outdoor activities (but only sometimes) – camping, fishing, etc. At their worst, it’s way too hot or way too cold, it’s unpleasantly wet, and there’s sometimes an element of danger. Sometimes they require quite a trip too.
- A small underground boxing club in high school
- High school itself – really, school in general.
What stands out to me about this list is how many items are, or are attached to, some hobby that requires time, money, and travel, and aren’t typically things you do with your family and non-hobby friends – how many people are willing to travel for hours to get shot at with paint-filled pellets or to play a card game? But looking at the list, a smaller proportion of them have the accidental nature than I expected. Traveling and enduring pain for hobbies that few in your family or community participate in does seem like a stereotypically modern behavior. Is anyone aware of any evidence one way or another?
Another thing I noticed is how many are associated with school. School is traumatic in a lot of ways (raise your hand if you’ve ever had a nightmare about somehow making a huge mistake at school), and ever notice how so many people feel such a strong connection to their alma mater? Perhaps the modern social order is held together by school and hobbies.
I’m in the middle of writing up a post sketching some ideas I have about Bayesian inference in order to stir up a hornet’s nest – in particular to prod the hornet queen, David Chapman. In the process, I ran across this old blog post by Andrew Gelman discussing this (pdf) paper by Bandyopadhyay and Brittan criticizing one form of Bayesianism – in particular the form espoused by E.T. Jaynes. One of the issues they bring up is called the old evidence problem:
> Perhaps the most celebrated case in the history of science in which old data have been used to construct and vindicate a new theory concerns Einstein. He used Mercury’s perihelion shift (M) to verify the general theory of relativity (GTR). The derivation of M is considered the strongest classical test for GTR. However, according to Clark Glymour’s old evidence problem, Bayesianism fails to explain why M is regarded as evidence for GTR. For Einstein, Pr(M) = 1 because M was known to be an anomaly for Newton’s theory long before GTR came into being. But Einstein derived M from GTR; therefore, Pr(M|GTR) = 1. Glymour contends that given equation (1), the conditional probability of GTR given M is therefore the same as the prior probability of GTR; hence, M cannot constitute evidence for GTR.
Oh man, do I have some thoughts on this problem. I think I even wrote a philosophy paper in undergrad that touched on it after reading Jaynes. But I’m going to refrain from commenting until after I finish the main post, because I think the old evidence problem illustrates several points that I want to make. In the meantime, what do *you* think of the problem? Is there a solution? What do you think of the solution Bandyopadhyay and Brittan propose in their paper?
Edit: Here’s a general statement of the problem. Suppose we have some well-known piece of evidence E. Everyone is aware of this evidence and there is no doubt about it, so P(E)=1. Next, suppose someone invents a new theory T that perfectly accounts for the evidence – it predicts it with 100% accuracy, so that P(E|T)=1. Then by Bayes’ rule we have P(T|E)=P(E|T)P(T)/P(E) = P(T), so the posterior and prior are identical and the evidence doesn’t actually tell us anything about T.
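To make the arithmetic concrete, here is a minimal sketch in Python. The function name and the prior value of 0.1 are just illustrative choices, not anything from the paper:

```python
def posterior(p_t, p_e_given_t, p_e):
    # Bayes' rule: P(T|E) = P(E|T) * P(T) / P(E)
    return p_e_given_t * p_t / p_e

prior = 0.1

# Old evidence: E is already certain (P(E)=1) and T predicts it
# perfectly (P(E|T)=1), so the posterior equals the prior.
print(posterior(prior, 1.0, 1.0))  # 0.1 -- no update at all

# For contrast: had E been uncertain, say P(E)=0.2, the same
# perfect prediction would have boosted T substantially.
print(posterior(prior, 1.0, 0.2))  # 0.5
```

The contrast case is the whole puzzle in miniature: the update comes entirely from P(E) being less than 1, which old evidence by definition is not.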
When we say art is “good” or “bad” what we mean is that it is “effective” or “ineffective” at things that are better or worse to be effective at.
This “effective at what” thing divides art up into species. Maybe phyla. It’s a bigger category than genre, because it doesn’t have anything to do with superficial content; it has to do with priorities and methods.
This post is thinking out loud, catalyzed by some stuff Rev said on Twitter. Partially digested, so quality is low, but I think the topic is important.
Anecdotes are much-maligned as sources of reliable information. “Anecdote is not the plural of data!”
Perhaps too maligned. Anecdotes *from a trusted source* – that part is necessary – can often answer a hidden query much more effectively than systematically collected data.
For example, if I look up crime rates while deciding whether to move to a new city, implicitly I am asking “Will my probability of being victimized rise when I move to this city?” But the crime stats don’t directly answer that question. Maybe there are tons of violent incidents per capita, but basically all of them are gang-related and geographically isolated. Or maybe this city is reporting “incidents” while the other city is reporting “criminal charges”, or one city decided to use geometric means for some bizarre reason. Or one of a hundred other such stories, many of which aren’t easy to rule out, especially if you didn’t think of them in the first place. And in receiving the stats from some authority like law enforcement or a social science researcher, it’s a huge leap to even trust the source to be honest in the first place!
On the other hand, consider a friend in the candidate city who tells you they’ve been mugged and they know two others who have. Not a huge sample, but one targeted to your reference class with much better precision than any systematically gathered data. There are sources of bias here too (maybe your friend has a habit of midnight strolls through Flea Bottom) but they are usually easier to notice and correct for. With systematically collected data, on the other hand, it is especially true that you don’t know what you don’t know.*
There are huge failure modes involved in reasoning via anecdote, which the biases & heuristics literature goes into in great detail. I will not recapitulate them, on the assumption that readers are familiar with them. The evidence type that is consistently worst is the filtered anecdote. This is the one that all your friends are sharing on Facebook. “Fundamentalist embroiled in bear-baiting scandal.” You are hearing about this because somebody doesn’t like fundamentalists, and for no other reason. Zero epistemic content.
I think I am fonder of reasoning by anecdote now because I’ve been burned quite a few times by data that turned out to be (a) lies, (b) filtered accidentally, (c) filtered deliberately, (d) misinterpreted. I am now a bit paranoid about the trustworthiness of authoritative sources of information, so I’m much more interested in the relevant experiences of trusted friends and family.
It occurs to me that this entire post is meant as a gentle criticism of the way I perceive previous-me and my audience to be likely to think, not of the way most people think. This has characteristics of a bravery debate.
So to avoid bravery debate framing, here is a table showing the tradeoffs.
| Anecdotes (from a trusted source) | Systematically collected data |
| --- | --- |
| Trustworthiness easier to verify | Trustworthiness harder to verify |
| Subject to suite of cognitive biases | Sometimes randomized to avoid bias |
| Small sample ∴ weak evidence | Large sample ∴ strong evidence |
| | More difficult to interpret |