Ask Tosomitu About Tumblr Drama

For the purposes of this article I will be using Tumblr drama to mean roughly “publicly calling out some entity’s ethical transgression (and ensuing discussions)”.

Starting with the obvious: not all Tumblr drama is created equal. In determining whether a particular contribution to the Tumblrdramasphere is positive or negative, I am concerned primarily with three classes of affected people and two classes of effects.

The first class of affected people is you (“the speaker”). What will the consequences of speaking be? How will speaking make you feel, and how would different possible responses (including hate speech and harassment) make you feel? You may or may not have the most at stake, but in either case you are a human being and how you feel matters. Consider doing a back-of-the-envelope expected personal utility calculation. (But if you do, throw it away immediately. It’s not accurate.)

Also consider how speaking will change you. Acts become habits.

Continue reading “Ask Tosomitu About Tumblr Drama”

Truthseeking about controversial things (status: indecisive rambling)

Suppose you ask me a question that is fraught (politically, or for some other reason), such as “Does the minimum wage do more good than harm?” or “Could a computer be conscious?” or “Is Lockheed Martin stock going to go up?”

(Not that you should ask me any of those things.)

What makes any particular answer I might give “objective” or “honest”? Or alternatively, what makes the process by which I arrive at my answer “objective” or “honest”?

Here are some prima facie plausible answers:

  1. The answer is objective & honest if it is, in fact, justified & true (faithfulness to facts).
  2. The answer is objective & honest if it fairly presents all reliable points of view, weighted by their reliability and by consensus of expert opinion (NPOV).
  3. The answer is objective & honest if it was arrived at by my best effort as an epistemic agent, either eliminating or acknowledging personal biases, conflicts of interest etc., and steelmanning alternative positions (process objectivity).
    1. People who have read Hanson or Taleb may want to turn the “no conflicts of interest” part on its head and assert a need to make a bet or to have “skin in the game”; i.e., a real-world incentive aligned with truth-seeking.
    2. Some have suggested discussing updates on evidence, rather than discussing posteriors directly.
  4. All possible answers are necessarily factional. “Objectivity” is not a coherent goal, but the most honest answer simply presents my factional view and reasons for it, while neither attempting to conceal which faction I belong to or the existence of other factional views, nor making special efforts to do them justice (factionalism a la Moldbug).
  5. The answer itself is the wrong level of analysis; you would be better off scrutinizing whether the answerer is an epistemically virtuous and responsible person (virtue epistemology).

I’m not really thrilled with any of these, but I don’t have a great alternative to offer up. (1) is charmingly simple, but too outcome-oriented; I don’t want to condemn a wrong answer that’s due to bad epistemic luck. (2) is exploded by the need to cash out “reliable” and “expert” and “consensus” in ways that aren’t blatantly factional. (3) is the position I am most attracted to (is that too obvious?). My problem with it is not its unattainability – after all, this definition is only meant to be an ideal to aim at. Rather, I fear that each additional attempt to eliminate bias represents another free variable for Rationalization to play with, in service of the Bottom Line. (4) seems unattractively defeatist or self-serving, and “denies the phenomenon” – I can remember occasions on which I believe people were genuinely objective and honest in their presentation of evidence/beliefs to me, although it’s hard to put my finger on what convinced me of this. (5) comes in second place, but I’m skeptical that people are reliably epistemically responsible from day to day or across domains.

What do you think? I suppose defining objectivity was just a jumping off point (perhaps an excessively abstract one); I’m more interested in the conditions under which you are willing to trust somebody’s statements on a controversial question.

The Last of the Monsters with Iron Teeth

In all species, the play of the young is practice for the essential survival tasks of the adults. Human children play at many things, but the most important is the play of culture. Out of sight of adults, children learn and practice the rhymes, rituals, and institutions of their own culture, distinct from that of adults.

The Western child today is mostly kept inside his own home, associating with other children only in highly structured, adult-supervised settings such as school and sports teams. It was not always so. Throughout history, bands of children gathered and roamed city streets and countrysides, forming their own societies each with its own customs, legal rules and procedures, parodies, politics, beliefs, and art. With their rhymes, songs, and symbols, they created and elaborated the meaning of their local landscape and culture, practicing for the adult work of the same nature. We are left with only remnants and echoes of a once-magnificent network of children’s cultures, capable of impressive feats of coordination.

Iona and Peter Opie conducted an immense study of the children’s cultures of the British Isles. The Lore and Language of Schoolchildren (1959) is comparable in richness to Walter Evans-Wentz’ The Fairy-Faith in Celtic Countries (1911) on the fairy faiths, or to Alan Lomax’ collections of American and European folk music.

Continue reading “The Last of the Monsters with Iron Teeth”

Socially Enforced Thought Boundaries

Sacred Boundaries of Thought

Boundaries are the forms that shape communities and civilizations. Without boundaries, communities cannot come into existence. These social boundaries are even mirrored in our cognition: common thoughts give us access to ordinary, related concepts, including common knowledge, but certain thoughts are relatively fenced off and rarely brought to mind.

Communities of all sizes are bound by their notions of the sacred. A violation of community sacredness – from casting aspersions on the rite of voting to mocking a family tradition – is experienced by community members as a kind of pain. This is one mechanism by which communities and their sacredness are maintained. People hurt each other, if “only” psychologically, when they violate each other’s sacredness.

A Christian who has a lot of social contact with non-Christians must build up a tough shell in order to withstand the (often accidental) violations to his sacredness that others are bound to make. It is much less painful for him to mainly interact with others who share his notions of the sacred. When a Christian and an atheist are good friends, however, each undertakes to feel the sacredness of the other, gaining the ability to be (vicariously) pained by its violation. They can speak easily on all topics without hurting each other, because each has learned to feel the other’s sacredness on his behalf. This is a precondition for intimacy.

Continue reading “Socially Enforced Thought Boundaries”

Accidental Trial by Fire

An Aeon piece by Dimitris Xygalatas has been making the rounds, describing how hazing and other rituals involving sacrifice or pain, physical or psychological, often serve as a sort of prosocial glue that keeps groups together and functioning well. Xygalatas goes as far as to measure the heart rates of people involved in a fire-walking ritual, and finds that he can predict how closely related two people are in their social network — e.g. spouse vs. close friend vs. stranger — just by the patterns in their heart rates throughout the night. This insight helps explain vast swaths of social behavior, from fraternity hazing to why people go to concerts. Do read the whole piece for better context and more convincing evidence.

What struck me while reading the piece is how many of these prosocial rituals are almost accidental in our modern lives. We seem to form strong social bonds with strangers in a random, haphazard fashion rather than with the people we’re likely to have extended contact with — e.g. family or local community members. I can really only speak to my own experience here, so I started listing the various painful “rituals” I’ve participated in to get a feel for how true my intuition was. I’ll list mine below, but I encourage you to list yours in the comments.

  • Enduring long (3+ hours) trips and spending too much money to play in Magic: the Gathering tournaments, often missing other important obligations as a result.
  • Enduring shorter trips and spending even more money to play in paintball tournaments, often missing other important obligations as a result. Also, getting shot is physically painful, and the most painful hits are more likely to occur in these tournaments.
  • Graduate school – especially core courses and qualifying exams.
  • Concerts – it’s hot, smoky, hard to see, uncomfortable seating, often had to travel a long distance, etc.
  • Various outdoor activities (but only sometimes) – camping, fishing, etc. At their worst, it’s way too hot or way too cold, it’s unpleasantly wet, and there’s sometimes an element of danger. Sometimes they require quite a trip too.
  • A small underground boxing club in high school
  • High school itself. School itself.

What stands out to me about this list is how many of the items are, or are attached to, some hobby that requires time, money, and travel, and aren’t typically things you do with your family and non-hobby friends – how many people are willing to travel for hours to get shot at with paint-filled pellets or to play a card game? But looking at the list, a smaller proportion of them have the accidental nature I expected. Traveling and enduring pain for hobbies that few in your family/community participate in does seem like a stereotypically modern behavior. Is anyone aware of any evidence one way or another?

Another thing I noticed is how many are associated with school. School is traumatic in a lot of ways (raise your hand if you’ve ever had a nightmare about somehow making a huge mistake at school), and ever notice how so many people feel such a strong connection to their alma mater? Perhaps the modern social order is held together by school and hobbies.

The Old Evidence Problem

I’m in the middle of writing up a post sketching some ideas I have about Bayesian inference in order to stir up a hornet’s nest – in particular to prod the hornet queen, David Chapman. In the process, I ran across this old blog post by Andrew Gelman discussing this (pdf) paper by Bandyopadhyay and Brittan criticizing one form of Bayesianism – in particular the form espoused by E.T. Jaynes. One of the issues they bring up is called the old evidence problem:

Perhaps the most celebrated case in the history of science in which old data have been used to construct and vindicate a new theory concerns Einstein. He used Mercury’s perihelion shift (M) to verify the general theory of relativity (GTR). The derivation of M is considered the strongest classical test for GTR. However, according to Clark Glymour’s old evidence problem, Bayesianism fails to explain why M is regarded as evidence for GTR. For Einstein, Pr(M) = 1 because M was known to be an anomaly for Newton’s theory long before GTR came into being. But Einstein derived M from GTR; therefore, Pr(M|GTR) = 1. Glymour contends that given equation (1), the conditional probability of GTR given M is therefore the same as the prior probability of GTR; hence, M cannot constitute evidence for GTR.

Oh man, do I have some thoughts on this problem. I think I even wrote a philosophy paper in undergrad that touched on it after reading Jaynes. But I’m going to refrain from commenting until after I finish the main post, because I think the old evidence problem illustrates several points that I want to make. In the meantime, what do *you* think of the problem? Is there a solution? What do you think of the solution Bandyopadhyay and Brittan propose in their paper?

Edit: Here’s a general statement of the problem. Suppose we have some well-known piece of evidence E. Everyone is aware of this evidence and there is no doubt about it, so P(E) = 1. Next, suppose someone invents a new theory T that perfectly accounts for the evidence – it predicts it with 100% accuracy, so that P(E|T) = 1. Then by Bayes’ rule we have P(T|E) = P(E|T)P(T)/P(E) = P(T), so the posterior and prior are identical and the evidence doesn’t actually tell us anything about T.
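Since the statement above is just arithmetic over Bayes’ rule, it can be checked numerically. A minimal sketch (the prior of 0.1 and the alternative P(E) = 0.5 are arbitrary illustrations, not numbers from the paper):

```python
# Numerical check of the old evidence problem via Bayes' rule.
def posterior(prior_T, likelihood, p_E):
    """P(T|E) = P(E|T) * P(T) / P(E)."""
    return likelihood * prior_T / p_E

# Old evidence: E is already certain (P(E) = 1) and T predicts it
# perfectly (P(E|T) = 1), so the posterior equals the prior.
print(posterior(prior_T=0.1, likelihood=1.0, p_E=1.0))  # 0.1 (no update)

# Contrast: had E been genuinely uncertain beforehand (P(E) = 0.5),
# the same perfect prediction would double our credence in T.
print(posterior(prior_T=0.1, likelihood=1.0, p_E=0.5))  # 0.2
```

The second call shows what the problem turns on: it is only because M was already known with certainty that conditioning on it leaves GTR’s probability unchanged.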

Ketchup and Artistic Effectiveness

When we say art is “good” or “bad” what we mean is that it is “effective” or “ineffective” at things that are better or worse to be effective at.

This “effective at what” thing divides art up into species. Maybe phylums. It’s a bigger category than genre, because it doesn’t have anything to do with superficial content, it has to do with priorities and methods.

Continue reading “Ketchup and Artistic Effectiveness”

Considerations on reasoning by anecdote

This post is thinking out loud, catalyzed by some stuff Rev said on Twitter. Partially digested, so quality is low, but I think the topic is important.

Anecdotes are much-maligned as sources of reliable information. “Anecdote is not the plural of data!”

Perhaps too maligned. Anecdotes *from a trusted source* – that part is necessary – can often answer a hidden query much more effectively than systematically collected data.

For example, if I look up crime rates while deciding whether to move to a new city, implicitly I am asking “Will my probability of being victimized rise when I move to this city?” But the crime stats don’t directly answer that question. Maybe there are tons of violent incidents per capita, but basically all of them are gang-related and geographically isolated. Or maybe this city is reporting “incidents” and the other city is reporting “criminal charges”, or one city decided to use geometric means for some bizarre reason. Or one of 100 other such stories, many of which aren’t easy to rule out, especially if you didn’t think of them in the first place. And of course, in receiving the stats from some authority like law enforcement or a social-science researcher, it’s a huge leap to even trust the source to be honest in the first place!

On the other hand, consider a friend in the candidate city who tells you they’ve been mugged and they know two others who have. Not a huge sample, but one targeted to your reference class with much better precision than any systematically gathered data. There are sources of bias here too (maybe your friend has a habit of midnight strolls through Flea Bottom) but they are usually easier to notice and correct for. With systematically collected data, on the other hand, it is especially true that you don’t know what you don’t know.*
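To make the “hidden query” point concrete, here is a toy calculation with entirely made-up numbers, showing how a headline per-capita rate can overstate the risk for someone outside the affected subpopulation:

```python
# Hypothetical city: 2% overall victimization rate, but 90% of incidents
# fall within a small, geographically isolated group of 5,000 residents.
population = 100_000
overall_rate = 0.02
incidents = population * overall_rate   # 2,000 incidents

concentrated_share = 0.90               # share of incidents in the isolated group
isolated_pop = 5_000

# Rate faced by everyone outside the isolated group.
general_rate = incidents * (1 - concentrated_share) / (population - isolated_pop)
print(f"{general_rate:.4f}")            # 0.0021, about a tenth of the headline rate
```

The headline statistic is accurate, yet nearly an order of magnitude off as an answer to the question the mover is actually asking.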

There are huge failure modes involved in reasoning via anecdote, which the biases & heuristics literature goes into in great detail. I will not recapitulate them, on the assumption that readers are familiar with them. The evidence type that is consistently worst is the filtered anecdote. This is the one that all your friends are sharing on Facebook. “Fundamentalist embroiled in bear-baiting scandal.” You are hearing about this because somebody doesn’t like fundamentalists, and for no other reason. Zero epistemic content.

I think I am fonder of reasoning by anecdote now because I’ve been burned quite a few times by data that turned out to be (a) lies, (b) filtered accidentally, (c) filtered deliberately, (d) misinterpreted. I am now a bit paranoid about the trustworthiness of authoritative sources of information, so I’m much more interested in the relevant experiences of trusted friends and family.

It occurs to me that this entire post is meant as a gentle criticism of the way I perceive previous-me and my audience to be likely to think, not of the way most people think. This has characteristics of a bravery debate.

So to avoid bravery debate framing, here is a table showing the tradeoffs.

Anecdote
  • Pros: relevance; specificity; trustworthiness easier to verify
  • Cons: subject to a suite of cognitive biases; small sample ∴ weak evidence; non-systematic

Data
  • Pros: systematic; sometimes randomized to avoid bias; large sample ∴ strong evidence
  • Cons: questionable relevance; more difficult to interpret; non-specific; trustworthiness harder to verify