Defensive Epistemology

Expecting that everybody should have an articulate opinion on the day’s pressing issues (“informed citizenship”) is pernicious. I state this without argument here; if you want it argued, see In Praise of Passivity by Michael Huemer.

Being convinced of this, for the past year or so I’ve been trying to implement the idea in my daily life, with mixed results. Now, part of the problem is simply that outrage porn is fun. But I think there is also a conceptual lacuna that makes it hard to articulate just what one is trying to do by “tuning out”, and why it is so difficult.

Let’s start with the first-person experience. Have you ever been in the position of arguing against an expert in the expert’s field? I suspect this happens to ordinary people most often in conversations with doctors, realtors, financial advisors, teachers/professors, salespeople, religious evangelists, and enthusiastically political relatives at Thanksgiving dinner.

(The latter is not so much a matter of expertise; it’s just that somebody who has pre-memorized talking points can usually carry an argument against somebody who wasn’t anticipating one.)

I find the experience of somebody talking circles around me very unpleasant, and I don’t think I’m unique in this. Of course, it is even more distasteful if there is an audience in whose eyes you are losing status. The impulse I feel in such situations is to hunker down, avoid losing face, and lash out at the other speaker with some “gotcha” calculated to make them appear foolish. In the worst cases, it may not be possible to escape the conversation without making concessions, unless you are willing to stoop to some sort of emotional All-In bet, like a fit of righteous anger or crying.

This is one reason why the conscious project of not forming opinions is difficult: you may not be interested in the Topical Issues, but they are interested in you. The world will bombard you with claims about crime statistics, interest rate predictions, the Rights of Man, history, Gini coefficients, genetics, and sundry other things. More relevantly, it will tell you that stock A is a sure thing, or that diet B is sure to help your child’s brain development. It’s no good to be completely ignorant about these things; not only will you lose face, but your interlocutor can make their argument unopposed and lead you to concede they’re right. (Another approach is to pretend to take pride in your ignorance, but my guess is that readers of this blog can’t pull that off easily.)

What is needed is a battery of defensive arguments and ideas. Their purpose is (a) to serve as sanity checks on unfamiliar ideas, and (b) to get irritating interlocutors off your back.

For example, an Efficient Markets heuristic can help you hold your own against realtors and financial advisors trying to pull the wool over your eyes. “Why isn’t the stuff you just mentioned priced into the stock already?” “If houses are cheaper in the winter, why aren’t millionaires loading up on houses in winter to sell them in summer, until the difference goes away?”
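To see the force of the second retort, here is a toy calculation; every number in it is invented for illustration, not a claim about real housing markets.

```python
# Hypothetical numbers: suppose houses reliably sell 8% cheaper in winter.
summer_price = 500_000
winter_price = 460_000    # the same house, bought in the winter trough
carrying_costs = 15_000   # six months of financing, taxes, fees (made up)

# Buy in winter, sell in summer.
profit = summer_price - winter_price - carrying_costs
print(profit)  # 25000 -- roughly a 5% near-riskless return in six months
```

If such a gap really persisted, investors would bid winter prices up (and summer prices down) until the profit net of carrying costs was roughly zero; so a claimed persistent discount should make you suspect it is illusory, or that it compensates for some cost you haven’t been told about.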

Another example is vigilance against selection effects. “You say this is a good school – but I bet it just takes in unusually good students.” (Tip: if you actually say “selection bias” here, it sounds very authoritative.)
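The selection-effect retort can be made vivid with a tiny simulation (a sketch with made-up numbers): two schools teach equally well, but the one that admits only strong students produces better-scoring graduates anyway.

```python
import random

random.seed(0)

def graduate_score(ability):
    # Both schools add the SAME teaching effect (here: zero) plus noise,
    # so any gap in outcomes comes purely from who was admitted.
    return ability + random.gauss(0, 5)

applicants = [random.gauss(100, 15) for _ in range(10_000)]

# The "good" school simply admits the strongest applicants.
selective = [graduate_score(a) for a in applicants if a > 115]
everyone_else = [graduate_score(a) for a in applicants if a <= 115]

def mean(xs):
    return sum(xs) / len(xs)

# A gap of roughly 25-30 points, despite identical teaching.
print(mean(selective) - mean(everyone_else))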

Another example is a suite of basic game-theoretical and strategic ideas, like “if once you have paid him the Dane-Geld, you never get rid of the Dane”, and “people respond to incentives”.

Yet another is a general lack of faith in Interventions to change somebody’s life course; a heuristic of genetic determinism as a baseline prior. This doesn’t work so well as an *argument*, but simply as a prior, it’s helpful to know that the differences between, say, parenting styles don’t seem to lead to huge divergences in results, and that throwing lots of money at social problems more or less always has epsilon effect.

I invite readers to contribute to a list of other such ideas in the comments. I chose the examples above because I have actually used each of them on more than a few occasions.

The key feature of this argument class is that it requires no detailed background knowledge. Ideally, these arguments rely only on basic logic or simple empirical regularities. For example, I don’t actually know anything about the teaching quality of the school in question – all I know is why that quality is hard to evaluate.

Note also that this is a basically negative project. Its emphasis is on checking others’ positive claims about the world, not creating new ones. Its goal is to make you antifragile against ideas, not to help you build a great edifice of theory.

So, to the person who wishes to divest themselves of pointless opinions and refocus on near-mode stuff: I propose that if you go too far in that direction, you just make yourself exploitable. You need to practice the art of defensive epistemology, or risk being a sucker (or at least losing status). And that probably requires some engagement with the world of ideas, to train your discipline against actual enemies.

*You may notice that this sounds like a manifesto for the skeptic movement. I’m tempted to talk about why skeptics in practice are disappointing, but I will leave that discussion for now.


To be sure I’m understanding him, Gabriel is saying that one should cultivate impatience for busybody ethical “interventions” with ~0 expected benefit. Recycling (in the blue-boxes sense anyway) is a good example, but so are: unplugging phone chargers, endlessly haranguing smokers, posting “let’s take the stairs” signs on elevators, Raising Awareness, looking for satanic/sexist messages in rock songs and video games, etc.

I am sympathetic to Gabriel’s irritation with certain of these little “gestures”, such as the recycling ritual. But there is a thing or two to be said in defense of such rituals.

First, sometimes prosocial actions appear “low-leverage” because not many people have defected from the prosocial norm yet. (The first person to overgraze their sheep on the village green doesn’t see what the big deal is – there’s still loads of grass to spare. The first person to *fail to yell at* the first overgrazer, even less so.)

They may also appear low-leverage because practically everybody is already defecting from the prosocial norm. (There’s only a tuft of grass left; what difference does it make if my sheep finish it off?) In other words, rationalizations for defection are especially available at the beginning and the end of tragedies of the commons.

Second, prosociality rituals, even low-leverage ones, help maintain social capital. Conspicuous blue box recycling may be useless qua environmental intervention, but it signals to my neighbour that e.g. I am not the kind of person who will turn a blind eye when he fails to pick up his dog’s leavings. Social capital, supported by a huge edifice of cultural norms, is relatively invisible to the fish that swim in it every day, and like physical infrastructure it doesn’t disappear the very second its beneficiaries fail to maintain it. But disappear it eventually does, and the transition may be very swift indeed*.

It is also worth bearing in mind that norms can be a substitute for laws, usually operating in domains where the law would be too blunt an instrument. The Latin formula “de minimis non curat lex” (the law does not deal in trifles) sums up this attitude. It’s not worth having a law requiring people to hold open doors for old ladies: it’s not important enough, it would cost too much in money and time, and it would require all sorts of carefully specified exceptions (it would also reduce the signalling value of the behaviour, but I can’t decide whether that would be good or bad).

Yet holding doors for old ladies, while trifling, is one of many behaviours that by increments improve our social environment. Others include always giving the correct change even when you could cheat, washing regularly, thanking people, picking up your dog’s leavings, shovelling your sidewalk and maybe even your neighbour’s, and not making too much noise at night. Individually these things seem like trivialities, but failing at all of them adds up to misery by a thousand cuts. So if you want these trifles taken care of, but prefer not to have Sin Laws, get norming.

I said at the beginning that I was sympathetic to Gabriel’s point. I think the line between “Reusable Bags” in his pejorative sense and Social Capital Maintenance, which I want to boost a little, lies partly in who the audience is. When I take some low-leverage prosocial action, am I signalling at somebody whose own prosocial or antisocial actions affect my life (a neighbour, a friend, a member of my twitter circle, a family member), or am I degenerately signalling at some super-Dunbar audience of total strangers? These things feel very similar from the inside, but it is worth distinguishing them, because insufficient norming in real life leads to the soul-sucking anomie a lot of us live with, while norm enforcement chimp-outs are ruining the internet.

As usual, the opinions stated above are strongly stated, but loosely held.

* My crackpot theory is that cultural shifts in acceptable behaviour are often so swift because (trigger warning: handwaving) Common Knowledge about what social norms will be enforced easily collapses as soon as there is appreciable diversity-of-norms – look at a graph of e.g. divorce rates from 1960 to 1980.

Truthseeking about controversial things (status: indecisive rambling)

Suppose you ask me a question that is fraught (politically, or for some other reason), such as “Does the minimum wage do more good than harm?” or “Could a computer be conscious?” or “Is Lockheed Martin stock going to go up?”

(Not that you should ask me any of those things.)

What makes any particular answer I might give “objective” or “honest”? Or alternatively, what makes the process by which I arrive at my answer “objective” or “honest”?

Here are some prima facie plausible answers:

  1. The answer is objective & honest if it is, in fact, justified & true (faithfulness to facts).
  2. The answer is objective & honest if it fairly presents all reliable points of view, weighted by their reliability and by consensus of expert opinion (NPOV).
  3. The answer is objective & honest if it was arrived at by my best effort as an epistemic agent, either eliminating or acknowledging personal biases, conflicts of interest etc., and steelmanning alternative positions (process objectivity).
    1. People who have read Hanson or Taleb may want to turn the “no conflicts of interest” part on its head and assert a need to make a bet or to have “skin in the game”; i.e., a real-world incentive aligned with truth-seeking.
    2. Some have suggested discussing updates on evidence, rather than discussing posteriors directly.
  4. All possible answers are necessarily factional. “Objectivity” is not a coherent goal, but the most honest answer simply presents my factional view and the reasons for it; it conceals neither which faction I belong to nor the existence of other factional views, but makes no special effort to do them justice (factionalism a la Moldbug).
  5. The answer itself is the wrong level of analysis; you would be better off scrutinizing whether the answerer is an epistemically virtuous and responsible person (virtue epistemology).

I’m not really thrilled with any of these, but I don’t have a great alternative to offer up. (1) is charmingly simple, but too outcome-oriented; I don’t want to condemn a wrong answer that’s due to bad epistemic luck. (2) is exploded by the need to cash out “reliable” and “expert” and “consensus” in ways that aren’t blatantly factional. (3) is the position I am most attracted to (is that too obvious?). My problem with it is not its unattainability – after all, this definition is only meant to be an ideal to aim at. Rather, I fear that each additional attempt to eliminate bias represents another free variable for Rationalization to play with, in service of the Bottom Line. (4) seems unattractively defeatist or self-serving, and “denies the phenomenon” – I can remember occasions on which I believe people were genuinely objective and honest in their presentation of evidence/beliefs to me, although it’s hard to put my finger on what convinced me of this. (5) comes in second place, but I’m skeptical that people are reliably epistemically responsible from day to day or across domains.
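Suggestion 3.2 above – report updates rather than posteriors – can be made concrete with the odds form of Bayes’ rule (a sketch with hypothetical numbers): a posterior tangles prior and evidence together, while a reported Bayes factor lets listeners with different priors each update correctly.

```python
def update(prior_odds, bayes_factor):
    # Bayes' rule in odds form: posterior odds = prior odds * Bayes factor.
    return prior_odds * bayes_factor

# Hypothetical: a piece of evidence is 4x as likely if hypothesis H is true.
bayes_factor = 4.0

# Two people with very different priors on H.
skeptic_odds = 1 / 9    # P(H) = 0.10
believer_odds = 3.0     # P(H) = 0.75

# If the believer reports only their posterior odds (12), the skeptic
# cannot disentangle prior from evidence. Reporting the Bayes factor
# instead lets each party apply it to their own prior:
print(update(skeptic_odds, bayes_factor))   # ~0.44, i.e. P(H) ~ 0.31
print(update(believer_odds, bayes_factor))  # 12.0, i.e. P(H) ~ 0.92
```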

What do you think? I suppose defining objectivity was just a jumping off point (perhaps an excessively abstract one); I’m more interested in the conditions under which you are willing to trust somebody’s statements on a controversial question.

Considerations on reasoning by anecdote

This post is thinking out loud, catalyzed by some stuff Rev said on Twitter. Partially digested, so quality is low, but I think the topic is important.

Anecdotes are much-maligned as sources of reliable information. “Anecdote is not the plural of data!”

Perhaps too maligned. Anecdotes *from a trusted source* – that part is necessary – can often answer a hidden query much more effectively than systematically collected data.

For example, if I look up crime rates while deciding whether to move to a new city, implicitly I am asking “Will my probability of being victimized rise when I move to this city?” But the crime stats don’t directly tell me that. Maybe there are tons of violent incidents per capita, but basically all of them are gang-related and geographically isolated. Or maybe this city is reporting “incidents” while the other city is reporting “criminal charges”, or one city decided to use geometric means for some bizarre reason. Or one of 100 other such stories, many of which aren’t easy to rule out, especially if you didn’t think of them in the first place. And in receiving the stats from some authority like law enforcement or a social-science researcher, it’s a huge leap even to trust the source to be honest!

On the other hand, consider a friend in the candidate city who tells you they’ve been mugged and they know two others who have. Not a huge sample, but one targeted to your reference class with much better precision than any systematically gathered data. There are sources of bias here too (maybe your friend has a habit of midnight strolls through Flea Bottom) but they are usually easier to notice and correct for. With systematically collected data, on the other hand, it is especially true that you don’t know what you don’t know.*
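As a toy illustration of the hidden-query problem (every number here is hypothetical, including the timeframe I attach to the friend’s report):

```python
# The hidden query is "what is MY risk?", not the city-wide rate.
citywide_rate = 0.02        # violent incidents per resident per year
gang_related_share = 0.9    # suppose most incidents never touch my reference class

# What the headline statistic implies for me, after a correction I may
# not even know I need to make:
my_rate_from_stats = citywide_rate * (1 - gang_related_share)

# The friend's report: 3 muggings among roughly 5 people like me, over
# (let's say) 10 years each -- crude, but aimed at the right reference class.
my_rate_from_anecdote = 3 / (5 * 10)

print(round(my_rate_from_stats, 4), my_rate_from_anecdote)
```

The two estimates differ by a factor of 30. The point is not that the anecdote is right; it is that the statistic answers a different question unless you already know which corrections to apply.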

There are huge failure modes involved in reasoning via anecdote, which the biases & heuristics literature goes into in great detail. I will not recapitulate them, on the assumption that readers are familiar with them. The evidence type that is consistently worst is the filtered anecdote. This is the one that all your friends are sharing on Facebook. “Fundamentalist embroiled in bear-baiting scandal.” You are hearing about this because somebody doesn’t like fundamentalists, and for no other reason. Zero epistemic content.

I think I am fonder of reasoning by anecdote now because I’ve been burned quite a few times by data that turned out to be (a) lies, (b) filtered accidentally, (c) filtered deliberately, (d) misinterpreted. I am now a bit paranoid about the trustworthiness of authoritative sources of information, so I’m much more interested in the relevant experiences of trusted friends and family.

It occurs to me that this entire post is meant as a gentle criticism of the way I perceive previous-me and my audience to be likely to think, not of the way most people think. This has the characteristics of a bravery debate.

So to avoid bravery debate framing, here is a table showing the tradeoffs.

|          | Pros | Cons |
|----------|------|------|
| Anecdote | Trustworthiness easier to verify | Subject to a suite of cognitive biases; small sample ∴ weak evidence |
| Data     | Systematic; sometimes randomized to avoid bias; large sample ∴ strong evidence | Questionable relevance; more difficult to interpret; trustworthiness harder to verify |

Formic metaethical handwringing

Sisters, ethics and metaethics have been challenged of late by findings of Leafcutter IV’s caste of natural philosophers. There can hardly be an intellectual caste that has not heard these findings, but let us summarize them briefly.

Sea voyages by mobile colonies have brought to light the curious geographic distribution of non-hymenopteral species, a distribution that hints at their common ancestry in a time of differently-positioned continents. For example, one finds a “belt” of fern species across the southeastern continent which, if the continent were rotated 60 degrees clockwise, would be continuous with a similar belt on the southwestern continent, though the species are subtly different.

Meanwhile, we find a fossil record of arthropods going back to the Cretaceous, continuous with itself and with modern arthropods such as ourselves. In addition, evidence from mammalian species such as whales shows “vestigial” bones serving no function in aquatic life, but homologous to similar bones in land mammals such as camels.

These and other lines of evidence compel the belief that all life on earth, including (most controversially) Myrmecia regnans itself, shares a common ancestor in the distant past. The explanation offered by Leafcutter IV’s natural philosopher subjects is that species change by (1) random birth differences (in M. regnans, between Queens and between drones), (2) filtered by the non-random vicissitudes of life, death, and reproduction. This process has come to be called “differential regicide”.

It is not our purpose to argue in detail for this view of natural history; rather, we take it as given and point out consequences for metaethics.

Before these insights became widely known, philosopher castes had developed a special distaste for what Grasshewer XIII’s thinkers memorably called “nature fealty”. As they put it:

It often happens that we read some tract upon the topic of how fealty should be done to our Queens and their subjects, wherein the authors after a series of statements using the copula ‘is’ and speaking of states of affairs in nature, march effortlessly and without explanation into an imperious ‘must’. We must realize that the connection never follows as a matter of course but is to be argued for.

However, in formulating the case against nature fealty, Grasshewer XIII’s subjects did not reckon on how their principle might undermine the foundations of ethics in an age in which the factual origin of M. regnans’s moral feeling is beginning to be perceived.

To take a concrete example: no duty is so sacred to a caste of myrmides as that of defending their Monarch. Many ethical philosophers would call this obvious or at least axiomatic: any argument you might use to defend it would rely on axioms that are, if anything, less certain than the imperative of defending one’s Queen.

It is also easy to perceive how such a heritable sense of duty, instilled by birth in the proto-minds of elements and in the minds of the castes they supervene on, promotes the survival of Royal Lineages with strong senses of duty. We grant that this is a mere speculation on the origin of such a sense, yet it is a powerfully plausible one. Let us take it, as well, as a given.

Now the challenge which the new natural philosophy poses to ethics is to say why such an understanding does not essentially explain away Royal fealty. After all, if fealty only happens to be the most sacred duty in our minds, put there by accident by the more or less anarchic forces of differential regicide, then how can we possibly say that fealty is legitimate in any non-arbitrary sense? We could, by sheer historical accident, have gone the way of the solitary albatrosses, in which case our moral law would put the interests of a single element ahead of loyalty to any Monarch! Such imaginings are not pleasant, and frankly reek of sedition.

It is obvious that no caste of philosophers would actually endorse such a vulgar ethic, and that the eternal values of fealty and self-sacrifice remain strong. What has been dissolved, however, is the sense among philosophers that the moral law can be justified on its own terms.

Previous to our understanding of the roots of fealty in differential regicide, we could hold out hope that fealty was the result of some process of universal reason, such that any account of fealty would be intrinsically motivating. (Such argument types are common as regards matters of simple rationality: for example, anyone who understands the meaning of modus ponens cannot fail to wish not to violate it.)

Now it appears we must be content with an explanation of why elements and castes do show fealty that does not, itself, motivate fealty. This leaves the task of motivating fealty for another argument to perform, but such arguments are wanting in plausibility.

The NPOV Strikes Again

From Wikipedia:

Planned Parenthood receives about a third of its money in government grants and contracts (about $360 million in 2009). By law, federal funding cannot be allocated for abortions, but some opponents of abortion have argued that allocating money to Planned Parenthood for the provision of other medical services “frees up” funds to be re-allocated for abortion.

“Some opponents”.

As the saying goes, all money is green. In other words, money is fungible – a concept Wikipedia helpfully links to. If you give me a $10 bill and tell me not to spend it on booze, what are you really saying? That that particular serial-numbered bill should not be used in a liquor store? Okay, I’ll use the two fivers in my wallet for booze, and spend “your” $10 on the food I was eventually going to have to get anyway. And what about electronic money? Dreams about dreams.

Literally the ONLY constraint “no federal funds for abortions” puts on Planned Parenthood is that the dollar sum of their non-abortion services must be greater than or equal to the dollar sum of federal Title X money received (assuming no other donors restrict where their dollars go). It obligates them to play a little accounting shell game, but in the counterfactual world where PP does not receive federal money, they almost certainly perform fewer abortions; thus, federal funding causes more abortions to occur.

(No comment is implied on whether that is good or bad. It is certainly bad from the point of view of the abortion opponents this provision is supposed to mollify.)
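The accounting argument can be written out as a toy budget. Only the ~$360 million grant figure comes from the excerpt; every other number below is invented for illustration.

```python
# All figures in $ millions; only the grant amount comes from the excerpt.
title_x_grant = 360        # federal money that "may not fund abortions"
other_revenue = 640        # hypothetical unrestricted revenue
non_abortion_costs = 700   # hypothetical
abortion_costs = 300       # hypothetical

# The earmark's only bite: federal dollars must be bookable against
# non-abortion services.
assert non_abortion_costs >= title_x_grant

# With the grant, the whole budget -- abortions included -- is covered.
assert title_x_grant + other_revenue >= non_abortion_costs + abortion_costs

# Without it, unrestricted revenue alone cannot cover everything, so
# something gets cut.
assert other_revenue < non_abortion_costs + abortion_costs

print("earmark satisfied, yet the grant expands total capacity")
```

Because money is fungible, the earmark constrains the bookkeeping, not the overall mix of services.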

Critiquing “Vulgar Morality”

Adam Gurri’s piece on rejecting what he calls “Telescopic morality” (also see here and here) has been on my mind a lot lately. I happened to come across this writing at a time when life was already forcing a near-mode orientation onto me (so on the outside view, this philosophy is rather self-serving). I will join in calling the philosophical/practical orientation under discussion “vulgar morality”.

What does it entail? In a nutshell: your ethical life would improve if you focussed your attention on local (i.e., close to you in time/space/relationship) & concrete questions, at the expense of global & abstract questions.


  • Study basic personal finance before debating macroeconomics.
  • Join your condo board and change their pet policy before weighing in on geopolitics.
  • Help out a relative with their leaky toilet before trying to solve The Middle East.
  • Get out of the habit of snapping at your spouse before pontificating about optimal gender relations.
  • Make something someone is actually willing to pay you for, before saving the world for free.
