Adam Gurri’s piece on rejecting what he calls “Telescopic morality” (also see here and here) has been on my mind a lot lately. I happened to come across this writing at a time when life was already forcing a near-mode orientation onto me (so on the outside view, this philosophy is rather self-serving). I will join in calling the philosophical/practical orientation under discussion “vulgar morality”.
What does it entail? In a nutshell: your ethical life would improve if you focussed your attention on local (i.e., close to you in time/space/relationship) & concrete questions, at the expense of global & abstract questions.
- Study basic personal finance before debating macroeconomics.
- Join your condo board and change their pet policy before weighing in on geopolitics.
- Help out a relative with their leaky toilet before trying to solve The Middle East.
- Get out of the habit of snapping at your spouse before pontificating about optimal gender relations.
- Make something someone is actually willing to pay you for, before saving the world for free.
One obvious question is: why does this even need to be said? People don’t usually need reminders to be partial to their own interests & those of their family. I am not sure of the answer, although part of it must be that whenever people go into ultra-far mode, politics is usually involved, and politics is catnip for chimps. Also, expensive knowledge about abstract, global issues is the way people of a certain upbringing signal their intelligence and righteousness to each other. Gurri thinks that there is something pathological in all of this.
Obviously, I am very sympathetic. What interests me today is what can be said against this view.
Objection #1: “Vulgar morality” is just another weapon to whack political opponents over the head with. (Compare “confirmation bias”, of which about 8/10 tokens are garden-variety hackery.) Thus the usual special pleading formula: he is a ridiculous telescopic moralist; you should focus on local issues; I have a special connection with Ukraine. If you are to avoid this charge, I had better not catch you opining on the same global, distant, abstract issues you chide others for talking about.
Objection #2: There aren’t really any significant attention tradeoffs here. Reading Paul Krugman’s blog doesn’t prevent me from learning about index funds or budgeting. I do the former out of sheer interest, and it’s perfectly fine for people to have interests that are “far mode” (anyway, could anything possibly be *more* far-mode than theorizing about far mode and near mode?). Those who don’t have any such interests are sometimes called “parochial” or “ignorant”, and in my experience they are rather boring.
Objection #3: In practice, this idea counsels defection in various important social coordination games. For example, the surveillance state creeps into our lives more and more because few have a concentrated, salient interest in opposing it. But given the small impact of surveillance on you, and the expected small impact at the margin of making your voice heard, inaction is rational by the lights of “vulgar morality”. This is just another name for disengagement with important issues, in favour of unapologetic selfishness and short-run orientation (all this talk of “family, friends and community” sounds like a whitewash).
Objection #4: So-called telescopic morality is actually underrated. Part of the reason for this is that the local problems of rich, educated westerners are really expensive to improve (despite better local knowledge etc.) as compared with many “telescopic” causes. For example, GiveWell gave the expected cost per death averted for the Against Malaria Foundation in 2012 as approximately $2300. The same sum spent in the developed world would probably not buy so much low-hanging fruit (I invite you to consider how much money it would take to accomplish something really significant in your life or that of a relative or friend). Telescopic morality is leveraged by differences in marginal utility into something that can be quite powerful.
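The leverage claim at the end of this objection is just a ratio of marginal costs, which can be made concrete (a rough sketch; the $2300 figure is GiveWell's 2012 AMF estimate quoted above, while the "local" figure is a purely hypothetical placeholder, not anything from the source):

```python
# Back-of-the-envelope sketch of the marginal-utility leverage argument.
# The AMF figure is GiveWell's 2012 estimate quoted above; the "local" figure
# is hypothetical, invented only to illustrate the ratio.
cost_per_death_averted_amf = 2_300       # USD (GiveWell, 2012)
cost_of_comparable_local_gain = 250_000  # USD, hypothetical placeholder

leverage = cost_of_comparable_local_gain / cost_per_death_averted_amf
print(f"telescopic leverage: roughly {leverage:.0f}x")  # roughly 109x
```

On these (partly hypothetical) numbers, a marginal dollar does on the order of a hundred times more abroad, which is the sense in which differences in marginal utility act as leverage.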
Can you think of any other unique objections to the vulgar morality/local ethics/bourgeois virtue framework?
As an aside, it appears Rev is about to discuss roughly the same topic. Sorry for budging.
17 thoughts on “Critiquing ‘Vulgar Morality’”
I’m pretty convinced that public health projects are an excellent example of telescoping morality accomplishing (enormous) net positives. I think there are mathematical and heuristic reasons why we should expect that to be true. I suspect the success of public health projects in the late 1800s through about 1950 drove a lot of the development of the modern bureaucratic state. It’s plausible to me that most non-public-health projects, and the general drive to telescoping morality in the progressive frame, have been net harmful.
Great post, all powerful critiques. There is definitely a pejorative tone to telescopic morality, one that I don’t subscribe to. My version is a bit more descriptive and psychological. I see two different construal levels for ethical reasoning – near and far. My problem isn’t with abstract or global moral problems per se, but with misapplication of the near mode to far problems, or vice versa. For instance, our moral instincts for a family are fine for a family, but misapplied when instituted in a paternalistic or fascistic regime. It’s a kind of category error. We have Good Samaritan style moral instincts for living in close quarters that also misfire when people feel the urge to go on humanitarian missions overseas. Making a global impact turns out to require abstract level thinking as well, like being in the position to influence policy in a pro-development direction, or by inventing a vaccine. The instinct is to go direct to the person and build them a shelter, but the effectual route turns out to be one involving a decade of education and a bunch of clinical trials.
Note that Adam Gurri wrote an excellent response to my post here.
…isn’t that the whole point of GiveWell, and “effective altruism” in general? Why would you care about them if you don’t agree with them that this is key?
Let us distinguish 3 common ways of knowing that your interventions are doing good.
(1) Direct knowledge of people involved, who have good direct info. I fix your toilet, you thank me, and next time I come over, I notice that the toilet still flushes.
(2) Assurances from people involved, who have relatively little direct info. Engineers Without Borders built a well. It was working OK when they left, and the people there seemed happy.
(3) Statistics from people involved, who have good indirect info. AMF distributed malaria nets and then monitored trends in reported malaria cases at local hospitals, as well as conducting random surveys of users.
I think (1) > (3) > (2), at least on the criterion of “certainty that you’re helping”. The GiveWell approach is an improvement over standard fire-and-forget charity, but not as good in this respect as direct local knowledge.
Direct local knowledge does have failure modes, obviously. And you can still bring up marginal utility, which pushes you in GiveWell’s direction.
How about this objection- this argument is rarely offered in good faith. It is instead an effort at shutting people up. When was the last time you heard someone make a telescopic morality critique of someone, and then retract it when the target turned out to have engaged in the activity in question?
“What the Middle East needs is…”
“You probably shouldn’t try to solve the Middle East Crisis until you’ve figured out how to patch up your marriage.”
“What? My marriage is fine! You’re thinking of my brother, he’s getting a divorce.”
“Oh, I’m sorry, carry on then. You’re precisely the kind of person we need working on this.”
Never. Happens. Telescopic morality is a way of manipulating people via the ethical norms of humility. The implicit message isn’t the explicit one- it’s something more like, “How dare YOU think you can have something to contribute on these big and important issues! Look at you! You’re so much less than you should be.”
That’s basically what I meant by my objection #1. You could make a similar charge against the use of “signalling” as a critique of certain rhetoric.
Any argument type can be used for epistemically bad purposes though, and “telescopic morality” is certainly pointing at very real phenomena (I think immediately of the endless depths of passion you see about Israel & Palestine from people with absolutely no connection to or expertise about them).
I am also inclined to say that, like knowledge about cognitive biases and heuristics, it’s usually best directed inward. In my circles it is Not Done to call anyone but oneself biased, at least not without pretty good evidence. In any event, part of the reason the “telescopic morality” framework resonates with me is because I recognize something of my former self in it.
If I could double like this, I would.
True story: Taleb blocked me on Twitter because I was making the case for something like your circles’ ethics.
Twitter with Taleb is like tea with Torquemada.
Examples 2 and 4 even instinctively look WAY more valid to me, and this seems to be borne out by how isomorphic/easy to extrapolate they are (i am not englishing good).
1 is unreflexively anti-Keynesian, and a person following it would only discover that it’s anti-Keynesian after already being primed by personal experiences. 3: People can have skillsets that are good for contributing to epic mass-scale projects and far-mode undertakings, but awful at dealing with local annoyances; without leveraging such, there could in fact be no Modernity, and Modernity was awesome (well, for its time). Likewise for 5, getting paid has very little correlation with producing objective value, in large part because of systemic forces that determine what counts as work and who deserves to get paid for it. (Insert the usual Marxist and feminist examples.)
But yes, I would indeed take 2 and 4 as solid advice in personal life; the problem with this genre of writing is that all too often it uses things like 2 and 4 as the motte and things like 1/3/5 as the bailey – sometimes with specific malign intent in mind, as mentioned above.
Yes, good point. I think this is another unique objection to Gurri’s framework.
I’m having trouble generating Marxist examples that seem valid, although the feminist ones are clear. But yes, meant only as a ceteris paribus indication that your actions may not be useful.
(I followed a link here from slatestarcodex. Hope commenting is appropriate.)
The key to the Vulgar Morality argument seems to be that you can’t do anything about the big issues, so your involvement there is only posturing. To quote your second link:
I say this is the key to the whole thing because without this the vulgarists are trying to argue that the suffering of Darfurians or gays has no moral weight. I think very few people would take that seriously.
So, given how important this point is, what are the arguments in favor? Well, there’s a list of things people do (attend an anti-poverty rock concert, drive a prius) that don’t work. And there’s an intuitive sense that the world is big and an individual is small. As far as I can tell, that’s it. These arguments don’t hold much water.
The fact that many have failed has no bearing on whether you can succeed. The vast majority of great successes are preceded by many failures.
As for the scale, there are plenty of cases of the world being changed by individuals. Let us assume that our generation has a Borlaug and a Gandhi. The odds that either of them is you is 1 in 7 billion. Could be worse. That’s 33 bits. Now aim a little lower. Be part of the team that invents the cure for a not particularly famous disease that still affects millions. Write a database frontend that humanitarian response teams can coordinate with. Provide the encouragement that prevents burnout from claiming a particularly skilled diplomat who will go on to prevent the next genocide. These are still pretty big contributions. How many people act on that scale? Thousands? Maybe 20 bits needed. Take 3 free bits for living in the first world with the resources that entails and some more for being the sort of person who reads moral philosophy in the first place (i.e. intelligent, agenty, trying to be good — probably at least 2 stdevs of each). The claim that I can succeed is no longer extraordinary, and no longer requires extraordinary evidence.
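The bits arithmetic in this comment can be checked directly (a quick sketch using the commenter's round numbers; the pool size of 7,000 is my own round stand-in for "thousands"):

```python
import math

# "The odds that either of them is you is 1 in 7 billion" -> bits required
world_population = 7_000_000_000
bits_borlaug = math.log2(world_population)       # ~32.7, i.e. about 33 bits

# Aiming lower: "thousands" of people contribute on the smaller scale.
# 7,000 is a round stand-in, giving odds of about one in a million.
pool = 7_000
bits_lower = math.log2(world_population / pool)  # ~19.9, i.e. about 20 bits

# "3 free bits for living in the first world": restricting attention to
# roughly 1/8 of the world's population buys exactly log2(8) = 3 bits.
bits_first_world = math.log2(8)

print(round(bits_borlaug, 1), round(bits_lower, 1), bits_first_world)
# 32.7 19.9 3.0
```

Each halving of the candidate pool is one bit, so the gap between "I am this generation's Borlaug" and "I am one of the thousands making a solid smaller-scale contribution" is about 13 bits of prior improbability, before the free bits are counted.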
There’s a rationalist proverb: before you call anything impossible spend five minutes by an actual clock thinking of ways to do it.
Darfur? It’s amazing what Doctors Without Borders could do with the cost of the aforementioned prius.
Tibet? I have nothing offhand but you might ask telecomix if they have any grunt-work that needs doing.
I’ll skip abortion because there’s no consensus here about what improving matters would mean.
Gay rights? A lot of the remaining problems revolve around personal (rather than institutional) discrimination. A matter of actually changing hearts and minds. Exactly the sort of problem a skilled writer could address, at least one who was willing to tactically seek the relevant audience.
I don’t claim these are the best things you could do. In fact, I spent less than five minutes thinking of them. Nor do I claim these are the best problems for the original author to work on.
But to declare all problems impossible without any serious attempt to work on them isn’t humility; it’s sloth. And it gets people killed.
Hi Daniel! Yes, we absolutely welcome comments!
Adam Gurri attempted to answer roughly your objections in a piece he linked me to after I wrote this post, here.
My personal reaction to your comment, which Gurri may or may not agree with, is that there is a fair bit of low-hanging fruit for far-mode altruism (e.g., malaria) but we shouldn’t exaggerate it.
I am also skeptical about (a) the marginal impact of far mode pursuits (I suspect it’s very hard for one person to make a difference), (b) the epistemic justification of such pursuits (how sure are we that they are doing significant amounts of good, or that they aren’t actually harmful?).
That last point is quite relevant. Consider Rachel Carson’s 1962 book “Silent Spring”, which made a case for banning DDT because of its impact on wildlife. It largely succeeded in doing so, but others have argued that the cost was unavailability of a cheap defense against malaria-carrying mosquitoes.
Now, I haven’t spent the time to be absolutely sure that that analysis is correct, but the fact that it’s even in question is revealing. One can easily imagine going from “Banning DDT saved the environment!” to “Oh, maybe banning DDT contributed to millions of malaria deaths and didn’t do much for the environment, oops” in the space of a day. I notice many far-mode moral projects are like leveraged investing; when they succeed they succeed big, when they fail they fail big. Because the philanthropists are far away without much skin in the game, they aren’t incentivized to be as careful as people would be on a more local level.
Gurri also likes to bring up, and I agree, the great internet outrage machine we see blundering around these days, set in motion by a fiery moral narrative rather impoverished in facts. Frankly, the outrage machine is Bad even when it chooses a good cause. Its members really need to think about their marginal impact and about their state of knowledge.
One last thing: obviously, your philosophical approach to ethics may differ from mine, but I do think that to the extent we allow far-mode moralizing to significantly displace our more personal social pursuits, we are just being bad parents/siblings/neighbours. I don’t care remotely as much about any Darfuri as I do about my kid, and I think that is right and proper. (It still may be worth trying to help them if I have fulfilled my more personal duties, and I am sure I know how, and I’m sure my marginal contribution is significant, and I’m sure it won’t all backfire.)
> I don’t care remotely as much about any Darfuri as I do about my kid, and I think that is right and proper. (It still may be worth trying to help them if I have fulfilled my more personal duties, and I am sure I know how, and I’m sure my marginal contribution is significant, and I’m sure it won’t all backfire.)
I’d argue, in fact, that the second sentence is a big part of the correct justification for the first sentence.
Yes, there is a certain wisdom of decentralization embodied in people looking after their own, such that an impartial Utilitarian might want to think twice before changing that.
I don’t really think I owe anybody a justification of my partiality (at least within reason), but I can see the appeal of the idea.
It seems uncontroversial to me to say that, ceteris paribus, humanitarian efforts are best directed towards situations where one has direct knowledge and ability to act. Helping a child in Darfur is much more difficult than helping one’s own child.
Say you’re on a shooting range and there are two targets, one 50 feet away and the other 500 feet away. Both are worth the same number of points. The correct course of action is obvious.
Hello, nice post.