This review elicited a great deal of argument as to whether it was fair and/or accurate, which for all our sakes I will not rehash here. I stand by the thrust of the review, but I did make one significant mistake to which I have added a bracketed [correction]. I apologize for the error.
Phil Sandifer has written a book about Eliezer Yudkowsky, Mencius Moldbug/Curtis Yarvin, and Nick Land. See its Kickstarter page for an overview, though I would advise against giving him any more money. (Sandifer sent me a preprint copy for this review.)
I will begin by noting that Sandifer is an English major and a Marxist, and Neoreaction A Basilisk defies neither stereotype. It is meandering, disorganized, and frequently mean-spirited (“Yes, it’s clear that Yudkowsky is, at times, one of the most singularly punchable people in the entire history of the species”). About half the book consists of long digressions about Milton, Blake, Hannibal, China Miéville, The Matrix, and Deleuze.
What is new here is not interesting, and what is interesting is not new. I do not recommend the book.
With that out of the way, my primary interest here is the titular basilisk.
Fortunately, I don’t need to say very much; Sandifer does not understand the decision theory involved and his discussion of information hazards never strays beyond the literary.
For one, Sandifer’s explanation of timeless decision theory suggests it relies on “intense contortions of the many-worlds interpretation of quantum mechanics”, which it does not. (It doesn’t rely on physics at all; it’s math.) Yudkowsky is a vocal proponent of many-worlds, and timeless decision theory has applications in a quantum multiverse (or any other kind of multiverse), so perhaps this confusion is understandable. [The passage in question concerned Roko’s Basilisk, not TDT. Like TDT, upon which it relies, Roko’s Basilisk does not require any particular theory of quantum mechanics.]
Similarly, Sandifer introduces Newcomb’s Problem as “a thought-experiment version of the Prisoner’s Dilemma”, which is again not quite right. Sandifer goes on to dissolve Newcomb’s problem:
The obvious solution is to declare that magical beings that can perfectly predict human behavior are inherently silly ideas, but since Yudkowksy wants to be reincarnated as a perfect simulation by a futuristic artificial intelligence he doesn’t think that. Instead he sees Newcomb’s Problem as a very important issue and creates an entire new model for decision theory whose only real virtue compared to any other is that it only has one correct answer to Newcomb’s Problem.
The rest of Sandifer’s discussion of decision theory continues in this vein, never rising above psychoanalysis and tired religious analogies.
I’ll stop here; this is already more discussion than the book deserves.
5 thoughts on “There Is No Basilisk In “Neoreaction A Basilisk””
Um. That quote about the many-worlds hypothesis is on page 39 of the manuscript I sent you, in a discussion of Roko’s Basilisk (which was explicitly framed in quantum terms). The first mention of Timeless Decision Theory is nearly seventy pages later. To say that it’s from my “explanation of timeless decision theory” is flatly untrue.
I responded to this on my personal tumblr. http://theungrumpablegrinch.tumblr.com/post/144019835594/there-is-no-basilisk-in-neoreaction-a-basilisk
Why do you think Newcomb’s paradox is clearly not an acausal trade? Can’t it be seen as an acausal trade in which I agree not to take the $1000 if Omega agrees to put a million dollars in the other box? And Sandifer is at least correct in the second of the two quotes you give on the tumblr, the one saying that one of the primary motivations for developing timeless decision theory was to deal with Newcomb’s paradox (see http://intelligence.org/files/TDT.pdf). So if you disagree with Sandifer, does that mean you’re disagreeing with the first quote, the one saying that “the point of TDT is that it enables acausal trade”? What would you say is the central “point” of TDT if it has nothing specifically to do with acausal trade?
Finally, the point you make on the tumblr that “Acausal trade does not require a quantum multiverse, any of Tegmark’s other multiverse levels are sufficient” seems like trying to get him on a technicality: if it requires a multiverse with parallel versions of each person, is it really so important whether these versions are in quantum superposition or in different locations in the multiverse of eternal inflation?
This is a good comment!
I’d argue that Newcomb’s isn’t acausal trade because Omega/the predictor as typically described lacks a stake in the outcome. If Omega is an agent which has preferences about my choice, then it wins by default, being a perfect predictor. Only moderately confident this stance is correct.
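For anyone unfamiliar with why the predictor matters so much here, the payoff structure can be sketched numerically. This is a minimal illustration using the standard formulation of Newcomb’s Problem ($1,000 in the transparent box, $1,000,000 in the opaque box if the predictor foresaw one-boxing); the accuracy parameter is my own addition, not anything from the post or the book:

```python
def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars, given a predictor that guesses your choice
    with probability `accuracy` (standard Newcomb payoffs assumed)."""
    if one_box:
        # The opaque box holds $1,000,000 iff the predictor
        # correctly foresaw one-boxing.
        return accuracy * 1_000_000
    # Two-boxing always collects the transparent $1,000; the opaque box
    # is filled only if the predictor *mistakenly* expected one-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

# A perfect predictor makes one-boxing worth $1,000,000 and
# two-boxing only $1,000:
assert expected_payoff(True, 1.0) == 1_000_000
assert expected_payoff(False, 1.0) == 1_000

# One-boxing still wins for any accuracy above ~50.05%:
assert expected_payoff(True, 0.6) > expected_payoff(False, 0.6)
```

The tension, of course, is that causal decision theory notes the boxes are already filled when you choose, so two-boxing causally dominates; the expected-value calculation above is the one-boxer’s side of the argument, not a resolution of it.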
Yes, TDT was developed primarily in response to Newcomb’s problem, and one of its significant features is that it can permit acausal trade. I was merely making the somewhat pedantic point that “TDT was developed just to solve X” and “TDT was developed just to solve Y” can’t both be true.
Unless X=Y, as you’ve argued.
As to the final point, I was wrong: acausal trade doesn’t require a multiverse at all. It may be the only way to trade between universes, but nothing prevents it from occurring within a single universe as well.