This review elicited a great deal of argument as to whether it was fair and/or accurate, which for all our sakes I will not rehash here. I stand by the thrust of the review, but I did make one significant mistake to which I have added a bracketed [correction]. I apologize for the error.
Phil Sandifer has written a book about Eliezer Yudkowsky, Mencius Moldbug/Curtis Yarvin, and Nick Land. See its Kickstarter page for an overview, though I would advise against giving him any more money. (Sandifer sent me a preprint copy for this review.)
I will begin by noting that Sandifer is an English major and a Marxist, and Neoreaction A Basilisk defies neither stereotype. It is meandering, disorganized, and frequently mean-spirited (“Yes, it’s clear that Yudkowsky is, at times, one of the most singularly punchable people in the entire history of the species”). About half the book consists of long digressions on Milton, Blake, Hannibal, China Miéville, The Matrix, and Deleuze.
What is new here is not interesting, and what is interesting is not new. I do not recommend the book.
With that out of the way, my primary interest here is the titular basilisk.
Fortunately, I don’t need to say very much; Sandifer does not understand the decision theory involved and his discussion of information hazards never strays beyond the literary.
For one, Sandifer’s explanation of timeless decision theory suggests it relies on “intense contortions of the many-worlds interpretation of quantum mechanics”, which it does not. (It doesn’t rely on physics at all; it’s math.) Yudkowsky is a vocal proponent of many-worlds, and timeless decision theory has applications in a quantum multiverse (or any other kind of multiverse), so perhaps this confusion is understandable. [The passage in question concerned Roko’s Basilisk, not TDT. Like TDT, upon which it relies, Roko’s Basilisk does not require any particular theory of quantum mechanics.]
Similarly, Sandifer introduces Newcomb’s Problem as “a thought-experiment version of the Prisoner’s Dilemma”, which is again not quite right. Sandifer goes on to dissolve Newcomb’s Problem:
The obvious solution is to declare that magical beings that can perfectly predict human behavior are inherently silly ideas, but since Yudkowksy [sic] wants to be reincarnated as a perfect simulation by a futuristic artificial intelligence he doesn’t think that. Instead he sees Newcomb’s Problem as a very important issue and creates an entire new model for decision theory whose only real virtue compared to any other is that it only has one correct answer to Newcomb’s Problem.
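For readers who haven’t encountered the problem, the structure is easy to state, and it shows why this is not simply the Prisoner’s Dilemma in costume: there is one agent, not two, and the disagreement is between decision theories, not players. A minimal sketch of the standard payoffs (the usual $1,000 and $1,000,000 figures; the function name is my own, not anything from the book):

```python
def payoff(choice: str, prediction: str) -> int:
    """Newcomb's Problem: opaque Box B holds $1,000,000 iff the predictor
    foresaw one-boxing; transparent Box A always holds $1,000.
    A one-boxer takes only B; a two-boxer takes both."""
    box_a = 1_000
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_b if choice == "one-box" else box_a + box_b

# With a perfect predictor, the prediction always matches the choice:
print(payoff("one-box", "one-box"))   # 1,000,000
print(payoff("two-box", "two-box"))   # 1,000
```

The puzzle is that causal decision theory reasons “the boxes are already filled, so taking both dominates” and two-boxes, ending up with $1,000; a theory that conditions on what its own choice implies about the prediction one-boxes and ends up with $1,000,000. That divergence, not any quantum mechanics, is what timeless decision theory is built to address.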
The rest of Sandifer’s discussion of decision theory continues in this vein, never rising above psychoanalysis and tired religious analogies.
I’ll stop here; this is already more discussion than the book deserves.