In an upcoming book co-authored with Nate Soares, Eliezer Yudkowsky doesn’t pull any punches, promising in the title that if anyone builds superhuman AI, ‘everyone will die.’ But it’s more likely that, rather than everyone dying, this book will end up in the dustbin of history. Like most non-fiction books, its putative objective is a call to action and to raise awareness, in this case about AI risk. However, two things: the public and policy elites are already abundant...