An illustration of a sequence of events where rogue replicating agents emerge and cause harm.| metr.org
A Proactive AI Policy Agenda| www.hyperdimensional.co
How speculation gets laundered through pseudo-quantification| www.aisnakeoil.com
Coherent Extrapolated Volition is a term coined by Eliezer Yudkowsky in discussions of Friendly AI development. It argues that it would not be sufficient to explicitly program what we think our desires and motivations are into an AI; instead, we should find a way to program it so that it acts in our best interests – doing what we want it to do rather than what we tell it to. Related: Friendly AI, Metaethics Sequence, Complexity of Value > In calculating CEV, an AI woul...| www.lesswrong.com