Maybe AI will kill you before you finish reading this section. The extreme scenarios typically considered by the AI safety movement are possible in principle, but unfortunately no one has any idea how to prevent them. This book discusses moderate catastrophes instead, offering pragmatic approaches to avoiding or diminishing them.
Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.
It’s a mistake to think that human-like agency is the only dangerous kind. That risks overlooking AIs causing agent-like harms in inhuman ways.
Current AI systems are already harmful, and may cause near-term catastrophes through their ability to shatter societies, cultures, and individual psychologies. That might conceivably cause human extinction, but it is more likely to escalate to the scale of the twentieth century’s dictatorships, genocides, and world wars. We would be wise to anticipate possible harms in as much detail as possible.