This article contains two sections. (1) Backup plans for misaligned AI: If we can't build aligned AI, and if we fail to coordinate well enough to avoid putting misaligned AI systems in positions of power, we might still have strong preferences about the dispositions of those misaligned systems. This section is about nudging those systems toward somewhat better dispositions (in worlds where we can't align AI systems well enough to stay in control). A favorite direction is to study generalization & AI...